
Open position

May 2024: The following position is available at our institute:
PhD Position (m/f/d) in Machine Learning, Neuroinformatics and Algorithm Design

Further information

Article in Nature Communications 15, Article no. 2344 (2024)

March 2024: Christoph Stöckl, Yukun Yang, Wolfgang Maass
Local Prediction-learning in High-dimensional Spaces Enables Neural Networks to Plan

Link to article

Paper accepted at IEEE Transactions on Neural Networks and Learning Systems

December 2023: Thomas Limbacher, Ozan Özdenizci, Robert Legenstein
Memory-enriched Computation and Learning in Spiking Neural Networks through Hebbian Plasticity

arXiv link to accepted paper.

Paper accepted for NeurIPS 2023

October 2023: Lorenzo Loconte, Nicola Di Mauro, Robert Peharz, Antonio Vergari
How to Turn Your Knowledge Graph Embeddings into Generative Models

arXiv link to accepted paper.

Paper accepted for AAAI 2023

October 2023: Alvaro H.C. Correia, Gennaro Gala, Erik Quaeghebeur, Cassio de Campos, Robert Peharz
Continuous Mixtures of Tractable Probabilistic Models

arXiv link to accepted paper.

EIC Pathfinder NEO project started

October 2023: EIC Pathfinder project started:
NEO, Next Generation Molecular Data Storage based on DNA Origamis

link to EU project details


 

EIC Pathfinder VanillaFlow project started

September 2023: EIC Pathfinder project started:
VanillaFlow, Artificial Intelligence-guided Development of Flow Battery Technology

link to EU project details

 

Lecture series on Probabilistic Circuits

July 2023: Robert Peharz and Antonio Vergari gave a lecture series on Probabilistic Circuits at the European Summer School on Artificial Intelligence in Ljubljana, 24-28 July 2023.

You can watch the whole course here: link to video

 

Paper accepted for AISTATS 2023

February 2023: Yang Yang, Gennaro Gala, Robert Peharz
Bayesian Structure Scores for Probabilistic Circuits

arXiv link to accepted paper

 

NeurIPS 2022 - Tutorial and Workshop

Paper accepted for AAAI 2023

December 2022:  "Probabilistic models based on continuous latent spaces, such as variational autoencoders, can be understood as uncountable mixture models where components depend continuously on the latent code."

Alvaro H.C. Correia, Gennaro Gala, Erik Quaeghebeur, Cassio de Campos, Robert Peharz
Continuous Mixtures of Tractable Probabilistic Models

arXiv link to accepted paper

Paper accepted for NeurIPS 2022

November 2022:  "Causal models are powerful reasoning tools, but usually we don't know which model is the correct one. Traditionally, one first aims to find the correct causal model from data, which is then used for causal reasoning", states Robert Peharz.

Christian Toth, Lars Lorch, Christian Knoll, Andreas Krause, Franz Pernkopf, Robert Peharz, Julius von Kügelgen - Active Bayesian Causal Inference

link to NeurIPS paper
more detailed information on Twitter

Kleine Zeitung Special Edition "Die Kraft der Region"

November 2022: "Machine learning will change the world like the internet and before that computers did", predicts Robert Legenstein.

learn more about this topic
Kleine Zeitung - about this Special Edition

Paper accepted at CVPR

March 2022: Stealthy bit-flip attacks threaten deep neural network applications. Our novel defence makes the attacker's life much harder. Developed in the Dependable Systems Lab of Silicon Austria Labs.

Ozan Özdenizci and Robert Legenstein
Improving robustness against stealthy weight bit-flip attacks by output code matching. In
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2022. (Link to PDF)

See the featured interview at CVPR Daily.

Paper accepted at ICML

June 2021: Want to train a sparsely connected neural network that is robust to adversarial attacks? Use our novel method developed in the Dependable Systems Lab of Silicon Austria Labs.

Ozan Özdenizci and Robert Legenstein.
Training adversarially robust sparse networks via Bayesian connectivity sampling. In
Proceedings of the 38th International Conference on Machine Learning, Marina Meila and Tong Zhang, editors, volume 139 of
Proceedings of Machine Learning Research, pages 8314-8324. PMLR, 18-24 Jul 2021. (Link to PDF)

Congratulations Ozan.

New research article featured in the news

June 2021: How to control a robotic elephant trunk with a Spiking Neural Network

Manuel Traub, Robert Legenstein, and Sebastian Otte.
Many-joint robot arm control with recurrent spiking neural networks. In
2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021. Accepted for publication. (Link to ArXiv PDF)

See the news features:

Article featured in Nature Communications Editors' Highlights

Our article on biologically plausible backprop through time is featured in the Nature Communications Editors' Highlights:

https://www.nature.com/collections/ceiajcdbeb

FET-Open project ADOPD started in October

Oct 2020: The EU FET-Open project ADOPD (Adaptive Optical Dendrites) has started. See Projects.

In media (German):
https://www.derstandard.at/story/2000120886799/neue-zukunftstechnologien-tu-graz-startet-drei-visionaere-projekte

Paper accepted for NeurIPS 2020

Thomas Limbacher and Robert Legenstein.
H-mem: Harnessing synaptic plasticity with Hebbian memory networks

Accepted for NeurIPS 2020.
bioRxiv, 2020. (Link to PDF)

New article in Nature Communications

Aug 2020: The IGI team has published a new article in Nature Communications: A solution to the learning dilemma for recurrent networks of spiking neurons

read more...

New research from the institute

June 2020: How do neurons in the brain interact to create the mind? IGI researchers tackled this question in a series of papers in PNAS, Cerebral Cortex, and eNeuro. Read more...
 

News & Views Article on our research

May 2020: Nature Machine Intelligence published this News & Views article on our research: An alternative to backpropagation through time.

SMALL Project has started

Mar 2020: The EU FET-Proactive project SMALL (Spiking Memristive Architectures for Learning to Learn) has started. See Projects and the Webpage.

Wolfgang Maass on Braininspired.co - Podcast

Feb 2020: Don't miss these interviews with Wolfgang Maass on computational neuroscience, brain-inspired computation, and more at Braininspired.co.

Cooperation between TU Graz and Silicon Austria Labs has started

Jan. 2020: On 10 January, a cooperation between TU Graz and Silicon Austria Labs was launched with a press conference and the official signing of a contract that establishes two joint research labs. The goal of the cooperation is to foster basic research for future microelectronics. The Institute of Theoretical Computer Science is part of the Dependable Embedded Systems Lab, where research will focus on more reliable AI systems. Press release

New paper

Jan. 2019: Our new paper discusses how powerful learning algorithms could be implemented in biological neuronal networks. Bellec, Scherr, et al., Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets. arXiv, 2019.

Cooperation with Intel

Dec. 2018: The IGI cooperates with Intel on the design of spiking neural networks for its neuromorphic chip Loihi. Article at top500...

ELLIS Society launched

Dec. 2018: Prof. Robert Legenstein and Prof. Wolfgang Maass attended the founding ceremony of the ELLIS society. ELLIS is an initiative of European scientists to establish a European Lab for Learning & Intelligent Systems (ELLIS open letter). more ...