Brain-inspired Multimodal Deep-Learning

  • Date: Jan 19, 2024
  • Time: 11:00 AM - 12:30 PM (German local time)
  • Speaker: Dr. Rufin VanRullen
  • Affiliation: Centre de Recherche Cerveau et Cognition (CerCo), CNRS, Toulouse, France (https://rufinv.github.io/)
  • Location: Max-Planck-Ring 8
  • Room: Room 203 + Zoom

Abstract: Artificial neural networks (ANNs) were designed with loose inspiration from biology. Large-scale ANNs (so-called "deep learning" systems) have recently taken the spotlight, reaching (or surpassing) human-level performance in numerous tasks. I will give two examples of how biological inspiration can continue to improve these systems. First, the inclusion of feedback loops based on "predictive coding" principles in deep convolutional networks can improve their performance and robustness for visual classification, and make them susceptible to perceptual illusions just like human observers. Second, multimodal systems following the "global workspace" architecture of the brain can display perceptual and semantic grounding abilities, while being trained with much less supervision. I will finish by proposing future extensions of these architectures that could allow them to flexibly adapt to diverse cognitive situations, e.g. "system-2 AI".
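The abstract mentions adding "predictive coding" feedback loops to deep convolutional networks. As a purely illustrative aid, the toy sketch below shows one common way such a loop can be wired in PyTorch: a decoder predicts the input from the current representation, and the prediction error is fed back to update that representation over a few iterations. This is not the speaker's model; the class name, layer sizes, number of feedback steps, and the error-driven update rule are all assumptions chosen for brevity.

```python
# Illustrative sketch only: a toy convolutional classifier with a
# predictive-coding-style feedback loop. Hyperparameters and the update
# rule are arbitrary choices for this example, not the speaker's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyPredictiveCodingNet(nn.Module):
    def __init__(self, num_classes: int = 10, steps: int = 3):
        super().__init__()
        self.steps = steps                                      # feedback iterations
        self.encode = nn.Conv2d(3, 16, 3, padding=1)            # feed-forward pathway
        self.decode = nn.ConvTranspose2d(16, 3, 3, padding=1)   # feedback: predicts the input
        self.head = nn.Linear(16, num_classes)                  # linear readout

    def forward(self, x):
        h = F.relu(self.encode(x))
        for _ in range(self.steps):
            x_hat = self.decode(h)                 # top-down prediction of the input
            error = x - x_hat                      # prediction error
            h = F.relu(h + self.encode(error))     # error signal refines the representation
        return self.head(h.mean(dim=(2, 3)))       # global average pooling + classification


if __name__ == "__main__":
    model = ToyPredictiveCodingNet()
    logits = model(torch.randn(2, 3, 32, 32))      # two random RGB images
    print(logits.shape)                            # torch.Size([2, 10])
```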

Bio: Following initial training in Maths and Computer Science, Rufin VanRullen obtained a PhD in Cognitive Science from Toulouse (France) under the supervision of S. Thorpe. After a postdoc with C. Koch at Caltech, he joined the CerCo (Toulouse, France) in 2001 as a CNRS Research Scientist, and became a CNRS Research Director in 2010. He was a visiting scientist at the Harvard Vision Lab from 2005 to 2007, with P. Cavanagh. He has held an AI Research Chair at the Artificial and Natural Intelligence Toulouse Institute (ANITI) since 2019. He has published more than 130 scientific journal articles, including influential papers on neural coding, object recognition, feed-forward vs. feedback processes, and attention.
