The goal of the Cognitive Neuroimaging Group is to better understand the neural systems that allow us to acquire, represent and retrieve knowledge about our multi-sensory environment.
We address this central research question from three complementary perspectives: (1) Multi-sensory Integration, (2) Language and (3) Concept Learning. To characterize the underlying neural and computational mechanisms, we combine psychophysics, functional imaging (fMRI & EEG/MEG) and models of Bayesian inference and learning. Using measures of functional and effective connectivity (e.g. Dynamic Causal Modelling), we investigate whether these higher cognitive functions emerge from changes in interactions amongst brain areas.
To interact effectively with our environment, the human brain integrates information from multiple senses into a coherent and more reliable percept. Combining fMRI and EEG/MEG, we investigate where, when and how the human brain integrates different types of sensory information at multiple levels of the cortical hierarchy. For this, we primarily employ complex naturalistic stimuli such as audio-visual speech, actions or objects. Our research focuses on the following questions:
- Where and how are different types of sensory features combined at multiple levels of the cortical hierarchy?
- What is the functional relevance of suppressive and superadditive modes of multi-sensory integration?
- How is the relative timing (e.g. synchrony) of multiple inputs coded across sensory modalities?
- How can attention modulate multi-sensory interactions?
- Which factors determine inter-trial and inter-subject variability in multi-sensory integration?
- How does multi-sensory integration dynamically adapt to the statistics of sensory inputs?
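The Bayesian-inference models mentioned above are often formalized as reliability-weighted cue combination, in which each sensory estimate is weighted by its inverse variance. The following is a minimal illustrative sketch of that textbook forced-fusion case, not the group's actual model; all names and numbers are made up for illustration:

```python
def fuse_cues(mu_a, var_a, mu_v, var_v):
    """Reliability-weighted fusion of an auditory and a visual estimate
    under a Gaussian cue-combination (forced-fusion) model.

    Each cue is weighted by its reliability (inverse variance); the fused
    estimate is more reliable (lower variance) than either cue alone.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # auditory weight
    mu = w_a * mu_a + (1 - w_a) * mu_v            # fused location estimate
    var = 1 / (1 / var_a + 1 / var_v)             # fused variance
    return mu, var

# Example: the visual cue is four times more reliable, so the fused
# estimate lies much closer to the visual location.
mu, var = fuse_cues(mu_a=10.0, var_a=4.0, mu_v=0.0, var_v=1.0)
# mu == 2.0, var == 0.8 (smaller than either single-cue variance)
```

Note that the fused variance is always smaller than the smallest single-cue variance, which is one computational account of why integrated percepts are "more reliable".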
The tripartite architecture of language encompasses phonology, semantics and syntax. One of our general aims is to understand how the language system emerges from and influences non-verbal audio-visual processing. Semantics may form a natural interface between verbal and non-verbal processing. Semantic concepts are a prerequisite for thought and language. Object concepts can be characterized by various sensory features (e.g. auditory, visual) and referred to by an arbitrary linguistic label. In a series of studies, we have sought to identify the organizational principles of semantic memory. Together with other groups, our studies suggest that deep semantic processing may involve not only a fronto-temporal core semantic system, but also sensory-motor regions. For instance, processing of tool and action concepts has been shown to rely on re-activation of action representations in the visuo-motor system via top-down modulation.
Throughout life, humans are able to form, learn and adjust concepts in response to experience and task demands. The group has started to investigate how the human brain learns novel categories and their linguistic labels via structured input. Humans have the remarkable ability to infer approximate extensions of concepts from only a few positive examples. These abilities of inductive generalization and categorization enable them to predict unobserved physical features or behaviour from a few observed data points. Currently, we are investigating whether prior unsupervised training that allows subjects to estimate the underlying stimulus distributions can modulate the difficulty of subsequent supervised category learning.
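The kind of inductive generalization described here can be illustrated with a toy one-dimensional model in which a learner infers a category's approximate extension from a few positive examples, then judges whether new items fall inside it. This is a deliberately simplified sketch under a Gaussian-category assumption; the function names and the interval width are illustrative, not taken from the group's experiments:

```python
import statistics

def category_interval(examples, width=2.0):
    """Infer an approximate extension for a 1-D category from a few
    positive examples: sample mean +/- width * sample standard deviation."""
    mu = statistics.mean(examples)
    sd = statistics.stdev(examples)
    return mu - width * sd, mu + width * sd

def generalizes(x, examples, width=2.0):
    """Judge whether a novel item x falls within the inferred extension."""
    lo, hi = category_interval(examples, width)
    return lo <= x <= hi

# Three observed exemplars of a novel category along some stimulus dimension
seen = [4.8, 5.0, 5.2]
generalizes(5.1, seen)  # a nearby item falls inside the inferred extension
generalizes(9.0, seen)  # a distant item falls outside it
```

Unsupervised exposure to the stimulus distribution, as in the study described above, would correspond here to a better estimate of the mean and spread before any category labels are provided.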