Bibliography export created by the TYPO3 extension sevenpack, version 0.7.14 (timezone CEST; created 2017-05-23, 10:48:19). Number of references: 22.

article LeitaoTWPN2012
J. Leitão, A. Thielscher, S. Werner, R. Pohmann, U. Noppeney (2013). Effects of Parietal TMS on Visual and Auditory Processing at the Primary Cortical Level: A Concurrent TMS-fMRI Study. Cerebral Cortex 23(4): 873-884. DOI: 10.1093/cercor/bhs078. Full text: http://cercor.oxfordjournals.org/content/23/4/873.full.pdf+html
Abstract: Accumulating evidence suggests that multisensory interactions emerge already at the primary cortical level. Specifically, auditory inputs were shown to suppress activations in visual cortices when presented alone but to amplify the blood oxygen level-dependent (BOLD) responses to concurrent visual inputs (and vice versa). This concurrent transcranial magnetic stimulation-functional magnetic resonance imaging (TMS-fMRI) study applied repetitive TMS trains at no, low, and high intensity over the right intraparietal sulcus (IPS) and the vertex to investigate top-down influences on visual and auditory cortices under three sensory contexts: visual, auditory, and no stimulation. IPS-TMS increased activations in auditory cortices irrespective of sensory context, as a result of direct and nonspecific auditory TMS side effects. In contrast, IPS-TMS modulated activations in the visual cortex in a state-dependent fashion: it deactivated the visual cortex under no and auditory stimulation but amplified the BOLD response to visual stimulation. However, only the response amplification to visual stimulation was selective for IPS-TMS; the deactivations observed for IPS- and vertex-TMS resulted from crossmodal deactivations induced by auditory activity to the TMS sounds. TMS to IPS may increase the responses in visual (or auditory) cortices to visual (or auditory) stimulation via a gain-control mechanism or crossmodal interactions. Collectively, our results demonstrate that understanding TMS effects on (uni)sensory processing requires a multisensory perspective.

article 6741
S. Werner, U. Noppeney (2011). The Contributions of Transient and Sustained Response Codes to Audiovisual Integration. Cerebral Cortex 21(4): 920-931. DOI: 10.1093/cercor/bhq161. Full text: http://cercor.oxfordjournals.org/content/21/4/920.full.pdf+html
Abstract: Multisensory events in our natural environment unfold at multiple temporal scales over extended periods of time. This functional magnetic resonance imaging study investigated whether the brain uses transient (onset, offset) or sustained temporal codes to effectively integrate incoming visual and auditory signals within the cortical hierarchy. Subjects were presented with (1) velocity-modulated radial motion, (2) amplitude-modulated sound, or (3) an in-phase combination of both, in blocks of variable duration, to dissociate transient and sustained blood oxygen level-dependent responses. Audiovisual interactions emerged primarily for transient onset and offset responses, highlighting the importance of rapid stimulus transitions for multisensory integration. Strikingly, audiovisual interactions for onset and offset transients were dissociable at the functional and anatomical level. Low-level sensory areas integrated audiovisual inputs at stimulus onset in a superadditive fashion to enhance stimulus salience. In contrast, higher-order association areas showed subadditive integration profiles at stimulus offset, possibly reflecting the formation of higher-order representations. In conclusion, multisensory integration emerges at multiple levels of the cortical hierarchy, using different temporal codes and integration profiles. From a methodological perspective, these results highlight the limitations of conventional event-related or block designs, which cannot characterize these rich dynamics of audiovisual integration.
article 6117
S. Werner, U. Noppeney (2010). Superadditive Responses in Superior Temporal Sulcus Predict Audiovisual Benefits in Object Categorization. Cerebral Cortex 20(8): 1829-1842. DOI: 10.1093/cercor/bhp248. URL: http://cercor.oxfordjournals.org/cgi/reprint/bhp248v1 PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Werner2009_SuperadditiveResponsesInSTSpredictAudiovisualBenefits_6117[0].pdf
Abstract: Merging information from multiple senses provides a more reliable percept of our environment. Yet little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities manipulated factorially. Controlling for low-level integration processes, we show higher-level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive, and even suppressive, for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS, similar to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration, with superadditive interactions mediating successful audiovisual object categorization.
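Note: several entries in this list rest on the same super-/subadditivity criterion for BOLD interactions. As a minimal illustrative sketch of that criterion (not code from any of the cited studies; the response values below are invented):

```python
# Super-/subadditivity criterion for BOLD interactions (illustrative sketch;
# the condition values are invented, not data from the cited study).
def av_interaction(av: float, a: float, v: float) -> float:
    """AV - (A + V): positive = superadditive, negative = subadditive."""
    return av - (a + v)

print(av_interaction(av=1.0, a=0.6, v=0.7))  # -0.3 -> subadditive (e.g., intact stimuli)
print(av_interaction(av=1.6, a=0.6, v=0.7))  # +0.3 -> superadditive (e.g., degraded stimuli)
```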
article 6593
U. Noppeney, D. Ostwald, S. Werner (2010). Perceptual Decisions Formed by Accumulation of Audiovisual Evidence in Prefrontal Cortex. Journal of Neuroscience 30(21): 7434-7446. DOI: 10.1523/JNEUROSCI.0455-10.2010. Full text: http://www.jneurosci.org/cgi/reprint/30/21/7434
Abstract: To form perceptual decisions in our multisensory environment, the brain needs to integrate sensory information derived from a common source and segregate information emanating from different sources. Combining fMRI and psychophysics in humans, we investigated how the brain accumulates sensory evidence about a visual source in the context of congruent or conflicting auditory information. In a visual selective attention paradigm, subjects (12 females, 7 males) categorized video clips while ignoring concurrent congruent or incongruent soundtracks. Visual and auditory information could each be reliable or unreliable. Our behavioral data accorded with accumulator models of perceptual decision making, in which sensory information is integrated over time until a criterion amount of information is obtained. Behaviorally, subjects exhibited audiovisual incongruency effects that increased with the variance of the visual and the reliability of the interfering auditory input. At the neural level, only the left inferior frontal sulcus (IFS) showed an "audiovisual-accumulator" profile consistent with the observed reaction-time pattern. By contrast, responses in the right fusiform were amplified by incongruent auditory input regardless of sensory reliability. Dynamic causal modeling showed that these incongruency effects were mediated via connections from auditory cortex. Further, while the fusiform interacted with the IFS in an excitatory recurrent loop that was strengthened for unreliable task-relevant visual input, the IFS did not amplify, and even inhibited, superior temporal activations for unreliable auditory input. To form decisions that guide behavioral responses, the IFS may accumulate audiovisual evidence by dynamically weighting its connectivity to auditory and visual regions according to sensory reliability and decisional relevance.
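Note: the accumulator account invoked above can be illustrated with a minimal simulation in which noisy momentary evidence is summed until it reaches a decision bound, with the step count standing in for reaction time. This is a generic sketch under assumed parameters, not the model reported in the paper.

```python
# Minimal evidence-accumulation sketch (illustrative only; drift, noise, and
# bound values are assumptions, not parameters from the cited study).
import random

def accumulate(drift: float, noise: float, bound: float, seed: int = 0) -> int:
    """Sum noisy evidence until |evidence| reaches the bound; return step count."""
    rng = random.Random(seed)
    evidence, steps = 0.0, 0
    while abs(evidence) < bound:
        evidence += drift + rng.gauss(0.0, noise)
        steps += 1
    return steps

# A congruent soundtrack would raise the effective drift and shorten decisions.
print(accumulate(drift=0.20, noise=1.0, bound=10.0))  # congruent (assumed)
print(accumulate(drift=0.05, noise=1.0, bound=10.0))  # incongruent (assumed)
```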
article 6238
S. Werner, U. Noppeney (2010). Distinct Functional Contributions of Primary Sensory and Association Areas to Audiovisual Integration in Object Categorization. Journal of Neuroscience 30(7): 2662-2675. DOI: 10.1523/JNEUROSCI.5091-09.2010. URL: http://www.jneurosci.org/cgi/reprint/30/7/2662 PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Werner2010_JoNsc_6238[0].pdf
Abstract: Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This fMRI study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 × 2 factorial designs that manipulated either (i) the presence/absence or (ii) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PAC), a concurrent visual input increased stimulus salience by amplifying the auditory response irrespective of task context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect (via STS) connectivity to visual cortices. (2) In the superior temporal (STS) and intraparietal (IPS) sulci, audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index of multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PAC, the formation of object representations in STS/IPS, and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS the profiles of audiovisual interactions were behaviorally relevant, predicting subjects' multisensory benefits in performance accuracy.

poster 6662
U. Noppeney, S. Werner, D. Ostwald, R. Lewis (2010). Effective Connectivity in Multisensory Integration: Insights from Functional Imaging in Humans. Poster 126 at the 11th International Multisensory Research Forum (IMRF 2010), Liverpool, UK. URL: http://imrf.mcmaster.ca/IMRF/ocs2/index.php/imrf/2010/paper/view/126
Abstract: Multisensory interactions emerge in a distributed neural system encompassing primary sensory and higher-order association areas. Multiple functional brain architectures have been proposed to mediate multisensory interactions in low-level auditory regions, including feedforward thalamocortical connections, direct connections between sensory areas, and feedback from higher-order association areas such as IPS or STS. We will review the potential and limitations of combining functional imaging and effective connectivity analyses for characterizing functional architectures of multisensory integration. In a series of three audiovisual integration studies, we combined dynamic causal modeling and Bayesian model comparison to arbitrate between neural models in which crossmodal effects are mediated via 'direct' V1-A1 connectivity, 'indirect' feedback connectivity from STS, or both mechanisms. The first study manipulated the presence/absence of auditory and visual inputs and demonstrated that low-level audiovisual salience effects are mediated via both direct and indirect mechanisms of audiovisual integration. The second study showed that audiovisual synchrony effects in low-level sensory areas are mediated primarily via direct connectivity. The third study demonstrated that semantic audiovisual (in)congruency effects in higher-order visual object areas are elicited by direct influences from auditory areas rather than top-down effects from prefrontal cortices. We conclude by critically reviewing interpretational ambiguities and pitfalls of dynamic causal modeling results based on fMRI data in humans.
poster 7076
J. Leitão, A. Thielscher, S. Werner, R. Pohmann, U. Noppeney (2010). Investigating the Effect of IPS TMS Stimulation on Auditory and Visual Processing: A TMS-fMRI Study. Poster MT-PM 109 at the 16th Annual Meeting of the Organization for Human Brain Mapping (HBM 2010), Barcelona, Spain. URL: http://www.humanbrainmapping.org/i4a/pages/index.cfm?pageid=1

poster 5934
S. Werner, U. Noppeney (2009). The Contributions of Transient and Sustained Responses to Audiovisual Integration of Dynamic Information. Poster 754 at the 10th International Multisensory Research Forum (IMRF 2009), New York, NY, USA, pp. 251-252. URL: http://imrf.mcmaster.ca/IMRF/ocs/index.php/meetings/2009/paper/view/754 PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/PosterIMRF2009_FINAL_[0].pdf
Abstract: Transient and sustained responses have been shown to play distinct functional roles in auditory processing: transient responses may subserve rapid stimulus detection, while sustained responses contribute to a more detailed sound characterization. While numerous fMRI studies have reported audiovisual interactions at multiple levels of the cortical hierarchy, they were not able to dissociate transient and sustained responses. This fMRI study optimized the design to disentangle the contributions of sustained, onset, and offset responses to superadditive and subadditive interactions and to localize the effects within the visual and auditory processing hierarchies. Seventeen subjects participated (Siemens TimTrio 3T scanner, GE-EPI, TE = 40 ms, 38 axial slices, TR = 3.08 s). While engaged in a target detection task, they were presented with 1 s, 10 s, 20 s, or 30 s blocks of (i) video clips of an expanding radial star-field, (ii) auditory pink noise, or (iii) both. The velocity of the star-field and the sound amplitude were jointly modulated according to a 0.1 Hz sine-wave function. The regressors of the general linear model were formed by convolving (i) delta functions encoding the onset and offset of each block and (ii) boxcar functions adjusted for block length with the hemodynamic response function. Blocks of 1 s duration were modeled only as onsets. In addition, the model included targets and parametric modulators encoding the amplitude/velocity modulation. To allow for a random-effects analysis (SPM5), contrast images for each subject were entered into second-level one-sample t-tests. We tested for superadditive and subadditive interactions separately for onset, offset, and sustained block responses. Results are reported at p < 0.05, whole-brain corrected. Significant audiovisual interactions were observed only for the transients: for the onsets, the interactions were superadditive in the fusiform gyrus (FFG), anterior calcarine sulcus (aCaS), and cuneus (Cun), and subadditive in the posterior superior temporal gyrus/sulcus (pSTS/STG) and precuneus (PrCun). For the offsets, the interactions were subadditive in the pSTS/STG region and the anterior intraparietal sulcus (aIPS). The regional response profiles were further characterized by their general responsiveness to visual, auditory, and audiovisual onsets, offsets, and sustained stimulation. This dissociated three activation profiles: (i) in FFG, only the onsets elicited a strong positive response, with moderate responses to offsets and sustained stimulation; further, the onset responses were positive for visual and audiovisual stimuli and negative for auditory stimuli; (ii) in aCaS, only the offsets elicited a positive response for all sensory modalities; (iii) in the remaining regions, both onsets and offsets elicited a positive response for all sensory modalities. In conclusion, audiovisual interactions are observed primarily for transient rather than sustained stimulation. Furthermore, these AV interactions are located in regions that respond primarily to transients. In contrast, no significant interactions were observed in regions that exhibited sustained responses to extended blocks of audiovisual stimulation.
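Note: the regressor construction described in this abstract (delta functions for onsets/offsets and boxcars for sustained responses, each convolved with the hemodynamic response function) can be sketched as follows. The sampling grid, block timings, and simplified gamma-shaped HRF are assumptions for illustration, not the study's SPM5 implementation.

```python
# Sketch of transient vs. sustained GLM regressors (illustrative; the grid,
# HRF shape, and block timings are assumptions, not the study's setup).
import numpy as np

dt = 0.1                                  # time resolution in seconds
t = np.arange(0, 30, dt)                  # 30 s window for a simplified HRF
hrf = (t / 5.0) ** 5 * np.exp(-t)         # crude gamma shape peaking near 5 s
hrf /= hrf.sum()

n = int(600 / dt)                         # one hypothetical 600 s run
onsets, durations = [30.0, 120.0], [10.0, 20.0]   # hypothetical blocks

onset_stick, offset_stick, boxcar = np.zeros(n), np.zeros(n), np.zeros(n)
for on, dur in zip(onsets, durations):
    onset_stick[int(on / dt)] = 1.0                  # transient onset
    offset_stick[int((on + dur) / dt)] = 1.0         # transient offset
    boxcar[int(on / dt):int((on + dur) / dt)] = 1.0  # sustained response

# Convolve each regressor with the HRF to obtain design-matrix columns.
X = np.column_stack([np.convolve(r, hrf)[:n]
                     for r in (onset_stick, offset_stick, boxcar)])
print(X.shape)  # (6000, 3): onset, offset, and sustained columns
```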
poster 5263
S. Werner, U. Noppeney (2008). Audio-Visual Object Integration in Human STS: Determinants of Stimulus Efficacy and Inverse Effectiveness. Poster 275 at the 9th International Multisensory Research Forum (IMRF 2008), Hamburg, Germany. URL: http://imrf.mcmaster.ca/IMRF/2008/pdf/FullProgramIMRF08.pdf PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/IMRF2008_Swerner_[0].pdf
Abstract: Combining fMRI and psychophysics, we investigated the neural mechanisms underlying the integration of higher-order audio-visual object features. In a target detection and a semantic categorization task, we presented subjects with pictures and sounds of tools or musical instruments while factorially manipulating the relative informativeness (degradation) of the auditory and visual stimuli. Controlling for integration effects of low-level stimulus features, our experiment reveals integration of higher-order audio-visual object information selectively in anterior and posterior STS regions. Across subjects, audio-visual BOLD interactions within these regions were strongly subadditive for intact stimuli and turned into additive effects for degraded stimuli. Across voxels, the probability of observing subadditivity increased with the strength of the unimodal BOLD responses for both degraded and intact stimuli. Importantly, subjects' multi-sensory behavioural benefit significantly predicted the mode of integration in STS: subjects with greater benefits exhibited stronger superadditivity. In conclusion, and in accordance with the inverse effectiveness principle, which is governed by stimulus efficacy, we demonstrate that the mode of multi-sensory integration in STS depends on stimulus informativeness, the voxel-specific responsiveness to unimodal stimulus components, and the subject-specific multi-sensory behavioural benefit in object perception. The relationship between BOLD responses and behavioural indices shows the functional relevance of super- and subadditive modes of multi-sensory integration.
poster 5267
H. L. Lee, J. Tuennerhoff, S. Werner, C. Pammi, U. Noppeney (2008). Physical and Perceptual Factors that Determine the Mode of Audio-Visual Integration in Distinct Areas of the Speech Processing System. Poster 208 at the 9th International Multisensory Research Forum (IMRF 2008), Hamburg, Germany, p. 133. URL: http://imrf.mcmaster.ca/IMRF/2008/pdf/FullProgramIMRF08.pdf
Abstract: Speech and non-speech stimuli differ in their (i) physical (spectro-temporal structure) and (ii) perceptual (phonetic/linguistic representation) aspects. To dissociate these two levels of audio-visual integration, this fMRI study employed original spoken sentences and their sinewave analogues, which were either trained and perceived as speech (group 1) or as non-speech (group 2). In both groups, all stimuli were presented in visual, auditory, or audiovisual modalities. AV integration areas were identified by superadditive and subadditive interactions in a random-effects analysis. While no superadditive interactions were observed, subadditive effects were found in the right superior temporal sulci for both speech and sinewave stimuli. The left ventral premotor cortex showed increased subadditive interactions for speech relative to the sinewave analogues, irrespective of whether they were perceived as speech or non-speech. More specifically, only the familiar auditory speech signal suppressed the premotor activation elicited by passive lipreading in the visual conditions, suggesting that acoustic rather than perceptual/linguistic features determine AV integration in the mirror neuron system. In contrast, AV integration modes differed between sinewave analogues perceived as speech and as non-speech in bilateral anterior STS areas previously implicated in speech comprehension. In conclusion, physical and perceptual factors determine the mode of AV integration in distinct speech processing areas.
poster 5265
U. Noppeney, D. Ostwald, M. Kleiner, S. Werner (2008). The Prefrontal Cortex Accumulates Object Evidence through Differential Connectivity to the Visual and Auditory Cortices. NeuroImage 41(Supplement 1): S150. Poster at the 14th Annual Meeting of the Organization for Human Brain Mapping (HBM 2008), Melbourne, Australia. DOI: 10.1016/j.neuroimage.2008.04.008. URL: http://www.sciencedirect.com/science/article/pii/S1053811908003133

poster 4566
U. Noppeney, D. Ostwald, M. Kleiner, S. Werner (2007). Accumulation of Object Evidence from Multiple Senses. NeuroImage 36(Supplement 1): S109. Poster at the 13th Annual Meeting of the Organization for Human Brain Mapping (HBM 2007), Chicago, IL, USA. DOI: 10.1016/j.neuroimage.2007.03.045. URL: http://www.sciencedirect.com/science/article/pii/S1053811907002789 PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Poster_HBM2007_noppeney_[0].pdf

poster 4565
S. Werner, U. Noppeney (2007). Multi-Sensory Interactions in Perceptual and Response Selection Processes. NeuroImage 36(Supplement 1): S120. Poster at the 13th Annual Meeting of the Organization for Human Brain Mapping (HBM 2007), Chicago, IL, USA. DOI: 10.1016/j.neuroimage.2007.03.045. URL: http://www.sciencedirect.com/science/article/pii/S1053811907002789 PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Poster_HBM2007_werner_[0].pdf

poster 4342
S. Werner, U. Noppeney (2006). Audio-Visual Integration during Multisensory Object Categorization. Poster 26 at the 7th International Multisensory Research Forum (IMRF 2006), Dublin, Ireland. URL: http://imrf.mcmaster.ca/IMRF/2006/viewabstract.php?id=124
Abstract: Tools and musical instruments are characterized by both their form and their sound. We investigated audio-visual integration during semantic categorization by presenting pictures and sounds of objects separately or together while manipulating the degree of information content. The 3 × 6 factorial design manipulated (1) auditory information (sound, noise, silence) and (2) visual information (six levels of image degradation). The visual information was degraded by manipulating the amount of phase scrambling of the image (0%, 20%, 40%, 60%, 80%, 100%). Subjects categorized stimuli as musical instruments or tools. In terms of accuracy and reaction times (RT), we found significant main effects of (1) visual and (2) auditory information and (3) an interaction between the two factors. The interaction was primarily due to an increased facilitatory effect of sound at the 80% degradation level. Consistently across the first five levels of visual degradation, we observed RT improvements for the sound-visual relative to the noise-visual or silence-visual conditions. The corresponding RT distributions significantly violated the so-called race model inequality across the first five percentiles of their cumulative density functions (even when controlling for low-level audio-visual interactions). These results suggest that redundant structural and semantic information is not processed independently but is integrated during semantic categorization.
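Note: the race model inequality referenced above (Miller's bound) states that if audiovisual responses were produced by a race between independent unisensory processes, the audiovisual RT distribution could never exceed the sum of the unisensory distributions: F_AV(t) <= F_A(t) + F_V(t). A minimal check on empirical CDFs might look like this (hypothetical RT samples, not the study's data or analysis code):

```python
# Race-model inequality check at the fastest percentiles (illustrative;
# the RT samples are made up, not data from the cited abstract).
import numpy as np

def ecdf(rts: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Empirical CDF of reaction times evaluated at times t."""
    return np.searchsorted(np.sort(rts), t, side="right") / rts.size

rng = np.random.default_rng(0)
rt_a = rng.normal(550, 60, 200)   # auditory-only RTs in ms (hypothetical)
rt_v = rng.normal(530, 60, 200)   # visual-only RTs in ms (hypothetical)
rt_av = rng.normal(470, 50, 200)  # audiovisual RTs in ms (hypothetical)

t = np.percentile(rt_av, [5, 10, 15, 20, 25])  # fast percentiles of the AV CDF
violation = ecdf(rt_av, t) - (ecdf(rt_a, t) + ecdf(rt_v, t))
print(violation)  # positive entries violate the race-model bound
```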
poster 4341
M. J. Herrmann, A.-C. Ehlis, S. Werner, H. Ellgring, A. J. Fallgatter (2004). Early Stages (P100) of Face Perception in Humans as Measured with Event-Related Potentials (ERPs). Poster 135 at the 7th Tübingen Perception Conference (TWK 2004), Tübingen, Germany. URL: http://www.twk.tuebingen.mpg.de/twk04/index.php PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/TWK2004_[0].pdf
Abstract: According to the current ERP literature, face-specific activity is reflected by a negative component over the inferior occipito-temporal cortex between 140 and 180 ms after stimulus onset (N170). A recently published study [1] using magnetoencephalography (MEG) clearly indicated that a face-selective component can be observed at 100 ms (M100), about 70 ms earlier than reported in previous studies. Here we report such early differences, at 107 ms, between the ERPs to faces and to buildings over the occipito-temporal cortex using electroencephalography. To exclude the possibility that these effects were caused by low-level features of the pictures, such as contrast or luminance, we compared the P100 component for faces and totally scrambled faces in a second study. The finding of higher P100 amplitudes for intact compared with scrambled faces confirms that face processing starts as early as ~100 ms, with an initial stage that can be measured not only with MEG but also with ERPs. [1] Liu, J. et al. (2002). Nat Neurosci 5: 910-916.

thesis 6807
S. Werner (2010). The Neural Correlates and Mechanisms Mediating the Integration of Auditory and Visual Information in the Human Brain. PhD thesis, Eberhard-Karls-Universität, Tübingen, Germany. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/SW_ThesisSynopsis_[0].pdf
Abstract: To respond more quickly to events in natural environments, the human brain merges information from multiple senses into a more reliable percept. Multisensory integration processes have been demonstrated in a distributed neural system encompassing sensory-specific, higher association, and prefrontal cortices. Using fMRI and psychophysical methods, this dissertation investigates the functional similarities, differences, and constraints that govern the integration of auditory and visual information in different regions of the human cerebral cortex. Characterizing their temporal response codes, effective connectivity patterns, and underlying computations for combining multisensory inputs, this work provides evidence for the integration of specific types of information at three functionally specialized processing stages. At the first stage, multisensory interactions in sensory-specific regions indicate a common sensory source by integrating spatiotemporally aligned auditory and visual inputs to enhance stimulus detection. At the second stage, multisensory interactions in higher association regions integrate complex environmental features into higher-order representations, forming a unified percept and mediating multisensory benefits in object recognition. At the third stage, multisensory interactions in the prefrontal cortex mediate response selection processes based on perceptual information from the auditory and visual modalities, with multisensory facilitation of reaction times. This dissertation constitutes the first systematic attempt to dissociate the contributions of sensory-specific, higher association, and prefrontal areas to audiovisual integration in the human brain.
thesis 4344
S. Werner (2005). Allocentric Spatial Judgements by Re-Mapping Egocentric Coordinates: An fMRI Study. Diplom thesis, Eberhard-Karls-Universität Tübingen. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/ThesisSW_[0].pdf
Abstract: Spatial locations of objects can be represented in the brain with respect to different classes of reference frames, either relative to or independent of the subject's position. This fMRI study compared brain activation induced by a condition involving spatial judgements with respect to the body mid-sagittal plane (the egocentric task) with that induced by judgements about the spatial relationship between certain objects (the allocentric task). Comparing both conditions to an object discrimination task (the control condition) revealed a largely overlapping occipito-parietal network in the right hemisphere. Direct comparisons of the two spatial tasks revealed higher activations for the allocentric task in medial temporal lobe (MTL) structures of the right hemisphere and in frontal areas, including the anterior cingulate cortex (ACC) and medial parts of the prefrontal cortex (MPFC). No brain region exhibited significantly higher activation in the egocentric than in the allocentric task. The results are interpreted according to a transition approach, in which reflexively performed egocentric localisations are re-mapped into an allocentric code for permanent storage. We suggest an involvement of the posterior parietal cortex in managing egocentric spatial representations, with some parts being specialized to convey egocentric information to the brain areas responsible for the re-mapping. We propose that two densely interconnected structures (MPFC and MTL) could serve this function, initialized by conflict monitoring in the ACC.

conference NoppeneyASMLWOLC2012
U. Noppeney, R. Adam, S. Sadaghiani, J. X. Maier, H. L. Lee, S. Werner, D. Ostwald, R. Lewis, V. Conrad (2012). Different Classes of Audiovisual Correspondences Are Processed at Distinct Levels of the Cortical Hierarchy. Seeing and Perceiving 25: 69. Talk at the 13th International Multisensory Research Forum (IMRF 2012), Oxford, UK. DOI: 10.1163/187847612X646901. URL: http://booksandjournals.brillonline.com/content/10.1163/187847612x646901
Abstract: The brain should integrate sensory inputs only when they emanate from a common source and segregate those from different sources. Sensory correspondences are important cues informing the brain whether two sensory inputs are generated by a common event and should hence be integrated. Most prominently, sensory inputs should co-occur in time and space. More complex audiovisual stimuli may also be congruent in terms of semantics (e.g., objects and source sounds) or phonology (e.g., spoken and written words, linked via common linguistic labels). Surprisingly, metaphoric relations (e.g., pitch and height) have also been shown to influence audiovisual integration. The neural mechanisms that mediate these metaphoric congruency effects are only poorly understood. They may be mediated via (i) natural multisensory binding, (ii) common linguistic labels, or (iii) semantics. In this talk, we present a series of studies that investigate whether these different types of audiovisual correspondences are processed by distinct neural systems. Further, we investigate how those systems are employed by metaphoric audiovisual correspondences. Our results demonstrate that different classes of audiovisual correspondences influence multisensory integration at distinct levels of the cortical hierarchy. Spatiotemporal incongruency is detected already at the primary cortical level. Natural (e.g., motion direction) and phonological incongruency influence multisensory integration in areas involved in motion or phonological processing, respectively. Critically, metaphoric interactions emerge in neural systems shared with natural and semantic incongruency. This activation pattern may reflect the ambivalent nature of metaphoric audiovisual interactions, which rely on both natural and semantic correspondences.
conference 5951
U. Noppeney, S. Werner (2009). Inverse Effectiveness in BOLD Response and Its Behavioural Relevance in Object Categorization. Talk 395 at the 10th International Multisensory Research Forum (IMRF 2009), New York, NY, USA. URL: http://imrf.mcmaster.ca/IMRF/ocs/index.php/meetings/2009/paper/view/897
Abstract: Inverse effectiveness has been invoked as a principle to describe synergistic effects of multisensory integration in neuronal and behavioural responses as a function of stimulus properties (e.g., intensity) or efficacy. We characterized inverse effectiveness and its behavioural relevance at the macroscopic level, as provided by the fMRI BOLD response, based on (1) stimulus-induced and (2) intrinsic response variability across voxels or subjects during object categorization. Subjects categorized audiovisual object stimuli with the relative informativeness (i.e., degradation) of the auditory and visual inputs manipulated factorially. Controlling for low-level integration processes, higher-level audiovisual integration was observed selectively in the superior temporal sulci (STS) bilaterally. (1) Consistent with the law of inverse effectiveness, auditory and visual informativeness determined the operational modes of audiovisual integration in STS, similar to the influence of physical stimulus intensity in the superior colliculus: while multisensory interactions were primarily subadditive, and even suppressive, for intact stimuli, additive effects were observed for degraded, near-threshold stimuli. (2) Exploiting intrinsic variability across voxels and/or subjects, we demonstrate that superadditivity for audiovisual stimuli increases with decreasing unimodal responses. This inverse relationship could be explained by inherent statistical dependencies between superadditive and unimodal responses. Nevertheless, the superadditive responses in STS (and only in this region) were related to subjects' audiovisual behavioural benefit: only subjects that benefited from multisensory integration exhibited superadditive interactions, while those that did not benefit showed suppressive interactions. In conclusion, the (super)additive and subadditive integration modes in STS are functionally relevant and related to behavioural indices of multisensory integration, with superadditive interactions mediating successful audiovisual object categorization. We argue that inverse effectiveness trends in neuronal and behavioural responses may be intimately related and mutually predictive.
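Note: the inverse relationship described above, with superadditivity growing as unimodal responses shrink, can be shown with a toy example (invented response values, not results from the cited abstract):

```python
# Toy illustration of inverse effectiveness across voxels (values invented):
# superadditivity AV - (A + V) tends to grow as the summed unimodal response shrinks.
voxels = [  # (AV, A, V) responses, hypothetical
    (1.30, 0.9, 0.8),   # strong unimodal responses
    (0.80, 0.5, 0.4),
    (0.45, 0.2, 0.2),   # weak unimodal responses
]
for av, a, v in voxels:
    print(f"unimodal sum = {a + v:.2f}, superadditivity = {av - (a + v):+.2f}")
# Superadditivity rises from -0.40 toward +0.05 as the unimodal sum decreases.
```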
conference 5264
U. Noppeney, D. Ostwald, S. Werner, M. Kleiner (2008). The Prefrontal Cortex Accumulates Object Evidence through Differential Connectivity to the Visual and Auditory Cortices. Talk 189 at the 9th International Multisensory Research Forum (IMRF 2008), Hamburg, Germany, p. 118. URL: http://imrf.mcmaster.ca/IMRF/2008/pdf/FullProgramIMRF08.pdf
Abstract: To form categorical decisions about objects in our environment, the human brain accumulates noisy sensory information over time until a decisional threshold is reached. Combining fMRI and dynamic causal modelling (DCM), we investigated how the brain accumulates evidence from the auditory and visual senses through distinct interactions among brain regions. In a visual selective attention paradigm, subjects categorized visual action movies while ignoring their accompanying soundtracks, which were semantically congruent or incongruent. Both the auditory and the visual information could be intact or degraded. Reaction times, as a marker of the time to decisional threshold, accorded with random-walk models of decision making. At the neural level, incongruent auditory sounds induced amplification of the task-relevant visual information in the occipito-temporal cortex. Importantly, only the left inferior frontal sulcus (IFS) showed the activation pattern of an accumulator region, i.e., (i) positive reaction-time effects and (ii) incongruency effects that were increased for unreliable (degraded) visual and interfering reliable (intact) auditory information, which, based on our DCM analysis, were mediated by increased forward connectivity from visual regions. Thus, to form interpretations and decisions that guide behavioural responses, the IFS may accumulate multi-sensory evidence over time through dynamic weighting of its connectivity to auditory and visual regions.

conference 5079
S. Werner, U. Noppeney (2008). Audio-Visual Interactions in Perception and Response Selection. Talk at the 50. Tagung experimentell arbeitender Psychologen (TeaP 2008; 50th Conference of Experimental Psychologists), Marburg, Germany, p. 69. URL: https://www.teap.de/memory/Abstractband_50_2008_marburg.pdf
Abstract: Both physical and physiological transmission times can differ between audition and vision. Under certain conditions, the brain reduces perceived asynchrony by adapting to this temporal discrepancy. In two experiments, we investigated whether this recalibration is specific to auditory and visual stimuli or whether other modality combinations (audiotactile, visuotactile) are affected as well. We presented asynchronous audiovisual signals, with either the auditory or the visual signal leading. Then, using temporal order judgements, we measured observers' points of subjective simultaneity for three modality combinations. The results indicate an adjustment of perceived simultaneity for the audiovisual and the visuotactile modality pairs. We conclude that audiovisual adaptation results from a change in the processing latencies of visual events. In a second experiment, we corroborated this finding by demonstrating that reaction times to visual signals, but not to tactile or auditory signals, change as a result of audiovisual recalibration.
conference 4343
S. Werner, U. Noppeney (2006). Higher-Level Audio-Visual Integration in Human Superior Temporal Sulcus. Talk 3 at the 7th Conference of the Junior Neuroscientists of Tübingen (NeNa 2006), Oberjoch, Germany.
Abstract: Most objects and events can be detected by more than one sensory system. Thus, to form a coherent percept of the environment, the brain has to combine information from different senses. Multisensory integration of visual and auditory information offers numerous benefits for the accuracy and completeness of perception and seems to depend on neural substrates found at various levels of the cortical processing hierarchy. The objective of the current human fMRI study was to identify and characterize brain regions mediating the integration of invariant higher-order audio-visual features that specify complex natural objects. We presented subjects with tools and musical instruments as pictures and sounds while manipulating their relative informativeness with respect to object recognition and controlling for their low-level features. Subjects were engaged in a categorization or a target detection task. Across both tasks, we show higher-level audio-visual interactions in posterior, middle, and anterior portions of the superior temporal sulcus (STS), with these regions obeying the law of inverse effectiveness: integration effects were larger for less effective stimulus configurations. We demonstrate that these findings parallel indices of multisensory enhancement at the behavioural level. These results confirm the role of STS as a major convergence site for auditory and visual information and show that invariant higher-order audio-visual features are integrated within higher association areas of the brain.