This file was created by the Typo3 extension sevenpack version 0.7.14 --- Timezone: CEST Creation date: 2017-05-23 Creation time: 22-41-36 --- Number of references 45 article GianiBOKN2015 Detecting tones in complex auditory scenes NeuroImage 2015 11 122 203–213 In everyday life, our auditory system is bombarded with many signals in complex auditory scenes. Limited processing capacities allow only a fraction of these signals to enter perceptual awareness. This magnetoencephalography (MEG) study used informational masking to identify the neural mechanisms that enable auditory awareness. On each trial, participants indicated whether they detected a pair of sequentially presented tones (i.e., the target) that were embedded within a multi-tone background. We analysed MEG activity for ‘hits’ and ‘misses’, separately for the first and second tones within a target pair. Comparing physically identical stimuli that were detected or missed provided insights into the neural processes underlying auditory awareness. While the first tone within a target elicited a stronger early P50m on hit trials, only the second tone evoked a negativity at 150 ms, which may index segregation of the tone pair from the multi-tone background. Notably, a later sustained deflection peaking around 300 and 500 ms (P300m) was the only component that was significantly amplified for both tones when they were detected, pointing towards its key role in perceptual awareness. Additional Dynamic Causal Modelling analyses indicated that the negativity at 150 ms underlying auditory stream segregation is mediated predominantly via changes in intrinsic connectivity within auditory cortices. By contrast, the later P300m response as a signature of perceptual awareness relies on interactions between parietal and auditory cortices. In conclusion, our results suggest that successful detection and hence auditory awareness of a two-tone pair within complex auditory scenes rely on recurrent processing between auditory and higher-order parietal cortices. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Research Group Noppeney http://www.sciencedirect.com/science/article/pii/S1053811915006084 10.1016/j.neuroimage.2015.07.001 gianiASGiani PBelardinelli EOrtiz kleinermMKleiner unoppeUNoppeney article RedcayDMKPTGS2013 Atypical brain activation patterns during a face-to-face joint attention game in adults with autism spectrum disorder Human Brain Mapping 2013 10 34 10 2511–2523 Joint attention behaviors include initiating one's own and responding to another's bid for joint attention to an object, person, or topic. Joint attention abilities in autism are pervasively atypical, correlate with development of language and social abilities, and discriminate children with autism from other developmental disorders. Despite the importance of these behaviors, the neural correlates of joint attention in individuals with autism remain unclear. This paucity of data is likely due to the inherent challenge of acquiring data during a real-time social interaction. We used a novel experimental set-up in which participants engaged with an experimenter in an interactive face-to-face joint attention game during fMRI data acquisition. Both initiating and responding to joint attention behaviors were examined as well as a solo attention (SA) control condition.
Participants included adults with autism spectrum disorder (ASD) (n = 13), a mean age- and sex-matched neurotypical group (n = 14), and a separate group of neurotypical adults (n = 22). Significant differences were found between groups within social-cognitive brain regions, including dorsal medial prefrontal cortex (dMPFC) and right posterior superior temporal sulcus (pSTS), during the RJA as compared to SA conditions. Region-of-interest analyses revealed a lack of signal differentiation between joint attention and control conditions within left pSTS and dMPFC in individuals with ASD. Within the pSTS, this lack of differentiation was characterized by reduced activation during joint attention and relative hyper-activation during SA. These findings suggest a possible failure of developmental neural specialization within the STS and dMPFC to joint attention in ASD. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://onlinelibrary.wiley.com/doi/10.1002/hbm.22086/pdf 10.1002/hbm.22086 ERedcay DDodell-Feder PLMavros kleinermMKleiner MJPearrow CTriantafyllou JDGabrieli RSaxe article ConradKBHBN2013 Naturalistic Stimulus Structure Determines the Integration of Audiovisual Looming Signals in Binocular Rivalry PLoS ONE 2013 8 8 8 1-8 Rapid integration of biologically relevant information is crucial for the survival of an organism. Most prominently, humans should be biased to attend and respond to looming stimuli that signal approaching danger (e.g. predator) and hence require rapid action. This psychophysics study used binocular rivalry to investigate the perceptual advantage of looming (relative to receding) visual signals (i.e. looming bias) and how this bias can be influenced by concurrent auditory looming/receding stimuli and the statistical structure of the auditory and visual signals. Subjects were dichoptically presented with looming/receding visual stimuli that were paired with looming or receding sounds. The visual signals conformed to two different statistical structures: (1) a ‘simple’ random-dot kinematogram showing a starfield and (2) a “naturalistic” visual Shepard stimulus. Likewise, the looming/receding sound was (1) a simple amplitude- and frequency-modulated (AM-FM) tone or (2) a complex Shepard tone. Our results show that the perceptual looming bias (i.e. the increase in dominance times for looming versus receding percepts) is amplified by looming sounds, yet reduced and even converted into a receding bias by receding sounds. Moreover, the influence of looming/receding sounds on the visual looming bias depends on the statistical structure of both the visual and auditory signals. It is enhanced when audiovisual signals are Shepard stimuli. In conclusion, visual perception prioritizes processing of biologically significant looming stimuli especially when paired with looming auditory signals. Critically, these audiovisual interactions are amplified for statistically complex signals that are more naturalistic and known to engage neural processing at multiple levels of the cortical hierarchy. 
http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Research Group Noppeney http://www.plosone.org/article/fetchObject.action;jsessionid=F70CA4F95C70436E9FD8AAB37C131527?uri=info%3Adoi%2F10.1371%2Fjournal.pone.0070710&representation=PDF 10.1371/journal.pone.0070710 e70710 conradVConrad kleinermMKleiner abartelsABartels jhartcherJHartcher O'Brien hhbHHBülthoff unoppeUNoppeney article RedcayKS2012 Look at this: the neural correlates of initiating and responding to bids for joint attention Frontiers in Human Neuroscience 2012 6 6 169 1-14 When engaging in joint attention, one person directs another person's attention to an object (Initiating Joint Attention, IJA), and the second person's attention follows (Responding to Joint Attention, RJA). As such, joint attention must occur within the context of a social interaction. This ability is critical to language and social development; yet the neural bases for this pivotal skill remain understudied. This paucity of research is likely due to the challenge in acquiring functional MRI data during a naturalistic, contingent social interaction. To examine the neural bases of both IJA and RJA we implemented a dual-video set-up that allowed for a face-to-face interaction between subject and experimenter via video during fMRI data collection. In each trial, participants either followed the experimenter's gaze to a target (RJA) or cued the experimenter to look at the target (IJA). A control condition, solo attention (SA), was included in which the subject shifted gaze to a target while the experimenter closed her eyes. Block and event-related analyses were conducted and revealed common and distinct regions for IJA and RJA. Distinct regions included the ventromedial prefrontal cortex for RJA and intraparietal sulcus and middle frontal gyrus for IJA (as compared to SA). Conjunction analyses revealed overlap in the dorsal medial prefrontal cortex (dMPFC) and right posterior superior temporal sulcus (pSTS) for IJA and RJA (as compared to SA) for the event analyses. Functional connectivity analyses during a resting baseline suggest joint attention processes recruit distinct but interacting networks, including social-cognitive, voluntary attention orienting, and visual networks. This novel experimental set-up allowed for the identification of the neural bases of joint attention during a real-time interaction and findings suggest that whether one is the initiator or responder, the dMPFC and right pSTS, are selectively recruited during periods of joint attention. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.frontiersin.org/Human_Neuroscience/10.3389/fnhum.2012.00169/abstract 10.3389/fnhum.2012.00169 ERedcay kleinermMKleiner RSaxe article GianiOBKPN2012 Steady-state responses in MEG demonstrate information integration within but not across the auditory and visual senses NeuroImage 2012 4 60 2 1478–1489 To form a unified percept of our environment, the human brain integrates information within and across the senses. This MEG study investigated interactions within and between sensory modalities using a frequency analysis of steady-state responses that are elicited time-locked to periodically modulated stimuli. Critically, in the frequency domain, interactions between sensory signals are indexed by crossmodulation terms (i.e. the sums and differences of the fundamental frequencies). 
The 3x2 factorial design manipulated (1) modality: auditory, visual or audiovisual, and (2) steady-state modulation: the auditory and visual signals were modulated only in one sensory feature (e.g. visual gratings modulated in luminance at 6 Hz) or in two features (e.g. tones modulated in frequency at 40 Hz & amplitude at 0.2 Hz). This design enabled us to investigate crossmodulation frequencies that are elicited when two stimulus features are modulated concurrently (i) in one sensory modality or (ii) in auditory and visual modalities. In support of within-modality integration, we reliably identified crossmodulation frequencies when two stimulus features in one sensory modality were modulated at different frequencies. In contrast, no crossmodulation frequencies were identified when information needed to be combined from auditory and visual modalities. The absence of audiovisual crossmodulation frequencies suggests that the previously reported audiovisual interactions in primary sensory areas may mediate low level spatiotemporal coincidence detection that is prominent for stimulus transients but less relevant for sustained SSR responses. In conclusion, our results indicate that information in SSRs is integrated over multiple time scales within but not across sensory modalities at the primary cortical level. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Research Group Noppeney Department Bülthoff http://www.sciencedirect.com/science/article/pii/S1053811912001322 10.1016/j.neuroimage.2012.01.114 gianiASGiani EBOrtiz PBelardinelli kleinermMKleiner HPreissl unoppeUNoppeney article ChillerGlausSHKK2011 Recognition of emotion in moving and static composite faces Swiss Journal of Psychology 2011 12 70 4 233-240 This paper investigates whether the greater accuracy of emotion identification for dynamic versus static expressions, as noted in previous research, can be explained through heightened levels of either component or configural processing. Using a paradigm by Young, Hellawell, and Hay (1987), we tested recognition performance of aligned and misaligned composite faces with six basic emotions (happiness, fear, disgust, surprise, anger, sadness). Stimuli were created using 3D computer graphics and were shown as static peak expressions (static condition) and 7 s video sequences (dynamic condition). The results revealed that, overall, moving stimuli were better recognized than static faces, although no interaction between motion and other factors was found. For happiness, sadness, and surprise, misaligned composites were better recognized than aligned composites, suggesting that aligned composites fuse to form a single expression, while the two halves of misaligned composites are perceived as two separate emotions. For anger, disgust, and fear, this was not the case. These results indicate that emotions are perceived on the basis of both configural and component-based information, with specific activation patterns for separate emotions, and that motion has a quality of its own and does not increase configural or component-based recognition separately.
http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://psycnet.apa.org/journals/sjp/70/4/233.pdf 10.1024/1421-0185/a000061 SDChiller-Glaus aschwanASchwaninger FHofer kleinermMKleiner babsyBKnappmeyer article 6780 Audiovisual interactions in binocular rivalry Journal of Vision 2010 8 10 10:27 1-15 When the two eyes are presented with dissimilar images, human observers report alternating percepts—a phenomenon coined binocular rivalry. These perceptual fluctuations reflect competition between the two visual inputs both at monocular and binocular processing stages. Here we investigated the influence of auditory stimulation on the temporal dynamics of binocular rivalry. In three psychophysics experiments, we investigated whether sounds that provide directionally congruent, incongruent, or non-motion information modulate the dominance periods of rivaling visual motion percepts. Visual stimuli were dichoptically presented random-dot kinematograms (RDKs) at different levels of motion coherence. The results show that directional motion sounds rather than auditory input per se influenced the temporal dynamics of binocular rivalry. In all experiments, motion sounds prolonged the dominance periods of the directionally congruent visual motion percept. In contrast, motion sounds abbreviated the suppression periods of the directionally congruent visual motion percepts only when they competed with directionally incongruent percepts. Therefore, analogous to visual contextual effects, auditory motion interacted primarily with consciously perceived visual input rather than visual input suppressed from awareness. Our findings suggest that auditory modulation of perceptual dominance times might be established in a top-down fashion by means of feedback mechanisms. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Department Logothetis Research Group Noppeney http://www.journalofvision.org/content/10/10/27.full.pdf+html Biologische Kybernetik Max-Planck-Gesellschaft en 10.1167/10.10.27 conradVConrad abartelsABartels kleinermMKleiner unoppeUNoppeney article 6864 Live face-to-face interaction during fMRI: A new tool for social cognitive neuroscience NeuroImage 2010 5 50 4 1639-1647 Cooperative social interaction is critical for human social development and learning. Despite the importance of social interaction, previous neuroimaging studies lack two fundamental components of everyday face-to-face interactions: contingent responding and joint attention. In the current studies, functional MRI data were collected while participants interacted with a human experimenter face-to-face via live video feed as they engaged in simple cooperative games. In Experiment 1, participants engaged in a live interaction with the experimenter (“Live”) or watched a video of the same interaction (“Recorded”). During the “Live” interaction, as compared to the Recorded conditions, greater activation was seen in brain regions involved in social cognition and reward, including the right temporoparietal junction (rTPJ), anterior cingulate cortex (ACC), right superior temporal sulcus (rSTS), ventral striatum, and amygdala. Experiment 2 isolated joint attention, a critical component of social interaction. 
Participants either followed the gaze of the live experimenter to a shared target of attention (“Joint Attention”) or found the target of attention alone while the experimenter was visible but not sharing attention (“Solo Attention”). The right temporoparietal junction and right posterior STS were differentially recruited during Joint, as compared to Solo, attention. These findings suggest the rpSTS and rTPJ are key regions for both social interaction and joint attention. This method of allowing online, contingent social interactions in the scanner could open up new avenues of research in social cognitive neuroscience, both in typical and atypical populations. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6WNP-4Y70C6H-6-K&_cdi=6968&_user=29041&_pii=S1053811910000741&_origin=search&_coverDate=05%2F01%2F2010&_sk=999499995&view=c&wchp=dGLbVtz-zSkzk&md5=63fc5d75dc080b797ba36d6b05edc40f&ie=/sdarticle.pdf Biologische Kybernetik Max-Planck-Gesellschaft en 10.1016/j.neuroimage.2010.01.052 ERedcay DDodell-Feder MJPearrow PLMavros kleinermMKleiner JDEGabrieli RSaxe article 6863 RTbox: a device for highly accurate response time measurements Behavior Research Methods 2010 2 42 1 212-225 Although computer keyboards and mice are frequently used in measuring response times (RTs), the accuracy of these measurements is quite low. Specialized RT collection devices must be used to obtain more accurate measurements. However, all the existing devices have some shortcomings. We have developed and implemented a new, commercially available device, the RTbox, for highly accurate RT measurements. The RTbox has its own microprocessor and high-resolution clock. It can record the identities and timing of button events with high accuracy, unaffected by potential timing uncertainty or biases during data transmission and processing in the host computer. It stores button events until the host computer chooses to retrieve them. The asynchronous storage greatly simplifies the design of user programs. The RTbox can also receive and record external signals as triggers and can measure RTs with respect to external events. The internal clock of the RTbox can be synchronized with the computer clock, so the device can be used without external triggers. A simple USB connection is sufficient to integrate the RTbox with any standard computer and operating system. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://lobes.usc.edu/RTbox/ Biologische Kybernetik Max-Planck-Gesellschaft en 10.3758/BRM.42.1.212 XLi ZLiang kleinermMKleiner Z-LLu article 3540 Manipulating video sequences to determine the components of conversational facial expressions ACM Transactions on Applied Perception 2005 7 2 3 251-269 Communication plays a central role in everyday life. During an average conversation, information is exchanged in a variety of ways, including through facial motion. Here, we employ a custom, model-based image manipulation technique to selectively "freeze" portions of a face in video recordings in order to determine the areas that are sufficient for proper recognition of nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different areas.
The results also show that the combination of rigid head, eye, eyebrow, and mouth motions alone is sufficient to produce expressions that are as easy to recognize as the original, unmanipulated recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. This fusion of psychophysics and computer graphics techniques provides not only fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to move in order to produce realistic, recognizable conversational facial animations. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/cunningham-etal-ACM-tap-2005_3540[0].pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://doi.acm.org/10.1145/1077399.1077404 Biologische Kybernetik Max-Planck-Gesellschaft en doi.acm.org/10.1145/1077399.1077404 dwcDCunningham kleinermMKleiner walliCWallraven hhbHBülthoff inproceedings 5230 Probing Dynamic Human Facial Action Recognition From The Other Side Of The Mean 2008 8 59-66 Insights from human perception of moving faces have the potential to provide interesting insights for technical animation systems as well as into the neural encoding of facial expressions in the brain. We present a psychophysical experiment that explores high-level after-effects for dynamic facial expressions. We specifically address to what extent such after-effects represent adaptation in neural representation for static vs. dynamic features of faces. High-level after-effects have been reported for the recognition of static faces [Webster and Maclin 1999; Leopold et al. 2001], and also for the perception of point-light walkers [Jordan et al. 2006; Troje et al. 2006]. After-effects were reflected by shifts in category boundaries between different facial expressions and between male and female walks. We report on a new after-effect in humans observing dynamic facial expressions that have been generated by a highly controllable dynamic morphable face model. As a key element of our experiment, we created dynamic 'anti-expressions' in analogy to static 'anti-faces' [Leopold et al. 2001]. We tested the influence of dynamics and identity on expression-specific recognition performance after adaptation to 'anti-expressions'. In addition, by a quantitative analysis of the optic flow patterns corresponding to the adaptation and test expressions we rule out that the observed changes reflect a simple low-level motion after-effect. Since we found no evidence for a critical role of temporal order of the stimulus frames we conclude that after-effects in dynamic faces might be dominated by adaptation to the form information in individual stimulus frames. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/APGV08_CurioEtal_small_5230[0].pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://apgv.local/archive/apgv08/ Creem-Regehr, S. H., K. Myszkowski ACM Press
New York, NY, USA
Biologische Kybernetik Max-Planck-Gesellschaft Los Angeles, CA, USA 5th Symposium on Applied Perception in Graphics and Visualization (APGV 2008) en 978-1-59593-981-4 http://doi.acm.org/10.1145/1394281.1394293 curioCCurio gieseMAGiese mbreidtMBreidt kleinermMKleiner hhbHHBülthoff
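The dynamic 'anti-expressions' described in entry 5230 above (and in the related adaptation studies further down) are defined by inverting an expression's weights in a linear action-unit space. The following minimal Python/numpy sketch illustrates that idea under simplifying assumptions; the array names, dimensions and the linear blendshape model are illustrative stand-ins, not the authors' implementation.

import numpy as np

# Illustrative dimensions: V mesh vertices, K FACS-style action units, T frames.
rng = np.random.default_rng(0)
V, K, T = 500, 17, 120
neutral = rng.normal(size=(V, 3))                        # neutral face geometry (placeholder)
action_units = rng.normal(scale=0.01, size=(K, V, 3))    # per-unit displacement shapes
weights = np.clip(rng.normal(0.3, 0.2, size=(T, K)), 0.0, 1.0)  # expression time course

def animate(w):
    """Linear morph: neutral face plus a weighted sum of action-unit displacements."""
    return neutral + np.tensordot(w, action_units, axes=(1, 0))  # shape (T, V, 3)

expression = animate(weights)           # the recorded expression
anti_expression = animate(-weights)     # sign-inverted weights give the 'anti-expression'
test_stimulus = animate(0.3 * weights)  # reduced expression strength, as used in the test phase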
inproceedings 5405 Exploring Human Dynamic Facial Expression Recognition with Animation 2008 4 1-6 http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff
Karlsruhe, Germany
Biologische Kybernetik Max-Planck-Gesellschaft Karlsruhe, Germany International Conference on Cognitive Systems (CogSys 2008) en curioCCurio gieseMAGiese mbreidtMBreidt kleinermMKleiner hhbHHBülthoff
inproceedings 3992 Semantic 3D motion retargeting for facial animation 2006 7 77-84 We present a system for realistic facial animation that decomposes facial Motion Capture data into semantically meaningful motion channels based on the Facial Action Coding System. A captured performance is retargeted onto a morphable 3D face model based on a semantically corresponding set of 3D scans. The resulting facial animation reveals a high level of realism by combining the high spatial resolution of a 3D scanner with the high temporal accuracy of motion capture data that accounts for subtle facial movements with sparse measurements. Such an animation system allows us to systematically investigate human perception of moving faces. It offers control over many aspects of the appearance of a dynamic face, while utilizing as much measured data as possible to avoid artistic biases. Using our animation system, we report results of an experiment that investigates the perceived naturalness of facial motion in a preference task. For expressions with small amounts of head motion, we find a benefit for our part-based generative animation system that is capable of local animation over an example-based approach that deforms the whole face at once. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/apgv06-77_3992[0].pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.apgv.org/archive/apgv06/ Fleming, R.W. , S. Kim ACM Press
New York, NY, USA
Biologische Kybernetik Max-Planck-Gesellschaft Boston, MA, USA 3rd Symposium on Applied Perception in Graphics and Visualization (APGV 2006) en 1-59593-429-4 10.1145/1140491.1140508 curioCCurio mbreidtMBreidt kleinermMKleiner qvuongQCVuong gieseMAGiese hhbHHBülthoff
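Entry 3992 above describes decomposing facial motion-capture data into semantically meaningful action-unit channels before retargeting them onto a morphable 3D model. One common way to realize such a decomposition is a per-frame least-squares fit of action-unit displacement patterns to the captured marker displacements; the sketch below assumes that formulation (the abstract does not specify the solver, and all names and sizes are hypothetical).

import numpy as np

rng = np.random.default_rng(1)
M, K = 60, 17                              # motion-capture markers, FACS-style action units
# Columns of A: marker displacement pattern produced by fully activating one action unit.
A = rng.normal(size=(3 * M, K))
true_w = np.zeros(K)
true_w[[4, 16]] = [0.6, 0.8]
frame = A @ true_w + rng.normal(scale=0.01, size=3 * M)  # one captured frame (synthetic)

# Per-frame decomposition: solve for the action-unit weights in the least-squares sense.
w, *_ = np.linalg.lstsq(A, frame, rcond=None)

# Retargeting then amounts to driving the corresponding action-unit shapes of the
# target face model with the recovered weight time course w(t).
print(np.round(w[[4, 16]], 2))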
inproceedings 3058 Multi-viewpoint video capture for facial perception research 2004 12 55-60 In order to produce realistic-looking avatars, computer graphics has traditionally relied solely on physical realism. Research on cognitive aspects of face perception, however, can provide insights into how to produce believable and recognizable faces. In this paper, we describe a method for automatically manipulating video recordings of faces. The technique involves the use of a custom-built multi-viewpoint video capture system in combination with head motion tracking and a detailed 3D head shape model. We illustrate how the technique can be employed in studies on dynamic facial expression perception by summarizing the results of two psychophysical studies which provide suggestions for creating recognizable facial expressions. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf3058.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Thalmann, N. , M. D. Thalmann Biologische Kybernetik Max-Planck-Gesellschaft Zermatt, Switzerland Workshop on Modelling and Motion Capture Techniques for Virtual Environments (CAPTECH 2004) en kleinermMKleiner walliCWallraven mbreidtMBreidt dwcDWCunningham hhbHHBülthoff inproceedings 2865 The components of conversational facial expressions 2004 8 143-149 Conversing with others is one of the most central of human behaviours. In any conversation, humans use facial motion to help modify what is said, to control the flow of a dialog, or to convey complex intentions without saying a word. Here, we employ a custom, image-based, stereo motion-tracking algorithm to track and selectively "freeze" portions of an actor or actress's face in video recordings in order to determine the necessary and sufficient facial motions for nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different facial areas. The results also show that the combination of rigid head, eye, eyebrow, and mouth motion is sufficient to produce versions of these expressions that are as easy to recognize as the original recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. The use of advanced computer graphics techniques provided a means to systematically examine real facial expressions. This provides not only fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to be animated in order to produce realistic, recognizable facial expressions. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2865.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://portal.acm.org/citation.cfm?id=1012578 Interrante, V. , A. McNamara, H.H. Bülthoff, H.E. Rushmeier ACM Press
New York, NY, USA
Biologische Kybernetik Max-Planck-Gesellschaft Los Angeles, CA, USA 1st Symposium on Applied Perception in Graphics and Visualization (APGV 2004) en 1-58113-914-4 10.1145/1012551.1012578 dwcDWCunningham kleinermMKleiner walliCWallraven hhbHHBülthoff
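Entries 3540 and 2865 above selectively "freeze" facial regions in video to determine which motions carry an expression. The published manipulation is model-based and operates on tracked 3D geometry; the sketch below only illustrates the underlying idea at the pixel level, using an assumed boolean region mask, and is not the authors' pipeline.

import numpy as np

rng = np.random.default_rng(2)
T, H, W = 90, 64, 64
video = rng.integers(0, 256, size=(T, H, W), dtype=np.uint8)  # grayscale clip (synthetic)

# Hypothetical mask marking the region to freeze (e.g. the mouth area).
freeze_mask = np.zeros((H, W), dtype=bool)
freeze_mask[40:60, 16:48] = True

# "Freeze" that region: replace it in every frame with its appearance in frame 0,
# while the rest of the face keeps moving.
frozen = video.copy()
frozen[:, freeze_mask] = video[0, freeze_mask]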
inproceedings 2808 Using facial texture manipulation to study facial motion perception 2004 8 180 Manipulated still images of faces have often been used as stimuli for psychophysical research on human perception of faces and facial expressions. In everyday life, however, humans are usually confronted with moving faces. We describe an automated way of performing manipulations on facial video recordings and how it can be applied to investigate human dynamic face perception. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2808.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.kyb.tuebingen.mpg.de/bu/people/kleinerm/apgv04/ Interrante, V. , A. McNamara, H.H. Bülthoff, H.E. Rushmeier ACM Press
New York, NY, USA
Biologische Kybernetik Max-Planck-Gesellschaft Los Angeles, CA, USA 1st Symposium on Applied Perception in Graphics and Visualization (APGV 2004) en 1-58113-914-4 10.1145/1012551.1012602 kleinermMKleiner aschwanASchwaninger dwcDWCunningham babsyBKnappmeyer
inproceedings 2096 The inaccuracy and insincerity of real faces 2003 9 7-12 Since conversation is a central human activity, the synthesis of proper conversational behavior for Virtual Humans will become a critical issue. Facial expressions represent a critical part of interpersonal communication. Even with the most sophisticated, photo-realistic head model, an avatar whose behavior is unbelievable or even uninterpretable will be an inefficient or possibly counterproductive conversational partner. Synthesizing expressions can be greatly aided by a detailed description of which facial motions are perceptually necessary and sufficient. Here, we recorded eight core expressions from six trained individuals using a method-acting approach. We then psychophysically determined how recognizable and believable those expressions were. The results show that people can identify these expressions quite well, although there is some systematic confusion between particular expressions. The results also show that people found the expressions to be less than convincing. The pattern of confusions and believability ratings demonstrates that there is considerable variation in natural expressions and that even real facial expressions are not always understood or believed. Moreover, the results provide the groundwork necessary to begin a more fine-grained analysis of the core components of these expressions. As some initial results from a model-based manipulation of the image sequences show, a detailed description of facial expressions can be an invaluable aid in the synthesis of unambiguous and believable Virtual Humans. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2096.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Hamza, H.M. Acta Press
Anaheim, CA, USA
Biologische Kybernetik Max-Planck-Gesellschaft Benalmádena, Spain 3rd IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP 2003) 0-88986-382-2 dwcDWCunningham mbreidtMBreidt kleinermMKleiner walliCWallraven hhbHHBülthoff
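The "pattern of confusions" reported in entry 2096 above (and in the following entry) is a stimulus-by-response confusion matrix computed from the identification data. A minimal sketch of that tabulation follows; the expression labels and trial list are made-up stand-ins for the recorded responses.

import numpy as np

expressions = ["agreement", "disagreement", "happiness", "sadness", "thinking", "confusion"]
idx = {name: i for i, name in enumerate(expressions)}

# Hypothetical (shown expression, observer response) pairs from an identification task.
trials = [("happiness", "happiness"), ("sadness", "sadness"), ("thinking", "confusion"),
          ("confusion", "thinking"), ("agreement", "agreement"), ("disagreement", "disagreement")]

confusion = np.zeros((len(expressions), len(expressions)), dtype=int)
for shown, responded in trials:
    confusion[idx[shown], idx[responded]] += 1

# Row-normalised rates: the diagonal holds recognition accuracy, off-diagonal cells the confusions.
rates = confusion / confusion.sum(axis=1, keepdims=True)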
inproceedings 2022 How believable are real faces? Towards a perceptual basis for conversational animation 2003 5 23-29 Regardless of whether the humans involved are virtual or real, well-developed conversational skills are a necessity. The synthesis of interface agents that are not only understandable but also believable can be greatly aided by knowledge of which facial motions are perceptually necessary and sufficient for clear and believable conversational facial expressions. Here, we recorded several core conversational expressions (agreement, disagreement, happiness, sadness, thinking, and confusion) from several individuals, and then psychophysically determined the perceptual ambiguity and believability of the expressions. The results show that people can identify these expressions quite well, although there are some systematic patterns of confusion. People were also very confident of their identifications and found the expressions to be rather believable. The specific pattern of confusions and confidence ratings have strong implications for conversational animation. Finally, the present results provide the information necessary to begin a more fine-grained analysis of the core components of these expressions. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2022.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1199300 IEEE
Los Alamitos, CA, USA
Biologische Kybernetik Max-Planck-Gesellschaft New Brunswick, NJ, USA 16th International Conference on Computer Animation and Social Agents (CASA 2003) 0-7695-1934-2 10.1109/CASA.2003.1199300 dwcDWCunningham mbreidtMBreidt kleinermMKleiner walliCWallraven hhbHHBülthoff
inbook 5960 Recognition of Dynamic Facial Action Probed by Visual Adaptation 2010 12 47-65 This chapter presents a psychophysical experiment in which 3D computer graphic methods were used to generate close-to-reality facial expressions to examine aspects of recognizing dynamic facial expressions in humans. The study shows that high-level aftereffects similar to those shown earlier for static faces are produced by dynamic faces. The findings indicate that the aftereffects, which are consistent for adaptation with dynamic anti-expressions, are highly expression-specific. The chapter also highlights how computer graphics-generated expressions can be used in order to rule out low-level motion aftereffects. Dynamic face stimuli were created by using a three-dimensional face model that is based on the Facial Action Coding System (FACS). http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262014533.001.0001/upso-9780262014533-chapter-5 Curio, C. , H. H. Bülthoff, M. A. Giese MIT Press
Cambridge, MA, USA
Dynamic Faces: Insights from Experiments and Computation Biologische Kybernetik Max-Planck-Gesellschaft en 978-0-262-01453-3 10.7551/mitpress/9780262014533.001.0001 curioCCurio gieseMAGiese mbreidtMBreidt kleinermMKleiner hhbHHBülthoff
techreport 2774 The MPI VideoLab: A system for high quality synchronous recording of video and audio from multiple viewpoints 2004 5 123 The MPI VideoLab is a custom built, flexible digital video and audio recording studio that enables high quality, time synchronized recordings of human actions from multiple viewpoints. This technical report describes the requirements for the system in the context of our applications, its hardware and software equipment and the special features of the recording setup. Important aspects of the hardware and software implementation are discussed in detail. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2774.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Biologische Kybernetik Max-Planck-Gesellschaft Max Planck Institute for Biological Cybernetics, Tübingen, Germany kleinermMKleiner walliCWallraven hhbHHBülthoff poster SchindlerKB2011 Decoding egocentric space in human posterior parietal cortex using fMRI 2011 11 41 800.21 In our subjective experience, there is a tight link between covert visual attention and ego-centric spatial attention. One key difference is that the latter can extend beyond the visual field, providing us with an accurate mental representation of an object’s location relative to our body position. A neural link between visual and ego-centric spatial attention is suggested by lesions in parietal cortex that lead not only to deficits in covert visual attention, but frequently also to a disorder of ego-centric spatial awareness, known as hemi-spatial neglect. While parietal involvement in covert visual spatial attention has been much studied, relatively little is known about mental representations of the unseen space around us. In the present study we examined whether unseen spatial locations beyond the visual field are also represented in parietal activity, and how they are related to retinotopic representations. We employed a novel virtual reality (VR) paradigm during functional magnetic resonance imaging (fMRI), whereby observers were prompted to draw their spatial attention to the position of one of eight possible objects located around them in an octagonal room. By changing the observers’ facing direction every few trials, the egocentric location of objects was disentangled from their absolute position and from the objects’ identity. Thus, mental representations of egocentric space surrounding the observer were sampled eight-fold. Decoding results of a multivariate pattern analysis classifier (MVPA), but not univariate results, showed that egocentric spatial directions were specifically represented in parietal cortex. These representations overlapped only partly with visually driven retinotopic activity. Our results thus show that parietal cortex codes not only for retinotopic and visually accessible space, but also for egocentric locations of the three-dimensional space surrounding us, including unseen space. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Logothetis Department Bülthoff http://www.sfn.org/AM2011/ Washington, DC, USA 41st Annual Meeting of the Society for Neuroscience (Neuroscience 2011) aschindlerASchindler kleinermMKleiner abartelsABartels poster SchindlerKB2011_2 Decoding Egocentric Space in human Posterior Parietal Cortex using fMRI 2011 10 12 40 In our subjective experience, there is a tight link between covert visual attention and egocentric spatial attention.
One key difference is that the latter can extend beyond the visual field, providing us with an accurate mental representation of an object’s location relative to our body position. A neural link between visual and ego-centric spatial attention is suggested by lesions in parietal cortex that lead not only to deficits in covert visual attention, but frequently also to a disorder of ego-centric spatial awareness, known as hemi-spatial neglect. While parietal involvement in covert visual spatial attention has been much studied, relatively little is known about mental representations of the unseen space around us. In the present study we examined whether unseen spatial locations beyond the visual field are also represented in parietal activity, and how they are related to retinotopic representations. We employed a novel virtual reality (VR) paradigm during functional magnetic resonance imaging (fMRI), whereby observers were prompted to draw their spatial attention to the position of one of eight possible objects located around them in an octagonal room. By changing the observers’ facing direction every few trials, the ego-centric location of objects was disentangled from their absolute position and from the objects’ identity. Thus, mental representations of egocentric space surrounding the observer were sampled eight-fold. Decoding results of a multivariate pattern analysis classifier (MVPA), but not univariate results, showed that egocentric spatial directions were specifically represented in parietal cortex. These representations overlapped only partly with visually driven retinotopic activity. Our results thus show that parietal cortex codes not only for retinotopic and visually accessible space, but also for ego-centric locations of the three-dimensional space surrounding us, including unseen space. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Logothetis Department Bülthoff Heiligkreuztal, Germany 12th Conference of Junior Neuroscientists of Tübingen (NeNA 2011) aschindlerASchindler kleinermMKleiner abartelsABartels poster GianiEBKPN2011 Steady-state responses in MEG demonstrate information integration within but not across the auditory and visual senses 2011 10 12 27 To form a unified percept of our environment, the human brain integrates information within and across the senses. This MEG study investigated interactions within and between sensory modalities using a frequency analysis of steady-state responses (SSR) to periodic auditory and/or visual inputs. The 3x3 factorial design manipulated (1) modality (auditory only, visual only and audiovisual) and (2) temporal dynamics (static, dynamic1 and dynamic2). In the static conditions, subjects were presented with (1) visual gratings, luminance modulated at 6Hz and/or (2) pure tones, frequency modulated at 40 Hz. To manipulate perceptual synchrony, we imposed additional slow modulations on the auditory and visual stimuli either at the same (0.2 Hz = synchronous) or different frequencies (0.2 Hz vs. 0.7 Hz = asynchronous). This also enabled us to investigate the integration of two dynamic features within one sensory modality (e.g. a pure tone frequency modulated at 40Hz & amplitude modulated at 0.2Hz) in the dynamic conditions. We reliably identified crossmodulation frequencies when these two stimulus features were modulated at different frequencies. In contrast, no crossmodulation frequencies were identified when information needed to be combined from auditory and visual modalities.
The absence of audiovisual crossmodulation frequencies suggests that the previously reported audiovisual interactions in primary sensory areas may mediate low level spatiotemporal coincidence detection that is prominent for stimulus transients but less relevant for sustained SSR responses. In conclusion, our results indicate that information in SSRs is integrated over multiple time scales within but not across sensory modalities at the primary cortical level. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Research Group Noppeney Heiligkreuztal, Germany 12th Conference of Junior Neuroscientists of Tübingen (NeNA 2011) gianiAGiani OErick PBelardinelli kleinermMKleiner HPreissl unoppeUNoppeney poster DobsKBSC2011 Investigating idiosyncratic facial dynamics with motion retargeting Perception 2011 9 40 ECVP Abstract Supplement 115 3D facial animation systems allow the creation of well-controlled stimuli to study face processing. Despite this high level of control, such stimuli often lack naturalness due to artificial facial dynamics (eg linear morphing). The present study investigates the extent to which human visual perception can be fooled by artificial facial motion. We used a system that decomposes facial motion capture data into time courses of basic action shapes (Curio et al, 2006 APGV 1 77–84). Motion capture data from four short facial expressions were input to the system. The resulting time courses and five approximations were retargeted onto a 3D avatar head using basic action shapes created manually in Poser. Sensitivity to the subtle modifications was measured in a matching task using video sequences of the actor performing the corresponding expressions as target. Participants were able to identify the unmodified retargeted facial motion above chance level under all conditions. Furthermore, matching performance for the different approximations varied with expression. Our findings highlight the sensitivity of human perception for subtle facial dynamics. Moreover, the action shape-based system will allow us to further investigate the perception of idiosyncratic facial motion using well-controlled facial animation stimuli. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/40/1_suppl.toc Toulouse, France 34th European Conference on Visual Perception 10.1177/03010066110400S102 kdobsKDobs kleinermMKleiner isaIBülthoff johannesJSchultz curioCCurio poster GianiOBKPN2011 Using steady state responses in MEG to study information integration within and across the senses 2011 6 1028 How does the brain integrate information within and across sensory modalities to form a unified percept? This question has previously been addressed using transient stimuli, analyzed in the time domain. Alternatively, sensory interactions can be investigated using frequency analyses of steady state responses (SSRs). SSRs are elicited by periodic sensory stimulation (such as frequency modulated tones). In the frequency domain, 'true' signal integration is reflected by non-linear crossmodulation terms (i.e. the sums and differences of the individual SSR frequencies). In addition, two signals may modulate the amplitude of the fundamental and harmonic frequencies of one another. 
Using visual (V) and auditory (A) SSRs, we investigated whether A and V signals are truly integrated as indexed by crossmodulation terms or simply modulate the expression of each other's dominant frequencies. To manipulate perceptual synchrony, we imposed additional slow modulations on the auditory and visual SSRs either at same or different frequencies. This also enabled us to investigate the integration of two dynamic features within one sensory modality. http://www.kyb.tuebingen.mpg.defileadmin/user_upload/files/publications/2011/HBM-2011-Giani.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Research Group Noppeney http://www.humanbrainmapping.org/i4a/pages/index.cfm?pageID=3419 Québec City, Canada 17th Annual Meeting of the Organization for Human Brain Mapping (HBM 2011) gianiASGiani EBOrtiz PBelardinelli kleinermMKleiner HPreissl unoppeUNoppeney poster 7089 Visual stimulus timing precision in Psychtoolbox-3: Tests, pitfalls & solutions 2010 10 11 12 32 Visual stimulation paradigms in perception research often require accurate timing for presentation of visual stimuli. Acquisition of exact stimulus update timestamps in realtime is often crucial, both for synchronization of stimulus updates between different presentation modalities and for logging. Modern graphics hardware, multi-core processors and operating systems provide a far higher level of functionality, flexibility, and performance in terms of throughput, than systems a decade ago. They also pose new challenges for precise presentation timing or timestamping. Typical causes of interference are, eg the dynamic power management of modern graphics cards and computers, novel hybrid graphics solutions, user interface desktop composition and the properties of graphics- and CPU-scheduling of the latest generation of operating systems. This work presents results for the accuracy and robustness of visual presentation timing and timestamping tests, conducted within Psychtoolbox-3 (Kleiner et al, 2007 Perception 36 ECVP Supplement, 14) on different operating systems and graphics cards under realistic stimulus presentation loads. It explains some of the common pitfalls one can encounter when trying to achieve exact timing and some methods to avoid timing problems or reduce their severeness. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Biologische Kybernetik Max-Planck-Gesellschaft Heiligkreuztal, Germany 11th Conference of Junior Neuroscientists of Tübingen (NeNa 2010) en kleinermMKleiner poster 6862 Visual stimulus timing precision in Psychtoolbox-3: Tests, pitfalls and solutions Perception 2010 8 39 ECVP Abstract Supplement 189 Visual stimulation paradigms in perception research often require accurate timing for presentation of visual stimuli. Acquisition of exact stimulus update timestamps in realtime is often crucial, both for synchronization of stimulus updates between different presentation modalities and for logging. Modern graphics hardware, multi-core processors and operating systems provide a far higher level of functionality, flexibility, and performance in terms of throughput, than systems a decade ago. They also pose new challenges for precise presentation timing or timestamping. 
Typical causes of interference are, eg the dynamic power management of modern graphics cards and computers, novel hybrid graphics solutions, user interface desktop composition and the properties of graphics- and CPU-scheduling of the latest generation of operating systems. This work presents results for the accuracy and robustness of visual presentation timing and timestamping tests, conducted within Psychtoolbox-3 (Kleiner et al, 2007 Perception 36 ECVP Supplement, 14) on different operating systems and graphics cards under realistic stimulus presentation loads. It explains some of the common pitfalls one can encounter when trying to achieve exact timing and some methods to avoid timing problems or reduce their severeness. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/39/1_suppl.toc Biologische Kybernetik Max-Planck-Gesellschaft Lausanne, Switzerland 33rd European Conference on Visual Perception en 10.1177/03010066100390S101 kleinermMKleiner poster 6570 Audio-visual interactions in binocular rivalry using the Shepard illusion in the auditory and visual domain 2010 6 11 229 When both eyes are presented with dissimilar images, human observers report alternating percepts - a phenomenon known as binocular rivalry. Subjects were presented dichoptically with (1) a looming/receding starfield or (2) a looming/receding Shepard Zoom (Berger, Siggraph 2003), the visual equivalent of the Shepard tone illusion. In four psychophysical experiments, we investigated the influence of (1) a real complex tone rising/falling in pitch and (2) rising/falling Shepard tones on the dominance and suppression times of the rivaling visual motion percepts (relative to non-motion sounds or no sounds). First, we observed longer dominance times of looming than receding visual percepts even in the absence of sound. Second, auditory looming signals enhanced this looming bias by lengthening the dominance periods of their congruent visual looming percept. Third, receding auditory motion signals reduced the perceptual looming bias, though this effect was less pronounced and not consistently observed. Collectively, the results show that the perceptual predominance of looming relative to receding visual motion is amplified by congruent looming/receding auditory signals during binocular rivalry. Auditory looming/receding signals may influence the dominance times of their congruent and incongruent visual percepts via genuine multisensory and higher order attentional mechanisms at multiple levels of the cortical hierarchy. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://imrf.mcmaster.ca/IMRF/ocs2/index.php/imrf/2010/paper/view/229 Biologische Kybernetik Max-Planck-Gesellschaft Liverpool, UK 11th International Multisensory Research Forum (IMRF 2010) en conradVConrad kleinermMKleiner jhartcherJHartcher-O'Brien abartelsABartels hhbHHBülthoff unoppeUNoppeney poster RedcayDPKMWGS2010 Do you see what I see? The neural bases of joint attention during a live interactive game Journal of Cognitive Neuroscience 2010 4 18 22 Supplement 71 Joint attention refers to the ability to coordinate one’s own attention with another on a third entity (e.g. object or common goal).
This uniquely human ability emerges late in the first year of life and is critical to social-cognitive and language development; yet the neural bases for this pivotal skill remain largely understudied. Joint attention includes both Responding to Joint Attention (RJA), or following another’s bid for shared attention on an object, and Initiating Joint Attention (IJA), or initiating a bid for shared attention on an object. To identify the neural bases of both IJA and RJA we implemented a dual-video set-up in which both subject and experimenter could monitor each other via video feed in real-time during fMRI data collection. In each trial, participants either followed the experimenter’s gaze to a target (RJA) or cued the experimenter to look at the target (IJA). A control condition, non-joint attention (NJA), was included in which the subject shifted gaze to a target while the experimenter closed her eyes. Greater activation was seen in the dorsal medial prefrontal cortex (dMPFC) and bilateral posterior superior temporal sulcus (pSTS) during joint attention (IJA + RJA) as compared to NJA. RJA elicited greater activation in posterior superior temporal sulcus (pSTS) than NJA while IJA recruited greater activation in dMPFC than NJA. This novel experimental set-up allowed for the first time identification of the neural bases of both initiating and responding to joint attention. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://cogneurosociety.org/annual-meeting/previous-meetings/CNS2010_Program.pdf/view Montréal, Canada 17th Annual Meeting of the Cognitive Neuroscience Society (CNS 2010) ERedcay DDodell-Feder MJPearrow kleinermMKleiner PLMavros JWang JDEGabrieli RSaxe poster 6241 The Virtual Face Mirror Project: Revealing Dynamic Self-Perception in Humans 2010 1 4 137 http://www.kyb.tuebingen.mpg.defileadmin/user_upload/files/publications/CogSys-2010-0137.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://cogsys2010.ethz.ch/proceedings.html Biologische Kybernetik Max-Planck-Gesellschaft Zürich, Switzerland 4th International Conference on Cognitive Systems (CogSys 2010) en curioCCurio kleinermMKleiner mbreidtMBreidt hhbHHBülthoff poster 5949 Auditory influences on the temporal dynamics of binocular rivalry 2009 7 10 590 102-103 Introduction When the two eyes are presented with dissimilar images, human observers report alternating percepts – a phenomenon coined binocular rivalry. These perceptual fluctuations reflect competition between the two visual inputs both at lower, monocular and at binocular, higher-level processing stages. Even though perceptual transitions occur stochastically over time, their temporal dynamics can be modulated by changes in stimulus strength, context and attention. While increases in stimulus strength (such as contrast) primarily abbreviate suppression phases of a percept, attentional and contextual factors predominantly lengthen its dominance periods. Goals This project investigates the influence of concurrent auditory stimulation on the temporal dynamics of binocular rivalry. In two psychophysics studies, we investigated whether sounds that provide directionally congruent, incongruent or no motion information modulate the dominance periods of rivaling visual motion percepts. 
Methods In the first psychophysics study, observers dichoptically viewed random-dot kinematograms (RDK) at 0% motion coherence in one eye and 50% in the other in a stereoscope, while being concurrently presented with directionally congruent auditory motion, noise and no sound. In the second psychophysics study, they viewed two RDKs of opposite motion directions at 100% coherence, with the auditory motion stimulus being directionally congruent with one of the two rivaling motion percepts. In both experiments, congruent auditory motion was temporally synchronized with visual motion to facilitate audio-visual integration into a coherent percept. Initial results Both experiments consistently revealed a statistically significant influence of sound on perceptual dominance times. In the first experiment, directionally congruent auditory motion but not noise increased the duration of the dominance phases of the RDK at 50% motion coherence. In the second experiment, auditory motion lengthened the dominance periods of the directionally congruent 100% RDK and abbreviated those of the directionally incongruent 100% RDK. Initial conclusions The results demonstrate that auditory stimuli influence the temporal dynamics of binocular rivalry. Auditory motion lengthened the dominance periods of a visual motion percept when it was directionally congruent, but shortened them when it was directionally incongruent. Thus, the (in)congruency of auditory motion primarily influences the duration of the dominance periods similar to purely visual contextual effects, even though a small effect was also observed on the suppression periods. In conclusion, the human brain draws on information from multiple senses to arbitrate between multiple rivaling perceptual interpretations. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Research Group Noppeney http://imrf.mcmaster.ca/IMRF/ocs/index.php/meetings/2009/paper/view/590 Biologische Kybernetik Max-Planck-Gesellschaft New York, NY, USA 10th International Multisensory Research Forum (IMRF 2009) en conradVConrad abartelsABartels kleinermMKleiner unoppeUNoppeney poster 5265 The prefrontal cortex accumulates object evidence through differential connectivity to the visual and auditory cortices NeuroImage 2008 6 41 Supplement 1 S150 http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Research Group Noppeney http://www.sciencedirect.com/science/article/pii/S1053811908003133 Biologische Kybernetik Max-Planck-Gesellschaft Melbourne, Australia 14th Annual Meeting of the Organization for Human Brain Mapping (HBM 2008) en 10.1016/j.neuroimage.2008.04.008 unoppeUNoppeney ostwalddDOstwald kleinermMKleiner sebastianwernerSWerner poster 4370 Aftereffects in the recognition of dynamic facial expressions Perception 2007 8 36 ECVP Abstract Supplement 146 High-level aftereffects have previously been reported for the recognition of static faces. We present an experiment showing for the first time high-level aftereffects for dynamic facial expressions. Facial expressions were generated as a morph animation based on a weighted sum of 3-D shapes derived from scans of facial action units [Curio et al 2006, in Proceedings of the 3rd Symposium on Applied Perception in Graphics and Visualization (New York: ACM Press) pp 77 - 84]. With this technique we produced dynamic happy and disgust expressions. By changing the sign of the morph weights we were able to obtain ‘anti-expressions’.
Participants observed dynamic anti-expressions for 8 s. Immediately after each adaptation phase, recognition performance was tested for the original expressions (2AFC, reduced expression strength). Adaptation stimuli were chosen from two identities and were shown either in forward or reverse time order. We found strong expression-related aftereffects (increased recognition for matching expression stimuli, p < 0.05, N=13), which depended also on the match between the identities of adaptation and test face. We are currently investigating the influence of static vs dynamic representations in the observed aftereffect. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/36/1_suppl.toc Biologische Kybernetik Max-Planck-Gesellschaft Arezzo, Italy 30th European Conference on Visual Perception en 10.1177/03010066070360S101 curioCCurio gieseMAGiese mbreidtMBreidt kleinermMKleiner hhbHHBülthoff poster 4884 Perception of Dynamic Facial Expressions Probed by a New High-Level After-Effect 2007 7 10 111 High-level after-effects have been reported for the recognition of static faces [1,2]. It has been shown that the presentation of static ‘anti-faces’ biases the perception of neutral test faces temporarily towards the perception of specific identities. Recent studies have demonstrated high-level after-effects also for point-light walkers, resulting in shifts of perceived gender [3,4]. We present an experiment showing for the first time high-level after-effects in the recognition of dynamic facial expressions. Facial expressions were generated as a morph animation based on a weighted sum of 3D shapes derived from scans of facial action units [5]. With this technique we were able to define a metric space of dynamic expressions by morphing, similar to face spaces for static stimuli. Morphing between prototypical expressions (happy and disgust) and a neutral face without intrinsic facial motion we generated ‘anti-expressions’ by choosing negative weights for the prototypes. In addition, for testing we generated expressions with reduced recognizability choosing small positive weights of the prototypes. The morphing space was equilibrated for recognizability by measuring the psychometric functions that map the morphing weights on the recognition rates of the two expressions (happy and disgust) in a 2 AFC task. Only the non-rigid intrinsic face motion was morphed. In addition, a meaningless 3D head motion was added in order to minimize the influence of low-level adaptation effects. Subjects were adapted for 8s with 5 repetitions of the anti-expressions. They were tested with happy and disgust expressions with reduced expression strength. Adaptation stimuli were simulated with 2 facial identities and were shown either in forward or reverse time order. We found strong expression-related after-effects (increased and decreased recognition for matching and non-matching expression, respectively, p < 0.05, N=13). We investigated the influence of static vs. dynamic representations in the observed after-effect. The temporal order of the adapting stimuli does not have a significant influence on the strength of the observed after-effect. The analysis of the 2D optic flow patterns of adaptation and test stimuli rules out the possibility that the observed after-effects reflect classical low-level motion after effects.
Instead, the results seem compatible with the adaptation of neural representations of ‘snapshot keyframes’ [6] that arise during the presentation of dynamic facial expressions. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk07/abstract.php?_load_id=curio01 Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 10th Tübinger Wahrnehmungskonferenz (TWK 2007) en curioCCurio gieseMAGiese mbreidtMBreidt kleinermMKleiner hhbHHBülthoff poster 4566 Accumulation of object evidence from multiple senses NeuroImage 2007 6 36 Supplement 1 S109 http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Poster_HBM2007_noppeney_[0].pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Research Group Noppeney http://www.sciencedirect.com/science/article/pii/S1053811907002789 Biologische Kybernetik Max-Planck-Gesellschaft Chicago, IL, USA 13th Annual Meeting of the Organization for Human Brain Mapping (HBM 2007) en 10.1016/j.neuroimage.2007.03.045 unoppeUNoppeney ostwalddDOstwald kleinermMKleiner sebastianwernerSWerner poster 4815 High-level after-effects in the recognition of dynamic facial expressions Journal of Vision 2007 6 7 9 994 Strong high-level after-effects have been reported for the recognition of static faces (Webster et al. 1999; Leopold et al. 2001). Presentation of static ‘anti-faces’ biases the perception of neutral test faces temporarily towards perception of specific identities or facial expressions. Recent experiments have demonstrated high-level after-effects also for point-light walkers, resulting in shifts of perceived gender. Our study presents first results on after-effects for dynamic facial expressions. In particular, we investigated how such after-effects depend on facial identity and dynamic vs. static adapting stimuli. STIMULI: Stimuli were generated using a 3D morphable model for facial expressions based on laser scans. The 3D model is driven by facial motion capture data recorded with a VICON system. We recorded data of two facial expressions (Disgust and Happy) from an amateur actor. In order to create ‘dynamic anti-expressions’ the motion data was projected onto a basis of 17 facial action units. These units were parameterized by motion data obtained from specially trained actors, who are capable of executing individual action units according to FACS (Ekman 1978). Anti-expressions were obtained by inverting the vectors in this linear projection space. METHOD: After determining a baseline-performance for expression recognition, participants were adapted with dynamic anti-expressions or static adapting stimuli (extreme keyframes of same duration), followed by an expression recognition test. Test stimuli were Disgust and Happy with strongly reduced expression strength (corresponding to vectors of reduced length in linear projection space). Adaptation and test stimuli were derived from faces with same or different identities. RESULTS: Adaptation with dynamic anti-expressions resulted in selective after-effects: increased recognition for matching test stimuli (p < 0.05, N=13). Adaptation effects were significantly reduced for static adapting stimuli, and for different identities of adapting and test face. This suggests identity-specific neural representations of dynamic facial expressions.
http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.journalofvision.org/7/9/994/ Biologische Kybernetik Max-Planck-Gesellschaft Sarasota, FL, USA 7th Annual Meeting of the Vision Sciences Society (VSS 2007) en 10.1167/7.9.994 curioCCurio gieseMAGiese mbreidtMBreidt kleinermMKleiner hhbHHBülthoff poster SchwaningerKCHK2006 Recognition of emotion in moving and static composite faces Perception 2006 8 35 ECVP Abstract Supplement 212 We investigated the role of holistic processing for the perception of facial emotion and its interaction with non-rigid motion. Using an experimental paradigm by Young et al reported in 1987, we tested recognition performance of aligned and misaligned composite faces with six basic emotions (happiness, fear, disgust, surprise, anger, sadness). Stimuli were shown as 3-D animated realistic video sequences (moving condition) and as static peak expressions (static condition). The results (N=24) revealed that misaligned composites were better recognised than aligned composites, both for static and moving stimuli. When the two halves were aligned, a new emotion resembling each of the two originals seemed to emerge, suggesting holistic processing. This made it very difficult to identify the emotions from either half. When the top and bottom halves were misaligned horizontally (impairment of holistic processing), the two halves did fuse significantly less to create a new emotion, and the constituent halves remained identifiable. Whereas moving stimuli were better recognised than static faces, there was no interaction between motion and alignment. These results indicate that facial-expression processing is holistic in static and moving faces to a similar degree. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/35/1_suppl.toc St. Petersburg 29th European Conference on Visual Perception 10.1177/03010066060350S101 aschwanASchwaninger kleinermMKleiner SChiller-Glaus FHofer babsyBKnappmeyer poster 2809 Recognising Famous Gaits Perception 2004 9 33 ECVP Abstract Supplement 99 We are currently developing a semi-automatic system for the reconstruction of 3-D human movement data from complex natural scenes, such as movie clips. As part of this project we have collected a database of walking patterns from twenty well-known male actors. The goal of this initial study was to assess whether isolated 2-D motion cues (point-light walkers created via manual tagging) could provide sufficient information for the recognition of famous gaits. Previous research has indicated that familiar individuals (eg work colleagues) can be recognised as point-light figures. Does memory for famous individuals also include characteristic movement patterns? Observers were shown point-light animations depicting several step-cycles from different actors, filmed from approximately 3/4 view. As the animations were extracted from commercial footage, exact camera position, gait cycle (eg number of repeated steps), and extraneous behaviours (eg additional hand movements) could not be controlled. Animations were, however, approximately normalised for size, and the global translation cues were removed. Using both a direct 6-alternative face-to-gait matching test and a standard 2-alternative forced-choice task, we found levels of performance that did not differ from chance. 
Item analysis revealed that neither self-reported familiarity with the actors nor confidence ratings provided accurate predictors of performance. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2809.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/33/1_suppl.toc Biologische Kybernetik Max-Planck-Gesellschaft Budapest, Hungary 27th European Conference on Visual Perception en 10.1068/ecvp04a kleinermMKleiner qvuongQCVuong hhbHHBülthoff ianIMThornton poster 2014 Moving the Thatcher Illusion 2002 11 21 Inverting the eyes and the mouth within the facial context produces a bizarre facial expression when the face is presented upright but not when the face is inverted (Thatcher illusion, Thompson, 1980). In the present study we investigated whether this illusion is part-based or holistic and whether motion increases bizarreness. Static upright Thatcher faces were rated more bizarre than the eyes and mouth presented in isolation, suggesting an important role of context and holistic processing. As expected, inverted facial stimuli were perceived as much less bizarre. Interestingly, isolated parts were more bizarre than the whole thatcherized face when inverted. Adding motion to the smiling thatcherized faces increased bizarreness in all conditions (parts vs. whole, upright vs. inverted). These results were replicated in a separate experiment with talking instead of smiling faces and are discussed within an integrative model of face processing. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.opam.net/archive/opam2002/OPAM2002Abstracts.pdf Biologische Kybernetik Max-Planck-Gesellschaft Max Planck Institute for Biological Cybernetics Kansas City, KS, USA 10th Annual Workshop on Object Perception and Memory (OPAM 2002) en aschwanASchwaninger dwcDWCunningham kleinermMKleiner thesis 1321 Ein stereobasiertes Verfahren zum dreidimensionalen Tracking von Markern in menschlichen Gesichtern 2001 8 89 http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Biologische Kybernetik Max-Planck-Gesellschaft Eberhard-Karls-Universität Tübingen, Wilhelm-Schickard-Institut für Informatik Diplom kleinermMKleiner conference Kleiner2013 B1: Introduction to Matlab and PsychophysicsToolbox Perception 2013 8 25 42 ECVP Abstract Supplement 2 Psychtoolbox-3 is a cross-platform, free and open-source software toolkit for the Linux, MacOSX and Windows operating systems. It extends the GNU/Octave and Matlab programming environments with functionality that makes it relatively easy to conduct neuroscience experiments with a high level of flexibility, control and precision. It has a number of coping mechanisms to diagnose and compensate for common flaws found in computer operating systems and hardware. It also takes unique advantage of the programmability of modern graphics cards and of low-level features of other computer hardware, operating systems and open-source technology to simplify many standard tasks, especially for realtime generation and post-processing of dynamic stimuli. This tutorial aims to provide an introduction to the effective use of Psychtoolbox (see the illustrative usage sketch appended below).
However, participants of the tutorial are encouraged to state their interest in specific topics well ahead of time, so I can try to tailor large parts of the tutorial to the actual interests of the audience if there happen to be clusters of common wishes. Ideally this will be interactive rather than a lecture. Wishes can be posted to the issue tracker at GitHub ( https://github.com/kleinerm/Psychtoolbox-3/issues/new ) with the label [ecvp2013], or via e-mail to mario.kleiner.de@gmail.com with the subject line [ecvp2013ptb]. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Invited Lecture http://pec.sagepub.com/content/42/1_suppl.toc Bremen, Germany 36th European Conference on Visual Perception (ECVP 2013) 10.1177/03010066130420S101 kleinermMKleiner conference Kleiner2011 Fairy tales & horror stories: Common misconceptions and traps about use of computers for psychophysical testing 2011 10 12 10 This talk will describe some very common misconceptions that cognitive neuroscientists frequently express about the use of computers and related equipment for visual or auditory stimulus presentation, response collection and the timing precision and behaviour of computers and common operating systems in general. Some typical examples are assumptions about the suitability of LCD flat panels for timed and controlled visual stimulation, naive use of standard sound cards for timed auditory stimulation, and the use of keyboards and mice for reaction time measurements. The talk will try to point out solutions or remedies for some problems where available. The examples are based on an informal sampling of questions asked and misconceptions often encountered on the Psychtoolbox user forum. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Abstract Talk Heiligkreuztal, Germany 12th Conference of Junior Neuroscientists of Tübingen (NeNA 2011) kleinermMKleiner conference 5264 The prefrontal cortex accumulates object evidence through differential connectivity to the visual and auditory cortices 2008 7 9 189 118 To form categorical decisions about objects in our environment, the human brain accumulates noisy sensory information over time until a decisional threshold is reached. Combining fMRI and Dynamic Causal Modelling (DCM), we investigated how the brain accumulates evidence from the auditory and visual senses through distinct interactions amongst brain regions. In a visual selective attention paradigm, subjects categorized visual action movies while ignoring their accompanying soundtracks that were semantically congruent or incongruent. Both auditory and visual information could be intact or degraded. Reaction times as a marker for the time to decisional threshold accorded with random walk models of decision making. At the neural level, incongruent auditory sounds induced amplification of the task-relevant visual information in the occipito-temporal cortex. Importantly, only the left inferior frontal sulcus (IFS) showed the activation pattern of an accumulator region, i.e. (i) positive reaction-time effects and (ii) incongruency effects that were increased for unreliable (=degraded) visual and interfering reliable (=intact) auditory information, which, based on our DCM analysis, were mediated by increased forward connectivity from visual regions.
Thus, to form interpretations and decisions that guide behavioural responses, the IFS may accumulate multi-sensory evidence over time through dynamic weighting of its connectivity to auditory and visual regions. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Research Group Noppeney Abstract Talk http://imrf.mcmaster.ca/IMRF/2008/pdf/FullProgramIMRF08.pdf Biologische Kybernetik Max-Planck-Gesellschaft Max Planck Institute for Biological Cybernetics Hamburg, Germany 9th International Multisensory Research Forum (IMRF 2008) en unoppeUNoppeney ostwalddDOstwald sebastianwernerSWerner kleinermMKleiner conference 5490 What's new in Psychtoolbox-3? Perception 2007 8 36 ECVP Abstract Supplement 14 http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/ECVP2007-Kleiner-slides_5490[0].pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Abstract Talk http://pec.sagepub.com/content/36/1_suppl.toc Biologische Kybernetik Max-Planck-Gesellschaft Arezzo, Italy 30th European Conference on Visual Perception en 10.1177/03010066070360S101 kleinermMKleiner DBrainard DPelli conference ChillerGlausKHKS2006 Erkennung von emotionalen Gesichtsausdrücken in bewegten und statischen Gesichtern 2006 3 48 220 http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Abstract Talk https://www.teap.de/memory/Abstractband_48_2006_mainz.pdf Mainz, Germany 48. Tagung Experimentell Arbeitender Psychologen (TeaP 2006) SChiller-Glaus kleinermMKleiner FHofer babsyBKnappmeyer aschwanASchwaninger
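The Psychtoolbox-3 entries above (the ECVP 2013 tutorial and the NeNA 2011 timing talk) describe the toolkit only in prose. The following is a minimal usage sketch for Matlab/Octave, assuming a standard Psychtoolbox-3 installation; the display setup, stimulus, timings and variable names are illustrative assumptions and are not taken from any of the referenced abstracts.

  % Minimal Psychtoolbox-3 sketch: timed visual presentation plus keyboard response.
  PsychDefaultSetup(2);                          % unified key names, normalized 0-1 color range
  screenId = max(Screen('Screens'));             % pick the last (usually external) display
  win = PsychImaging('OpenWindow', screenId, 0.5);   % open a mid-grey fullscreen window
  ifi = Screen('GetFlipInterval', win);          % measured duration of one video refresh

  % Frame-accurate presentation: draw into the backbuffer, then schedule the flip
  % relative to the previous vertical blank timestamp.
  DrawFormattedText(win, '+', 'center', 'center', 1);
  vblFix = Screen('Flip', win);                  % fixation onset timestamp in seconds
  DrawFormattedText(win, 'X', 'center', 'center', 1);
  vblTarget = Screen('Flip', win, vblFix + 0.500 - 0.5 * ifi);  % target roughly 500 ms later

  % Response collection with GetSecs-based timestamps. Standard keyboards add their
  % own latency, so such reaction times are only approximate.
  [secs, keyCode] = KbWait(-1);                  % wait for any key on any keyboard
  fprintf('%s pressed after %.1f ms\n', KbName(find(keyCode, 1)), 1000 * (secs - vblTarget));

  sca;                                           % shorthand: close all onscreen windows

Scheduling the second flip half a refresh interval before the intended deadline is the usual Psychtoolbox idiom for hitting a specific video frame; as the NeNA 2011 abstract cautions, reaction times collected with an ordinary keyboard are limited by the input hardware's own latency.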