Project Leaders

 Dr. Isabelle Bülthoff
Phone: +49 7071 601-611
Fax: +49 7071 601-616
isabelle.buelthoff[at]tuebingen.mpg.de
 
 

RecCat Overview Poster


Five most recent publications

Bieg H-J, Chuang LL, Bülthoff HH and Bresciani J-P (September 2015) Asymmetric saccade reaction times to smooth pursuit. Experimental Brain Research 233(9): 2527–2538.
Fademrecht L, Barraclough NE, Bülthoff I and de la Rosa S (26 August 2015) Seeing actions in the fovea influences subsequent action recognition in the periphery. 38th European Conference on Visual Perception (ECVP 2015), Liverpool, UK.
Bülthoff I, Mohler B and Thornton IM (24 August 2015) Active and passive exploration of faces. 38th European Conference on Visual Perception (ECVP 2015), Liverpool, UK.
Zhao M, Bülthoff HH and Bülthoff I (July 2015) A shape-based account for holistic face processing. Journal of Experimental Psychology: Learning, Memory, and Cognition (accepted).
Chuang LL (July 2015) Error visualization and information-seeking behavior for air-vehicle control. In: Foundations of Augmented Cognition, 9th International Conference on Augmented Cognition (AC 2015), held as part of HCI International 2015. Springer International Publishing, Cham, Switzerland, 3–11.


All RecCat publications


 

Dynamic stimuli: Emotional expressions are generally easy to recognize in static displays, but they represent only about 10 percent of all human facial expressions. We have shown that the dynamic information conveyed in conversational expressions (e.g., “I don’t understand”) is essential for interpreting those expressions correctly [Kaulard]. Facial movements also help us recognize a person's identity: the facial movements of several persons were recorded and retargeted to avatar faces to investigate the role of motion idiosyncrasy in the recognition of facial identity [Dobs]. Using functional brain imaging, we found that the increased response to moving compared to static faces results both from the fluid facial motion itself and from the increase in static information [Schultz]. In the domain of dynamic objects, we showed that neural activity in certain brain regions varied with the degree to which a single moving dot was perceived as animate or inanimate, suggesting that biological-motion processing and the detection of animacy are implemented in different brain areas [Schultz].
 
Active observers: View generalization for faces was tested in a virtual setup that allowed participants to move freely and actively collect numerous views of the sitting and standing avatars they encountered. Even so, view generalization along the pitch axis did not occur for avatar faces, indicating that view dependency is not simply a consequence of passive viewing [I. Bülthoff]. Furthermore, data collected from active observers manipulating virtual objects revealed that observers rely on non-accidental properties of the objects (i.e., symmetry and elongation) to select the views they use for subsequent recognition [Chuang].
 
Cross-modal: Using haptic objects (plastic 3D face masks and seashells), we have shown that training in one sensory modality can transfer to another, resulting in improved discrimination performance [Dopjans, Gaißert]. This transfer indicates that vision and touch share similar object representations.
 
Other populations: In accordance with previous studies, we have shown that Korean observers fixate faces differently than their German counterparts. Interestingly, despite these differences in gaze behavior, we found no effect of own-race face expertise on accuracy when the tasks did not involve memorization of facial identity [I. Bülthoff]. We have begun to investigate a large population of congenital prosopagnosics. Preliminary results indicate that they are less sensitive to configural than to featural changes in faces that were modified parametrically; the use of a parametric stimulus space will allow a sensitive and precise quantitative description of their deficits [Esins]. Using parametric motion stimuli varying in complexity from simple translational motion to interacting shapes, we found that people with Autism Spectrum Disorder show deficits in assessing social interactions between moving objects [Schultz]. Faces were also used in a project addressing the controversy over whether faces are perceived categorically in terms of their sex. Using our face database and the morphable model to manipulate the face stimuli and obtain faces that were very similar in perceived masculinity and femininity, we showed that sex is not perceived categorically [Armann].
 


Last updated: Thursday, 23.10.2014