Project Leader

Dr. Isabelle Bülthoff
Phone: +49 7071 601-611
Fax: +49 7071 601-616
isabelle.buelthoff[at]tuebingen.mpg.de
 
 

RecCat Poster


Latest Publications

Chuang LL (November-5-2015) Invited Lecture: Beyond Steering in Human-Centered Closed-Loop Control, Institute for Neural Computation: INC Chalk Talk Series, San Diego, CA, USA.
CiteID: Chuang2015_3
Stangl M, Meilinger T, Pape A-A, Schultz J, Bülthoff HH and Wolbers T (October-19-2015): Triggers of entorhinal grid cell and hippocampal place cell remapping in humans, 45th Annual Meeting of the Society for Neuroscience (Neuroscience 2015), Chicago, IL, USA.
CiteID: StanglMPSBW2015
Fademrecht L, Bülthoff I, Barraclough NE and de la Rosa S (October-18-2015): The spatial extent of action sensitive perceptual channels decrease with visual eccentricity, 45th Annual Meeting of the Society for Neuroscience (Neuroscience 2015), Chicago, IL, USA.
CiteID: FademrechtBBd2015_2
Scheer M, Bülthoff HH and Chuang LL (October-2015) On the influence of steering on the orienting response. In: Trends in Neuroergonomics, 11. Berliner Werkstatt Mensch-Maschine-Systeme, Universitätsverlag der TU Berlin, Berlin, Germany, 24.
CiteID: ScheerBC2015_3
Chuang L (September-16-2015) Invited Lecture: Non-obtrusive measurements of attention and workload in steering, DSC 2015 Europe: Driving Simulation Conference & Exhibition, Tübingen, Germany.
CiteID: Chuang2015_2

Export as:
BibTeX, XML, pubman, Edoc, RTF

All RecCat Publications

For all publications by RecCat members, click here

 

Recognition and Categorization

This office scene illustrates several of the problems we must solve when recognizing objects. For example, we have no trouble recognizing the furniture in this scene as chairs or desks, even though some pieces are only partially visible and the chairs are seen from different viewpoints.
Depending on the task, we can recognize objects at different levels. For example, an animal can be recognized as a member of a group, e.g., as a "dog", or individually as "my dog Bashi". The goal of the RecCat group (Recognition and Categorization) is to identify the mechanisms underlying these two kinds of tasks, which we perform seemingly effortlessly and continuously.
 
Because of their social importance and the very high similarity among members of their class, faces are a particularly interesting object class. Moreover, they convey a wealth of static and dynamic visual information (e.g., age or facial expressions) that we, as natural experts, can easily extract and use for a variety of tasks. In short, faces are ideal stimuli for research on categorization and object recognition.
Currently, we mainly investigate (1) how static and dynamic properties of faces and objects are used in recognition tasks; (2) how the visual system interacts with other sensory modalities (audition and haptics) during recognition; and (3) how the context in which these stimuli are learned influences recognition. We address these questions using behavioral experiments, brain imaging, computer graphics, and computer animation.

Main research areas

The role of dynamic information for object and face recognition:
Emotional expressions are generally easy to recognize in static displays, but they represent only about 10 percent of all human facial expressions. In contrast, we have shown that the dynamic information conveyed in conversational expressions (e.g., "I don't understand") is essential for their correct interpretation [Kaulard, de la Rosa, Wallraven]. Another aspect of facial movements is how they help us recognize the identity of a person. We recorded the facial movements of several persons and retargeted them to avatar faces to investigate the role of motion idiosyncrasy in the recognition of facial identity [Dobs]. Using functional brain imaging, we found that the increased response to moving compared to static faces results from both the fluid facial motion itself and the increase in static information [Schultz]. In the domain of dynamic objects, we showed that neural activity in certain brain regions varied in parallel with the degree to which a single moving dot was perceived as animate or inanimate, suggesting that biological motion processing and the detection of animacy are implemented in different brain areas [Schultz].
 
The role of the observer's actions on perception:
View generalization for faces was tested in a virtual setup that allowed participants to move freely and actively collect numerous views of sitting and standing avatars they encountered. Nevertheless, view generalization along the pitch axis did not occur for avatar faces, indicating that view dependency is not simply a consequence of passive viewing [I. Bülthoff]. Furthermore, data collected from active observers manipulating virtual objects revealed that observers rely on non-accidental properties of the objects (i.e., symmetry and elongation) to select the views they use for subsequent recognition [Chuang, Wallraven].
 
Cross-modal perception:
Using haptic objects (plastic 3D face masks and sea shells), we have shown that training in one sensory modality can transfer to another modality, resulting in improved discrimination performance [Dopjans, Gaißert, Wallraven]. This transfer indicates that vision and touch share similar representations. Furthermore, we have shown for the first time that categorical perception also occurs in haptic shape perception: participants performed a classification and a discrimination task using a continuum of morphed plastic objects, and a brief training altered their haptic representations of shape, resulting in categorical perception.
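As an illustration of the categorical perception logic behind such experiments, the following minimal Python sketch contrasts discrimination accuracy for morph pairs that straddle the category boundary with pairs that fall within a category. All numbers and the boundary location are made up for illustration, not data from the study:

```python
import numpy as np

# Hypothetical discrimination accuracies for adjacent pairs along a
# 7-step morph continuum (steps 0..6); values are illustrative only.
pair_accuracy = np.array([0.62, 0.65, 0.81, 0.84, 0.66, 0.63])  # pairs (0,1)...(5,6)

# Suppose the classification task places the category boundary between
# steps 2 and 4, so pairs (2,3) and (3,4) straddle the boundary.
between = pair_accuracy[2:4]
within = np.concatenate([pair_accuracy[:2], pair_accuracy[4:]])

# Categorical perception predicts better discrimination across the
# category boundary than within a category.
print(f"between-category mean: {between.mean():.2f}")
print(f"within-category mean:  {within.mean():.2f}")
print("CP signature present" if between.mean() > within.mean() else "no CP signature")
```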
 
Populations with atypical perception:
In accordance with previous studies, we have shown that Korean observers fixate faces differently than their German counterparts; interestingly, despite these differences in gaze behavior, we found no effect of own-race expertise on accuracy when the tasks did not involve memorization of facial identity [I. Bülthoff]. We have started to investigate a large population of congenital prosopagnosics. Preliminary results indicate that they are less sensitive to configural than to featural changes in faces that were modified parametrically; the use of a parametric stimulus space will allow a sensitive and precise quantitative description of their deficits [Esins]. Using parametric motion stimuli varying in complexity from simple translational motion to interacting shapes, we found that people with Autism Spectrum Disorder show deficits in assessing social interactions between moving objects [Schultz]. Faces were also used in a project to settle the controversy about whether faces are perceived categorically in terms of their sex. Using our face database and the morphable model to create faces that are very similar in perceived masculinity and femininity, we showed that sex is not perceived categorically [Armann].
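A minimal sketch of how gaze behavior on faces can be quantified, assuming hypothetical rectangular regions of interest and fixation coordinates (all names and values are illustrative, not from the study):

```python
# Hypothetical regions of interest (x0, y0, x1, y1) on a face image
# and hypothetical fixation coordinates; values are illustrative only.
rois = {"eyes": (100, 160, 300, 220),
        "nose": (170, 220, 230, 290),
        "mouth": (150, 300, 250, 350)}
fixations = [(150, 190), (200, 250), (180, 320), (140, 200)]

def roi_proportions(fixations, rois):
    """Proportion of fixations landing inside each region of interest."""
    counts = dict.fromkeys(rois, 0)
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in rois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return {name: c / len(fixations) for name, c in counts.items()}

print(roi_proportions(fixations, rois))
```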


Methods and Stimuli

Example stimuli used in our group. A: static image from a conversational facial expression ("don't know"). B: Asian version of a Caucasian face. C: avatar face animated by the smile of a real person. D: 3D printout of a face from our 3D face database. E: artificial symmetric object. F: trajectory of a simulated fly. G: 3D printout of a parametrically controlled shell object. Stimuli B, C, D, E, F and G can be parametrically modified.
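To illustrate what "parametrically modified" means in practice, here is a minimal Python sketch assuming each stimulus is described by a coefficient vector (as in a morphable model); linear interpolation between two such vectors yields an evenly spaced morph continuum. The vectors and dimensionality are toy values:

```python
import numpy as np

def morph_continuum(params_a, params_b, n_steps=7):
    """Linearly interpolate between two stimulus parameter vectors
    (e.g., shape coefficients of a morphable model) to obtain an
    evenly spaced continuum from stimulus A to stimulus B."""
    weights = np.linspace(0.0, 1.0, n_steps)
    return [(1.0 - w) * params_a + w * params_b for w in weights]

# Toy example: two hypothetical 5-dimensional shape descriptions.
shape_a = np.array([1.0, 0.2, 0.0, 3.0, 0.5])
shape_b = np.array([0.4, 0.9, 1.0, 2.0, 0.1])
for i, p in enumerate(morph_continuum(shape_a, shape_b)):
    print(f"step {i}: {np.round(p, 2)}")
```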
Dynamic faces are recorded or created using the MPI VideoLab, as well as our in-house facial animation software developed by the Cognitive Engineering group [Curio].
 
Active behavior research uses both body-tracking technology (VICON MX) and immersive head-mounted visual displays.
 
Real volumetric objects for haptic and vision research are produced with 3D printing technology.
 
Virtual reality setups are used to present faces and objects embedded in scenes and allow active manipulations of objects.
 
Furthermore, classical psychophysical techniques, functional brain imaging [Siemens 3T at the Magnetic Resonance Center] and eye-tracking methods are used in the group's projects.
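As an example of such classical psychophysical techniques, the following sketch fits a logistic psychometric function to hypothetical classification responses along a morph continuum in order to estimate the category boundary. All data values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit

# Hypothetical classification data: proportion of "category B" responses
# at each of 7 morph levels (values are illustrative only).
levels = np.linspace(0.0, 1.0, 7)
p_b = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.96, 0.99])

def logistic(x, mu, sigma):
    # mu = estimated category boundary; sigma controls the slope
    return expit((x - mu) / sigma)

(mu, sigma), _ = curve_fit(logistic, levels, p_b, p0=[0.5, 0.1])
print(f"estimated boundary: {mu:.2f}, slope parameter: {sigma:.3f}")
```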

Collaborations

The Cognitive Neuroimaging group [Armann, Schultz], the Physiology of Cognitive Processes department [Schultz], the Cognitive Engineering group [Dobs, Chuang] and the PAVE group [I. Bülthoff] were major partners in the development and execution of many projects. Other collaborators are based outside the institute, e.g., Ian M Thornton (UK), Karin S Pilz (Switzerland), Nicole David (Germany) and Quoc Vuong (UK). Several projects of the group were part of the EU-funded project POETICON.

Publications

For a full list of publications by RecCat members, click here

Selected Publications

Gaissert N, Bülthoff HH and Wallraven C (September-2011) Similarity and categorization: From vision to touch Acta Psychologica 138(1) 219-230.
CiteID: GaissertBW2011
de la Rosa S, Choudhery RN and Chatziastros A (February-2011) Visual object detection, categorization, and identification tasks are associated with different time courses and sensitivities Journal of Experimental Psychology: Human Perception and Performance 37(1) 38-47.
CiteID: delaRosaCC2011
Gaissert N, Wallraven C and Bülthoff HH (September-2010) Visual and Haptic Perceptual Spaces Show High Similarity in Humans Journal of Vision 10(11:2) 1-20.
CiteID: 6643
Schultz J (March-2010) Brain Imaging: Decoding Your Memories Current Biology 20(6) R269-R271.
CiteID: 6402
Dopjans L, Wallraven C and Bülthoff HH (October-2009) Cross-Modal Transfer in Visual and Haptic Face Recognition IEEE Transactions on Haptics 2(4) 236-240.
CiteID: 5824
Armann R and Bülthoff I (July-2009) Gaze behavior in face comparison: The roles of sex, task, and symmetry Attention, Perception and Psychophysics 71(5) 1107-1126.
CiteID: 4689
Schultz J and Lennert T (May-2009) BOLD signal in intraparietal sulcus covaries with magnitude of implicitly driven attention shifts NeuroImage 45(4) 1314-1328.
CiteID: 5679
Schultz J and Pilz KS (April-2009) Natural facial motion enhances cortical responses to faces Experimental Brain Research 194(3) 465-475.
CiteID: 5678
Schultz J, Chuang L and Vuong QC (June-2008) A dynamic object-processing network: Metric shape discrimination of dynamic objects by activation of occipito-temporal, parietal and frontal cortex Cerebral Cortex 18(6) 1302-1313.
CiteID: 4686
Chuang L, Vuong QC, Thornton IM and Bülthoff HH (May-2006) Recognising novel deforming objects Visual Cognition 14(1) 85-88.
CiteID: 3770
Bülthoff I and Newell F (October-2004) Categorical perception of sex occurs in familiar but not unfamiliar faces Visual Cognition 11(7) 823-855.
CiteID: 2385

Export as:
BibTeX, XML, pubman, Edoc, RTF
Last updated: Monday, May 4, 2015