Recognition & Categorization

We can easily recognize and categorize objects at different levels, depending on task requirements. An animal can be recognized as belonging to a category such as “a dog” (categorization) or as “my dog Bashi” (identification). Among all categories of objects, faces constitute a very special class because of their social importance and their high within-category similarity. The RECCAT group therefore focuses mainly on the perception of faces, with an added interest in the perception of human bodies and other objects.

Goals

The long-standing goal of our group is to unravel the mechanisms underlying the recognition and categorization of faces, bodies and objects. Our research focuses on three main aspects: (1) investigating recognition and categorization not only in traditional settings (static images shown on a computer screen) but also in more naturalistic scenarios, where faces and human figures move or where observers actively explore virtual environments; (2) investigating perception in other populations, such as those with different cultural backgrounds (e.g. Koreans) or lacking specific skills (e.g. prosopagnosics), and more generally the role of expertise and cultural background in face recognition; (3) investigating the brain processes involved when we recognize or categorize faces, bodies or object surfaces.

Main research areas

Here we present some recent and ongoing projects that illustrate five of our research directions.

1. Perceiving static and animated faces and bodies

Visual periphery plays a more important role in daily life than merely triggering gaze saccades to events in our environment [Fademrecht].
We used life-size static and animated human avatars to investigate action recognition in central and far peripheral vision. Our results show that action recognition remains highly accurate even for actions presented far from fixation. In sum, our studies reveal that the recognition abilities of peripheral vision have been underestimated.
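As a rough illustration of how such eccentricity effects can be quantified, the sketch below fits a logistic decline to recognition accuracy. All numbers, the function form, and the parameter names are hypothetical placeholders, not the study's actual data or analysis.

```python
# Hypothetical sketch: summarizing action-recognition accuracy as a
# function of stimulus eccentricity with a logistic decline.
import numpy as np
from scipy.optimize import curve_fit

# Eccentricity in degrees of visual angle (hypothetical design values).
eccentricity = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)
# Proportion correct at each eccentricity (hypothetical, shallow decline).
accuracy = np.array([0.98, 0.97, 0.95, 0.93, 0.90, 0.86, 0.82])

def logistic_decline(x, floor, slope, midpoint):
    """Accuracy stays near ceiling centrally and falls toward `floor`
    around `midpoint` degrees of eccentricity."""
    return floor + (1.0 - floor) / (1.0 + np.exp(slope * (x - midpoint)))

params, _ = curve_fit(logistic_decline, eccentricity, accuracy,
                      p0=[0.5, 0.05, 90.0], maxfev=10000)
print("floor={:.2f}, slope={:.3f}, midpoint={:.0f} deg".format(*params))
```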

Moving faces are processed holistically [Zhao & Bülthoff].
Holistic processing denotes the tendency to perceive objects as indecomposable wholes. We tested participants for holistic processing with static and moving faces. Their responses demonstrate that both types of faces are perceived holistically.
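Holistic processing is typically measured with the composite paradigm (see research area 3 below). As a minimal sketch, assuming hypothetical accuracy values, the holistic-processing index is the interaction between congruency and alignment:

```python
# Hypothetical composite-task scoring: holistic processing appears as a
# congruency effect (congruent minus incongruent accuracy) that is larger
# when the two face halves are aligned than when they are misaligned.
mean_accuracy = {  # placeholder values, not measured data
    ("aligned", "congruent"): 0.90,
    ("aligned", "incongruent"): 0.74,
    ("misaligned", "congruent"): 0.88,
    ("misaligned", "incongruent"): 0.85,
}

def congruency_effect(alignment):
    return (mean_accuracy[(alignment, "congruent")]
            - mean_accuracy[(alignment, "incongruent")])

# The alignment-by-congruency interaction is the holistic-processing index.
composite_effect = congruency_effect("aligned") - congruency_effect("misaligned")
print(f"composite effect: {composite_effect:.2f}")  # larger -> more holistic
```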

Active exploration of avatars in virtual reality, with its concomitant dynamic experience of the faces to be remembered, leads to enhanced face recognition [Bülthoff et al].
We tested the face recognition performance of active and passive participants. Active participants explored a virtual room populated with avatars, while passive participants viewed dynamic or static renditions of these explorations. Both active participants and passive participants who viewed dynamic renditions of the to-be-learned faces displayed more robust face recognition.
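Recognition in such old/new memory tests is often summarized with the signal-detection measure d′. A minimal sketch, assuming hypothetical hit and false-alarm rates for the three viewing conditions:

```python
# Hypothetical d-prime comparison for an old/new face recognition test.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity: z(hit rate) minus z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Placeholder rates, not the study's results.
print(f"active exploration: {d_prime(0.85, 0.20):.2f}")
print(f"passive, dynamic:   {d_prime(0.80, 0.22):.2f}")
print(f"passive, static:    {d_prime(0.70, 0.30):.2f}")
```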

2. Multisensory representation of persons

Distinctive (out of the ordinary) voices can prime the recognition of their paired faces better than typical (ordinary) voices [Bülthoff & Newell].
We paired ordinary faces with distinctive or ordinary voices or sounds during a learning session. Afterwards, priming was found for voices but not for sounds, and distinctive voices were the best primes. Our findings suggest an early and specific convergence of voice and face information in the multisensory representation of persons.
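Priming in such designs is usually quantified as the response-time benefit for faces preceded by their learned voice relative to a control condition. A minimal sketch with hypothetical reaction times:

```python
# Hypothetical reaction times (ms) in a face-recognition task following
# different primes; priming = control RT minus primed RT.
import numpy as np

rt_control = np.array([640.0, 655, 630, 662, 648])            # no informative prime
rt_distinctive_voice = np.array([585.0, 600, 578, 606, 592])  # learned distinctive voice
rt_typical_voice = np.array([615.0, 628, 608, 633, 620])      # learned typical voice

print(f"priming by distinctive voices: {rt_control.mean() - rt_distinctive_voice.mean():.0f} ms")
print(f"priming by typical voices:     {rt_control.mean() - rt_typical_voice.mean():.0f} ms")
```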

3. Holistic processing of faces, bodies and objects

Static faces and line patterns are perceived holistically [Zhao et al]
Holistic processing, the tendency to perceive objects as indecomposable wholes, has long been viewed as a process specific to faces or objects of expertise. Our experiments show that unfamiliar line patterns are processed as holistically as faces, without any training.

Are faces, objects and bodies processed holistically in common brain areas? [Foster et al]
We used visual stimuli consisting of faces, line patterns and bodies in a typical test of holistic processing (the full composite task). Participants' brain activity was recorded while they performed this task. Region-of-interest and whole-brain analyses will be used to determine whether common brain areas are involved in holistic processing across these stimulus categories.

4. Perception of familiar and unfamiliar faces

Not all aspects of very familiar faces are represented equally precisely [Bülthoff & Zhao]
We modified personally familiar faces to investigate which aspects of familiar faces we remember best. Modifications of the race or sex of a familiar face were not discriminated as easily as modifications of its identity.
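Such modifications are typically produced by morphing. The sketch below illustrates the underlying linear blend in a vectorized face space; the 128-dimensional embeddings and morph levels are hypothetical, and real face morphs operate on registered 3D shape and texture models rather than raw vectors.

```python
# Hypothetical face-space morphing: a graded blend of face A toward face B.
import numpy as np

rng = np.random.default_rng(0)
face_a = rng.normal(size=128)  # familiar face A (placeholder embedding)
face_b = rng.normal(size=128)  # other identity B (placeholder embedding)

def morph(source, target, alpha):
    """Return a blend containing proportion `alpha` of `target`."""
    return (1.0 - alpha) * source + alpha * target

# Identity modifications of face A at increasing morph levels.
morphs = {alpha: morph(face_a, face_b, alpha) for alpha in (0.1, 0.2, 0.3, 0.4)}
```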

5. Tactile perception

Neural representations of perceived roughness [Kim et al]
Surfaces with five levels of roughness were presented to participants visually, by touch, or both, while their brain activity was recorded. Multi-voxel pattern analysis shows that roughness intensity could be decoded independently of the stimulus modality.
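A minimal sketch of such a decoding analysis with scikit-learn; the data shapes, labels, and cross-modal train/test split are placeholders, not the study's pipeline. Above-chance transfer from visual to tactile trials would indicate modality-independent coding of roughness:

```python
# Hypothetical multi-voxel pattern analysis: decode roughness level from
# trial-by-voxel activity patterns, within and across modalities.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_voxels = 200, 500
patterns = rng.normal(size=(n_trials, n_voxels))  # placeholder activity patterns
roughness = rng.integers(0, 5, size=n_trials)     # five roughness levels
visual = rng.integers(0, 2, size=n_trials) == 0   # True = visual, False = tactile

clf = LinearSVC(max_iter=10000)

# Within-modality decoding: cross-validated accuracy on visual trials.
scores = cross_val_score(clf, patterns[visual], roughness[visual], cv=3)
print(f"visual decoding accuracy: {scores.mean():.2f}")  # chance = 0.20

# Cross-modal decoding: train on visual trials, test on tactile trials.
clf.fit(patterns[visual], roughness[visual])
print(f"visual-to-tactile accuracy: {clf.score(patterns[~visual], roughness[~visual]):.2f}")
```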

Methods and Stimuli

Figure: Some of the stimuli used in our projects. A: Line patterns and faces in a composite task. B: Faces and bodies used in brain imaging. C: Virtual environment. D: Original face A and morphs of that face mixed with either person B or person C. E: Various surface stimuli probed with the hand to investigate tactile perception.

In addition to classical psychophysical techniques, we use functional brain imaging (Siemens 3T scanner at the Magnetic Resonance Center) and eye tracking to investigate the processes underlying recognition and categorization. The use of moving stimuli (dynamic faces can be recorded or created with the MPI VideoLab) and the combination of classical psychophysics with virtual reality setups, using body-tracking technology (VICON MX), immersive head-mounted visual displays and large screens, allows for more naturalistic test environments.

Collaborations

Collaborations within the Department
Betty Mohler (Space and Body Perception)

Stephan de la Rosa (Social and Spatial Cognition)

External Academic Collaborations 
