Fademrecht, L., Bülthoff, I., & de la Rosa, S. (2017). Recognizing the actions of others across the whole visual field is required for social interactions. In a previous study, we showed that recognition remains very good even when actions (e.g. waving) carried out by life-size avatars facing the observer are presented very far from the fovea (Fademrecht, Bülthoff, & de la Rosa, 2016). Here we explored whether this remarkable performance was owed to the avatars facing the observer, a condition that, according to some social cognitive theories (e.g. Schilbach et al., 2013), could activate different social perceptual processes than profile views. Participants therefore viewed a life-size stick-figure avatar that carried out motion-captured social actions (greeting actions: handshake, hugging, waving; attacking actions: slapping, punching and kicking) in frontal and profile view. Participants' task was to identify the actions as 'greeting' or 'attack' or to assess the emotional valence of the actions. While recognition accuracy for frontal and profile views did not differ, reaction times were in general significantly faster for profile views (i.e. the moving avatar was seen profile on) than for frontal views (i.e. the action was directed toward the observer). Our results suggest that the remarkably good action recognition performance in the visual periphery was not owed to a more socially engaging front-facing view. Although action recognition seems to depend on viewpoint, it remains remarkably accurate even far into the visual periphery. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Action recognition is viewpoint-dependent in the visual periphery."

Zhao, M., & Bülthoff, I. (2017, Epub ahead of print). Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time.
However, how facial movements affect one core aspect of face ability—holistic face processing—remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights into what underlies holistic face processing, how the sources of information supporting holistic face processing interact with each other, and why facial motion may affect face recognition and holistic face processing differently. Available at http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2017/ZhaoBulthoff_JEPLMC_2016.pdf (published). Title: "Holistic Processing of Static and Moving Faces."

Dobs, K., Bülthoff, I., & Schultz, J. (2016). Facial movements convey information about many social cues, including identity. However, how much information about a person's identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors.
In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that the variation in performance was due to differences in information content rather than in processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, pointing to different functional roles and processing mechanisms for different types of facial motion. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Identity information content depends on the type of facial movement."

Zhao, M., Bülthoff, H. H., & Bülthoff, I. (2016). Faces are processed holistically, so selective attention to one face part without any influence of the others often fails. In this study, three experiments investigated what type of facial information (shape or surface) underlies holistic face processing and whether generalization of holistic processing to nonexperienced faces requires extensive discrimination experience. Results show that facial shape information alone is sufficient to elicit the composite face effect (CFE), one of the most convincing demonstrations of holistic processing, whereas facial surface information is unnecessary (Experiment 1). The CFE is eliminated when faces differ only in surface but not shape information, suggesting that variation of facial shape information is necessary to observe holistic face processing (Experiment 2). Removing three-dimensional (3D) facial shape information also eliminates the CFE, indicating the necessity of 3D shape information for holistic face processing (Experiment 3).
Moreover, participants show similar holistic processing for faces with and without extensive discrimination experience (i.e., own- and other-race faces), suggesting that generalization of holistic processing to nonexperienced faces requires facial shape information but does not necessarily require further individuation experience. These results provide compelling evidence that facial shape information underlies holistic face processing. This shape-based account not only offers a consistent explanation for previous studies of holistic face processing, but also suggests a new ground, in addition to expertise, for the generalization of holistic processing to different types of faces and to nonface objects. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "A shape-based account for holistic face processing."

Fademrecht, L., Bülthoff, I., & de la Rosa, S. (2016). Recognizing whether the gestures of somebody mean a greeting or a threat is crucial for social interactions. In real life, action recognition occurs over the entire visual field. In contrast, much previous research on action recognition has focused on central vision. Here our goal was to examine what can be perceived about an action outside of foveal vision. Specifically, we probed the valence as well as first-level and second-level recognition of social actions (handshake, hugging, waving, punching, slapping, and kicking) at 0° (fovea/fixation), 15°, 30°, 45°, and 60° of eccentricity with dynamic (Experiment 1) and dynamic and static (Experiment 2) actions. To assess peripheral vision under conditions of good ecological validity, these actions were carried out by a life-size human stick figure on a large screen.
In both experiments, recognition performance was surprisingly high (more than 66% correct) up to 30° of eccentricity for all recognition tasks and declined nonlinearly with increasing eccentricity. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Action recognition in the visual periphery."

Zhao, M., Bülthoff, H. H., & Bülthoff, I. (2016). Holistic processing—the tendency to perceive objects as indecomposable wholes—has long been viewed as a process specific to faces or objects of expertise. Although current theories differ in what causes holistic processing, they share a fundamental constraint for its generalization: Nonface objects cannot elicit facelike holistic processing in the absence of expertise. Contrary to this prevailing view, here we show that line patterns with salient Gestalt information (i.e., connectedness, closure, and continuity between parts) can be processed as holistically as faces without any training. Moreover, weakening the saliency of Gestalt information in these patterns reduced holistic processing of them, which indicates that Gestalt information plays a crucial role in holistic processing. Therefore, holistic processing can be achieved not only via a top-down route based on expertise, but also via a bottom-up route relying merely on object-based information. The finding that facelike holistic processing can extend beyond the domains of faces and objects of expertise poses a challenge to current dominant theories. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Beyond Faces and Expertise: Facelike Holistic Processing of Nonface Objects in the Absence of Expertise."

Dahl, C. D., Rasch, M. J., Bülthoff, I., & Cheng, C.-C. (2016). A face recognition system ought to read out information about the identity, facial expression and invariant properties of faces, such as sex and race.
A current debate is whether separate neural units in the brain deal with these face properties individually or whether a single neural unit processes all aspects of faces in parallel. While the focus of studies has been directed toward the processing of identity and facial expression, little research exists on the processing of invariant aspects of faces. In a theoretical framework, we tested whether a system can deal with identity in combination with sex, race or facial expression using the same underlying mechanism. We used dimension reduction to describe how the representational face space organizes face properties when trained on different aspects of faces. When trained to learn identities, the system not only successfully recognized identities, but was also immediately able to classify sex and race, suggesting that no additional system for the processing of invariant properties is needed. However, training on identity was insufficient for the recognition of facial expressions, and vice versa. We provide a theoretical approach to the interconnection of invariant facial properties and the separation of variant and invariant facial properties. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Integration or separation in the processing of facial properties: a computational view."

Esins, J., Schultz, J., Stemper, C., Kennerknecht, I., & Bülthoff, I. (2016). Congenital prosopagnosia, the innate impairment in recognizing faces, is a very heterogeneous disorder with different phenotypical manifestations. To investigate the nature of prosopagnosia in more detail, we tested 16 prosopagnosics and 21 controls with an extended test battery addressing various aspects of face recognition. Our results show that prosopagnosics exhibited significant impairments in several face recognition tasks: impaired holistic processing (tested, among other tasks, with the Cambridge Face Memory Test (CFMT)) as well as reduced processing of configural information of faces.
This test battery also revealed some new findings. While controls recognized moving faces better than static faces, prosopagnosics did not exhibit this effect. Furthermore, prosopagnosics had significantly impaired gender recognition, which our study shows at the group level for the first time. There was no difference between groups in the automatic extraction of face identity information or in object recognition as tested with the Cambridge Car Memory Test. In addition, a methodological analysis of the tests revealed reduced reliability for holistic face processing tests in prosopagnosics. To our knowledge, this is the first study to show that prosopagnosics have a significantly reduced reliability coefficient (Cronbach's alpha) in the CFMT compared to controls. We suggest that compensatory strategies employed by the prosopagnosics might be the cause of the wide variety of response patterns revealed by the reduced test reliability. This finding raises the question whether classical face tests measure the same perceptual processes in controls and prosopagnosics. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Face Perception and Test Reliabilities in Congenital Prosopagnosia in Seven Tests."

Bülthoff, I., & Newell, F. N. (2015). Several studies have provided evidence in favour of a norm-based representation of faces in memory. However, such models have hitherto failed to take account of how other person-relevant information affects face recognition performance. Here we investigated whether distinctive or typical auditory stimuli affect the subsequent recognition of previously unfamiliar faces and whether the type of auditory stimulus matters. In this study, participants learned to associate either unfamiliar distinctive and typical voices or unfamiliar distinctive and typical sounds with unfamiliar faces.
The results indicated that recognition performance was better for faces previously paired with distinctive voices than with typical voices, but we failed to find any benefit on face recognition when the faces had previously been associated with distinctive sounds. These findings possibly point to an expertise effect, as faces are usually associated with voices. More importantly, they suggest that visual memory for faces can be modified by the perceptual quality of related vocal information and, more specifically, that facial distinctiveness can be of a multi-sensory nature. These results have important implications for our understanding of the structure of memory for person identification. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Distinctive voices enhance the visual recognition of unfamiliar faces."

Zhao, M., Hayward, W. G., & Bülthoff, I. (2014). Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring OREs in a set of face recognition tasks and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs.
Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Holistic processing, contact, and the other-race effect in face recognition."

Lee, I.-S., Lee, A.-R., Lee, H., Park, H.-J., Chung, S.-Y., Wallraven, C., Bülthoff, I., & Chae, Y. (2014). Acne vulgaris is a common inflammatory disease that manifests on the face and affects appearance. In general, facial acne has a wide-ranging negative impact on the psychosocial functioning of acne sufferers and leaves physical and emotional scars. In the present study, we investigated whether patients with acne vulgaris demonstrate enhanced psychological bias when assessing the attractiveness of faces with acne symptoms and whether they devote greater selective attention to acne lesions than do acne-free (control) individuals. Participants viewed images of faces under two different skin (acne vs. acne-free) and emotional facial expression (happy and neutral) conditions. They rated the attractiveness of the faces, and the time spent fixating on the acne lesions was recorded with an eye tracker. We found that the gap in perceived attractiveness between acne and acne-free faces was greater for acne sufferers. Furthermore, patients with acne fixated longer on facial regions exhibiting acne lesions than did control participants, irrespective of the facial expression depicted. In summary, patients with acne have a stronger attentional bias for acne lesions and focus more on the skin lesions than do those without acne.
Clinicians treating the skin problems of patients with acne should consider these psychological and emotional scars. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Psychological distress and attentional bias toward acne lesions in patients with acne."

Esins, J., Schultz, J., Wallraven, C., & Bülthoff, I. (2014). Congenital prosopagnosia, an innate impairment in recognizing faces, and the other-race effect, a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants and German controls, in three different tasks involving faces and objects. First, we tested all participants on the Cambridge Face Memory Test, in which they had to recognize Caucasian target faces in a three-alternative forced-choice task. German controls performed better than Koreans, who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here, prosopagnosics performed worse than participants in the other two groups only when tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants.
Importantly, our results suggest that different processing impairments underlie the other-race effect and congenital prosopagnosia. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?"

Esins, J., Schultz, J., Bülthoff, I., & Kennerknecht, I. (2014). A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time a sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia, the inability to recognize other people by their face, has an extensive negative impact on everyday life. It has a high prevalence of about 2.5%. Monosaccharides are known to have a positive impact on cognitive performance. Here, we report the case of a prosopagnosic woman for whom the daily intake of 5 g of galactose resulted in a remarkable improvement of her lifelong face blindness, along with an improved sense of orientation and more vivid mental imagery. All these improvements vanished after discontinuing galactose intake. The self-reported effects of galactose were wide-ranging and remarkably strong but could not be reproduced in 16 other prosopagnosics tested. Indications of heterogeneity within prosopagnosia have been reported; this could explain the difficulty of finding similar effects in other prosopagnosics. Detailed analyses of the effects of galactose in prosopagnosia might give more insight into the effects of galactose on human cognition in general.
Galactose is cheap and easy to obtain; therefore, a systematic test of its positive effects on other cases of congenital prosopagnosia may be warranted. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Galactose uncovers face recognition and mental images in congenital prosopagnosia: The first case report."

Zhao, M., Hayward, W. G., & Bülthoff, I. (2014). Memory for own-race faces is generally better than memory for other-race faces. This other-race effect (ORE) in face memory has been attributed to differences in contact, holistic processing, and motivation to individuate faces. Since most studies demonstrate the ORE with participants learning and recognizing static, single-view faces, it remains unclear whether the ORE generalizes to different face learning conditions. Using an old/new recognition task, we tested whether face format at encoding modulates the ORE. The results showed a significant ORE when participants learned static, single-view faces (Experiment 1). In contrast, the ORE disappeared when participants learned rigidly moving faces (Experiment 2). Moreover, learning faces displayed from four discrete views produced the same results as learning rigidly moving faces (Experiment 3). Contact with other-race faces was correlated with the magnitude of the ORE. Nonetheless, the absence of the ORE in Experiments 2 and 3 cannot be readily explained by either more frequent contact with other-race faces or stronger motivation to individuate them.
These results demonstrate that the ORE is sensitive to face format at encoding, supporting the hypothesis that the relative involvement of holistic and featural processing at encoding mediates the ORE observed in face memory. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Face format at encoding affects the other-race effect in face memory."

Brielmann, A. A., Bülthoff, I., & Armann, R. (2014). Race categorization of faces is a fast and automatic process and is known to affect further face processing profoundly, even at the earliest stages. Whether the processing of own- and other-race faces relies on different facial cues, as indicated by diverging viewing behavior, is much debated. We therefore aimed to investigate two open questions in our study: (1) Do observers consider information from distinct facial features informative for race categorization, or do they prefer to gain global face information by fixating the geometrical center of the face? (2) Does the fixation pattern, or, if facial features are considered relevant, do these features, differ between own- and other-race faces? We used eye tracking to test where European observers look when viewing Asian and Caucasian faces in a race categorization task. Importantly, in order to disentangle centrally located fixations from those towards individual facial features, we presented faces in frontal, half-profile and profile views. We found that observers showed no general bias towards looking at the geometrical center of faces, but rather directed their first fixations towards distinct facial features, regardless of face race. However, participants looked at the eyes more often in Caucasian faces than in Asian faces, and there were significantly more fixations to the nose for Asian compared to Caucasian faces. Thus, observers rely on information from distinct facial features rather than on facial information gained by centrally fixating the face.
To what extent specific features are looked at is determined by the face's race. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Looking at faces from different angles: Europeans fixate different features in Asian and Caucasian faces."

Dobs, K., Bülthoff, I., Breidt, M., Vuong, Q. C., Curio, C., & Schultz, J. (2014). A great deal of perceptual and social information is conveyed by facial motion. Here, we investigated observers' sensitivity to the complex spatio-temporal information in facial expressions and which cues they use to judge the similarity of these movements. We motion-captured four facial expressions and decomposed them into time courses of semantically meaningful local facial actions (e.g., eyebrow raise). We then generated approximations of the time courses which differed in the amount of information about the natural facial motion they contained, and used these and the original time courses to animate an avatar head. Observers chose which of two animations based on approximations was more similar to the animation based on the original time course. We found that observers preferred animations containing more information about the natural facial motion dynamics. To explain observers' similarity judgments, we developed and used several measures of objective stimulus similarity. The time course of facial actions (e.g., onset and peak of eyebrow raise) explained observers' behavioral choices better than image-based measures (e.g., optic flow). Our results thus revealed observers' sensitivity to changes of natural facial dynamics.
Importantly, our method allows a quantitative explanation of the perceived similarity of dynamic facial expressions, which suggests that sparse but meaningful spatio-temporal cues are used to process facial motion. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Quantifying human sensitivity to spatio-temporal information in dynamic faces."

Michel, C., Rossion, B., Bülthoff, I., Hayward, W. G., & Vuong, Q. C. (2013). Faces from another race are generally more difficult to recognize than faces from one's own race. However, faces provide multiple cues for recognition, and the relative contributions of these cues to this "other-race effect" remain unknown. In the current study, we used three-dimensional laser-scanned head models which allowed us to independently manipulate two prominent cues for face recognition: facial shape morphology and facial surface properties (texture and colour). In Experiment 1, Asian and Caucasian participants implicitly learned a set of Asian and Caucasian faces that had both shape and surface cues to facial identity. Their recognition of these encoded faces was then tested in an old/new recognition task. For these face stimuli, we found a robust other-race effect: both groups were more accurate at recognizing own-race than other-race faces. Having established the other-race effect, in Experiment 2 we provided only shape cues for recognition, and in Experiment 3 we provided only surface cues for recognition. Caucasian participants continued to show the other-race effect when only shape information was available, whereas Asian participants showed no effect. When only surface information was available, there was a weak pattern for the other-race effect in Asians. Performance was poor in this latter experiment, so this pattern needs to be interpreted with caution.
Overall, these findings suggest that Asian and Caucasian participants rely differently on shape and surface cues to recognize own-race faces, and that they continue to use the same cues for other-race faces, which may be suboptimal for these faces. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "The contribution of shape and surface information in the other-race face effect."

Zhao, M., & Bülthoff, I. (2013). Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "The other-race effect in face recognition is sensitive to face format at encoding."

Gaissert, N., Waterkamp, S., Fleming, R. W., & Bülthoff, I. (2012). Categorization and categorical perception have been extensively studied, mainly in vision and audition. In the haptic domain, our ability to categorize objects has also been demonstrated in earlier studies. Here we show for the first time that categorical perception also occurs in haptic shape perception. We generated a continuum of complex shapes by morphing between two volumetric objects. Using similarity ratings and multidimensional scaling, we ensured that participants could haptically discriminate all objects equally well. Next, we performed classification and discrimination tasks. After a short training with the two shape categories, both tasks revealed categorical perception effects. Training led to between-category expansion, resulting in higher discriminability of physical differences between pairs of stimuli straddling the category boundary. Thus, even brief training can alter haptic representations of shape.
This suggests that the weights attached to various haptic shape features can be changed dynamically in response to top-down information about class membership. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Haptic Categorical Perception of Shape."

Bülthoff, I. (2012). Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Review: L'empreinte Des Sens."

Armann, R., & Bülthoff, I. (2012). Categorical perception (CP) is a fundamental cognitive process that enables us to sort similar objects in the world into meaningful categories with clear boundaries between them. CP has been found for high-level stimuli like human faces, more precisely, for the perception of face identity, expression and ethnicity. For sex, however, which represents another important and biologically relevant dimension of human faces, results have been equivocal so far. Here, we reinvestigate CP for sex using newly created face stimuli to control two factors that, in our opinion, might have influenced the results of earlier studies. Our new stimuli are (a) derived from single face identities, so that changes of sex are not confounded with changes of identity information, and (b) "normalized" in their degree of maleness and femaleness, to counteract natural variations in the perceived masculinity and femininity of faces that might obstruct evidence of categorical perception. Despite careful normalization, we did not find evidence of CP for sex using classical test procedures, unless participants were specifically familiarized with the face identities before testing. These results support the single-route hypothesis, stating that sex and identity information in faces are not processed in parallel, in contrast to what was suggested in the classical Bruce and Young model of face perception.
Interestingly, our participants also showed a consistent bias, before and after perceptual normalization of the male–female range of the test morph continua, to judge faces as male rather than female. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Male and female faces are only perceived categorically when linked to familiar identities – And when in doubt, he is a male."

Armann, R., & Bülthoff, I. (2009). Knowing where people look on a face provides an objective insight into the information entering the visual system and into the cognitive processes involved in face perception. In the present study, we recorded eye movements of human participants while they compared two faces presented simultaneously. Observers' viewing behavior and performance were examined in two tasks of parametrically varying difficulty, using two types of face stimuli (sex morphs and identity morphs). The frequency, duration, and temporal sequence of fixations on previously defined areas of interest in the faces were analyzed. As expected, viewing behavior and performance varied with difficulty. Interestingly, observers compared predominantly the inner halves of the face stimuli, a result inconsistent with the general left-hemiface bias reported for single faces. Furthermore, fixation patterns and performance differed between tasks, independently of stimulus type. Moreover, we found differences in male and female participants' viewing behaviors, but only when the sex of the face stimuli was task relevant. Available at http://www.kyb.tuebingen.mpg.de/ (published). Title: "Gaze behavior in face comparison: The roles of sex, task, and symmetry."

Bülthoff, I., & Newell, F. (2004). We investigated whether male and female faces are discrete categories at the perceptual level and whether familiarization plays a role in the categorical perception of sex.
We created artificial sex continua between male and female faces using a 3‐D morphing algorithm and used classical categorization and discrimination tasks to investigate categorical perception of sex. In Experiments 1 and 2, 3‐D morphs were computed between individual male and female faces. In Experiments 3 and 4, we used face continua in which only the sex of the facial features changed, while the identity characteristics of the facial features remained constant. When the faces were unfamiliar (Experiments 1 and 3), we failed to find evidence for categorical perception of sex. In Experiments 2 and 4, we familiarized participants with the individual face images by instructing participants to learn the names of the individuals in the endpoint face images (Experiment 2) or to classify face images along a continuum as male or female using a feedback procedure (Experiment 4). In both these experiments we found evidence for a categorical effect for sex after familiarization. Our findings suggest that despite the importance of face perception in our everyday world, sex information present in faces is not naturally perceived categorically. Categorical perception of sex was only found after training with the face stimulus set. 
Our findings have implications for functional models of face processing which suggest two independent processing routes, one for facial expression and one for identity: We propose that sex perception is closely linked with the processing of facial identity.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/categorical_perception_of_sex_occurs_in_familiar_but_not_infamiliar_faces_2385.pdfpublished32Categorical perception of sex occurs in familiar but not unfamiliar faces.15017154221843SEdelmanHHBülthoffIBülthoff1999-01-00112107123To explore the nature of the representation space of 3D objects, we studied human performance in forced-choice categorization of objects composed of four geon-like parts emanating from a common center. Two categories were defined by prototypical objects, distinguished by qualitative properties of their parts (bulging vs waist-like limbs). Subjects were trained to discriminate between the two prototypes (shown briefly, from a number of viewpoints, in stereo) in a 1-interval forced-choice task, until they reached a 90% correct-response performance level. After training, in the first experiment, 11 subjects were tested on shapes obtained by varying the prototypical parameters both orthogonally (ORTHO) and in parallel (PARA) to the line connecting the prototypes in the parameter space. For the eight subjects who performed above chance, the error rate increased with the ORTHO parameter-space displacement between the stimulus and the corresponding prototype; the effect of the PARA displacement was weaker. Thus, the parameter-space location of the stimuli mattered more than the qualitative contrasts, which were always present. To find out whether both prototypes or just the nearest one to the test shape influenced the decision, in the second experiment we varied the similarity between the categories. 
Specifically, in the test stage trials the distance between the two prototypes could assume one of three values (FAR, INTERMEDIATE, and NEAR). For the 13 subjects who performed above chance, the error rate (on physically identical stimuli) in the NEAR condition was higher than in the other two conditions. The results of the two experiments contradict the prediction of theories that postulate exclusive reliance on qualitative contrasts, and support the notion of a representation space in which distances to more than one reference point or prototype are encoded (Edelman, 1998).nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf184.pdfpublished16Effects of parametric manipulation of inter-stimulus similarity on 3D object categorization15017154221483IBülthoffHHBülthoffPSinha1998-07-0031254257The interaction between depth perception and object recognition has important implications for the nature of mental object representations and models of hierarchical organization of visual processing. It is often believed that the computation of depth influences subsequent high-level object recognition processes, and that depth processing is an early vision task that is largely immune to 'top-down' object-specific influences, such as object recognition. Here we present experimental evidence that challenges both these assumptions in the specific context of stereoscopic depth-perception. We have found that observers' recognition of familiar dynamic three- dimensional (3D) objects is unaffected even when the objects' depth structure is scrambled, as long as their two-dimensional (2D) projections are unchanged. Furthermore, the observers seem perceptually unaware of the depth anomalies introduced by scrambling. 
We attribute the latter result to a top-down recognition-based influence whereby expectations about a familiar object's 3D structure override the true stereoscopic information.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf148.pdfpublished3Top-down influences on stereoscopic depth-perception15017154225763DKerstenDCKnillPMamassianIBülthoff1996-01-00656037931nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-31Illusory motion from shadows15017154228333HHBülthoffIBülthoff1987-03-001407152158Movement detection is one of the most elementary visual computations performed by vertebrates as well as invertebrates. However, comparatively little is known about the biophysical mechanisms underlying this computation. It has been proposed on both physiological1.8.21 and theoretical2.15.23 grounds that inhibition plays a crucial role in the directional selectivity of elementary movement detectors (EMDs). For the first time, we have studied electrophysiological and behavioral changes induced in flies after application of picrotoxinin, an antagonist of GABA. The results show that inhibitory interactions play an important role in movement detection in flies. Furthermore, our behavioral results suggest that the computation of object position is based primarily on movement detection.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf833.pdfpublished6GABA-antagonist inverts movement and object detection in flies15017154238233HHBülthoffIBülthoff1987-02-00555313320The optomotor following response, a behavior based on movement detection was recorded in the fruitflyDrosophila melanogaster before and after the injection of picrotoxinin, an antagonist of the inhibitory neurotransmitter GABA. The directional selectivity of this response was transiently abolished or inverted after injection. 
This result is in agreement with picrotoxinin-induced modifications observed in electrophysiological activity of direction-selective cells in flies (Bülthoff and Schmid 1983; Schmid and Bülthoff, in preparation). Furthermore, walking and flying flies treated with picrotoxinin followed motion from back to front more actively, instead of from front to back as in normal animals. Since the difference in the responses to front-to-back and back-to-front motions is proposed to be the basis of fixation behavior in flies (Reichardt 1973), our results support this notion and are inconsistent with schemes explaining fixation by alternative mechanisms.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/combining_neuropharmacology_and_behavior_to_study_motion_detection_in_flies_823.pdfpublished7Combining Neuropharmacology and Behavior to Study Motion Detection in Flies150171542311283IBülthoff1986-03-002158195202Autoradiographs of the brains of the visual mutants outer rhabdomeres absent JK84 (ora), small optic lobes KS58 (KS58) and no object fixation EB12 (B12) have been obtained by the deoxyglucose method. The patterns of metabolic activity in the optic lobes of the visually stimulated mutants are compared with that of similarly stimulated wildtype (WT) flies, which was described in Part I of this work (Buchner et al. 1984b).
In the mutant KS58 the optomotor following response to movement is nearly normal despite a 40–45% reduction of volume in the visual neuropils, medulla and lobula complex. In B12 flies the volume of these neuropils and the optomotor response are reduced. In autoradiographs of both mutants the pattern of neuronal activity induced by stimulation with moving gratings does not differ substantially from that in the WT. This suggests that only neurons irrelevant to movement detection are affected by the mutation. However, in the lobula plate of some KS58 flies and in the second chiasma of all B12 flies, the pattern of metabolic activity differs from that observed in WT flies. Up to now, no causal relation has been found between the modifications described in behaviour or anatomy and those observed in the labelling of these mutants.
In the ommatidia of ora flies the outer rhabdomeres are lacking while the central photoreceptors appear to be normal. Stimulus-specific labelling is absent in the visual neuropil of these mutants stimulated with movement or flicker. This result underlines the importance of the outer rhabdomeres for visual tasks, especially for movement detection.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Deoxyglucose mapping of nervous activity induced in Drosophila brain by visual movement. 3. Outer rhabdomeres absent JK84, small optics lobes KS58 and no object fixation EB12, visual mutants.150171542311293VRodriguesIBülthoff1985-05-003-413183190High resolution of [3H]2-deoxyglucose labelling was obtained in autoradiographs of Drosophila brains after freeze-substitution in anhydrous acetone at −76°C. This method was applied to preparations which received visual, olfactory and mechanosensory stimulation. The autoradiographs were compared to those obtained after freeze-drying. Freeze-substitution, which has proved to be technically simple, rapid and inexpensive, yields a good quality of tissue preservation and hence is recommended for tissue dehydration prior to autoradiography.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Freeze-substitution of Drosophila heads for subsequent 3H-2-deoxyglucose autoradiography150171542311273IBülthoffEBuchner1985-01-0011562534The pattern of visually induced local metabolic activity in the optic lobes of two structural mutants of Drosophila melanogaster is compared with the corresponding wildtype pattern which has been reported in Part I of this work (Buchner et al. 1984b). Individual optomotor-blind H31 (omb) flies lacking normal giant HS-neurons were tested behaviourally, and those with strongly reduced responses to visual movement were processed for 3H-deoxyglucose autoradiography. The distribution of metabolic activity in the optic lobes of omb apparently does not differ substantially from that found in wildtype.
In the mutant lobula plate-less N684 (lop) the small rudiment of the lobula plate, which lacks many small-field input neurons, does not show any stimulus-specific labelling. The data provide further support for the hypothesis that small-field input neurons to the lobula plate are the cellular substrate of the direction-specific labelling in Drosophila (see Buchner et al. 1984b).nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9Deoxyglucose mapping of nervous activity induced in Drosophila brain by visual movement. 2. Optomotor blind H31 and lobula plate-less N684 visual mutants.150171542311263EBuchnerSBuchnerIBülthoff1984-07-004155471483Local metabolic activity was mapped in the brain of Drosophila by the radioactive deoxyglucose technique. The distribution of label in serial autoradiographs allows us to draw the following conclusions concerning neuronal processing of visual movement information in the brain of Drosophila.
1. The visual stimuli used (homogeneous flicker, moving gratings, reversing contrast gratings) cause only a small increase in metabolic activity in the first visual neuropil (lamina).
2. In the second visual neuropil (medulla) at least four layers respond to visual movement and reversing contrast gratings by increased metabolic activity; homogeneous flicker is less effective.
3. With the current autoradiographic resolution (2–3 μm), no directional selectivity can be detected in the medulla.
4. In the lobula, the anterior neuromere of the third visual neuropil, movement-specific activity is observed in three layers, two of which are more strongly labelled by ipsilateral front-to-back than by back-to-front movement.
5. In its posterior counterpart, the lobula plate, four movement-sensitive layers can be identified in which label accumulation specifically depends on the direction of the movement: Ipsilateral front-to-back movement labels a superficial anterior layer, back-to-front movement labels an inner anterior layer, upward movement labels an inner posterior layer and downward movement labels a superficial posterior layer.
6. A considerable portion of the stimulus-enhanced labelling of medulla and lobula complex is restricted to those columns which connect to the stimulated ommatidia. This retinotopic distribution of label suggests the involvement of movement-sensitive small-field neurons.
7. Certain axonal profiles connecting the lobula plate and the lateral posterior protocerebrum are labelled by ipsilateral front-to-back movement. Presumably different structures in the same region are labelled by ipsilateral downward movement. Conspicuously labelled foci and commissures in the central brain cannot yet be associated with a particular stimulus.
The results are discussed in the light of present anatomical and physiological knowledge of the visual movement detection system of flies.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published12Deoxyglucose mapping of nervous activity induced in Drosophila brain by visual movement. 1. Wildtype15017154232517MAPavlovaANSokolovIBülthoffQuebec, Canada1998-08-00314319nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5Prime-orientation dependence in detection of camouflaged biological motion15017154223447MAPavlovaANSokolovIBülthoffPont-à-Mousson, France1998-07-006468nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published4Recovery of a priori known structure from biological motion1501715422BulthoffALB20152IBülthoffRGMArmannRKLeeHHBülthoffSpringerDordrecht, The Netherlands2015-00-00153165The other-race effect refers to the observation that we perform better in tasks involving faces of our own race compared to faces of a race we are not familiar with. This is especially interesting as from a biological perspective, the category “race” does in fact not exist (Cosmides L, Tooby J, Krurzban R, Trends Cogn Sci 7(4):173–179, 2003); visually, however, we do group the people around us into such categories. Usually, the other-race effect is investigated in memory tasks where observers have to learn and subsequently recognize faces of individuals of different races (Meissner CA, Brigham JC, Psychol Public Policy Law 7(1):3–35, 2001) but it has also been demonstrated in perceptual tasks where observers compare one face to another on a screen (Walker PM, Tanaka J, Perception 32(9):1117–1125, 2003). In all tasks (and primarily for technical reasons) the test faces differ in race and identity. To broaden our general understanding of the effect that the race of a face has on the observer, in the present study, we investigated whether an other-race effect is also observed when participants are confronted with faces that differ only in ethnicity but not in identity. 
To that end, using Asian and Caucasian faces and a morph algorithm (Blanz V, Vetter T, A morphable model for the synthesis of 3D faces. In: Proceedings of the 26th annual conference on Computer graphics and interactive techniques – SIGGRAPH’99, pp 187–194, 1999), we manipulated each original Asian or Caucasian face to generate face “race morphs” that shared the same identity but whose race appearance was manipulated stepwise toward the other ethnicity. We presented each Asian or Caucasian face pair (original face and a race morph) to Asian (South Korea) and Caucasian (Germany) participants who had to judge which face in each pair looked “more Asian” or “more Caucasian”. In both groups, participants did not perform better for same-race pairs than for other-race pairs. These results point to the importance of identity information for the occurrence of an other-race effect.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published12The Other-Race Effect Revisited: No Effect for Faces Varying in Race Only150171542238122IBülthoffFNNewellElsevierAmsterdam, Netherlands2006-10-00315325Although the perception of our world is experienced as effortless, the processes that underlie object recognition in the brain are often difficult to determine. In this article we review the effects of familiarity on the recognition of moving or static objects. In particular, we concentrate on exemplar-level stimuli such as walking humans, unfamiliar objects and faces. We found that the perception of these objects can be affected by their familiarity; for example the learned view of an object or the learned dynamic pattern can influence object perception. Deviations in the viewpoint from the familiar viewpoint, or changes in the temporal pattern of the objects can result in some reduction of efficiency in the perception of the object. Furthermore, more efficient sex categorization and cross-modal matching was found for familiar than for unfamiliar faces. 
In sum, we find that our perceptual system is organized around familiar events and that perception is most efficient with these learned events.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/Visual%20Perception_Part_1_315-325_middle_3812.pdfpublished10The role of familiarity in the recognition of static and dynamic objects150171542233512IBülthoffHHBülthoffHogrefeGöttingen, Germany2006-00-00165172nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/Handbuch_der_allgemeinen_Psychologie_165-172_middle_3351.pdfpublished7Objektwahrnehmung150171542211322IBülthoffHHBülthoffOxford University PressNew York, NY, USA2003-00-00146172In this chapter we will review experiments using both explicit and implicit tasks to investigate object recognition using familiar objects (faces), unusual renderings of familiar objects (point-light walker), and novel scenes. While it is unlikely that participants would have already seen the particular renderings of familiar objects used in an experiment, they have definitely seen similar objects. For this reason, unfamiliar objects are used in many experiments to circumvent the problem of uncontrolled variations in prior exposure to objects. Another reason for using unfamiliar objects is that they allow us precise control over the types of features that are available for discrimination. How our visual system represents familiar and unfamiliar three-dimensional objects for the purpose of recognition is a difficult and passionately discussed issue. At the theoretical level a key question that any representational scheme has to address is how much the internal model depends on the viewing parameters. 
We will present 2 types of models regarding this issue and also address the question of whether the recognition process is more analytic or more holistic.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published26Image-Based Recognition of Biological Motion, Scenes, and Objects150171542211302INicodEditions du Centre National de la Recherche ScientifiqueParis, France1984-00-00171175nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published4Mapping nervous activity in visual mutants of Drosophila melanogaster with the deoxyglucose method363746IBülthoffFNNewell2005-10-00151246IBülthoffHHBülthoffPSinha1997-02-00150446SEdelmanHHBülthoffIBülthoff1996-09-00SrismithZB20167DSrismithMZhaoIBülthoffBarcelona, Spain2016-09-01313People are good at recognising faces, particularly familiar faces. However, little is known about how precisely familiar faces are represented and how increasing familiarity improves the precision of face representation. Here we investigated the precision of face representation for two types of familiar faces: personally familiar faces (i.e. faces of colleagues) and visually familiar faces (i.e. faces learned from viewing photographs). For each familiar face, participants were asked to select the
original face among an array of faces, which varied from highly caricatured (+50%) to highly anti-caricatured (−50%) along the facial shape dimension. We found that for personally familiar faces, participants selected the original faces more often than any other faces. In contrast, for visually familiar faces, the highly anti-caricatured (−50%) faces were selected more often than others, including the original faces. Participants also favoured anti-caricatured faces more than caricatured
faces for both types of familiar faces. These results indicate that people form very precise representations of personally familiar faces, but not of visually familiar faces. Moreover, the more familiar a face is, the more its corresponding representation shifts from a region close to
the average face (i.e. anti-caricatured) to its veridical location in the face space.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-313Precise Representation of Personally, but not Visually, Familiar Faces1501715422ZhaoB2016_27MZhaoIBülthoffBarcelona, Spain2016-08-2927Unlike most everyday objects, faces are processed holistically—they tend to be perceived as indecomposable wholes instead of a collection of independent facial parts. While holistic face processing has been demonstrated with a variety of behavioral tasks, it is predominantly observed
with static faces. Here we investigated three questions about holistic processing of moving faces:
(1) are rigidly moving faces processed holistically? (2) does rigid motion reduce the magnitude of holistic processing? and (3) does holistic processing persist when study and test faces differ in terms of facial motion? Participants completed two composite face tasks (using a complete design), one with static faces and the other with rigidly moving faces. We found that rigidly moving faces
are processed holistically. Moreover, the magnitude of the holistic processing effect observed for moving faces is similar to that observed for static faces. Finally, holistic processing still holds even when the study face is static and the test face is moving, or vice versa. These results provide convincing evidence that holistic processing is a general face processing mechanism that applies to both static and moving faces. These findings indicate that rigid facial motion neither promotes part-based
face processing nor eliminates holistic face processing.
Funding: The study was supported by the Max Planck Society.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-27Holistic Processing of Static and Rigidly Moving Faces1501715422DobsBR20167KDobsIBülthoffLReddySt. Pete Beach, FL, USA2016-05-16925Integration of multiple sensory cues pertaining to the same object is essential for precise and accurate perception. The optimal strategy to estimate an object’s property is to weight sensory cues proportional to their relative reliability (i.e., the inverse of the variance). Recent studies showed that human observers apply this strategy when integrating low-level unisensory and multisensory signals, but evidence for high-level perception remains scarce. Here we asked if human observers optimally integrate high-level visual cues in a socially critical task, namely the recognition of a face. We therefore had subjects identify one of two previously learned synthetic facial identities (“Laura” and “Susan”) using facial form and motion.
Five subjects performed a 2AFC identification task (i.e., “Laura or Susan?”) based on dynamic face stimuli that systematically varied in the amount of form and motion information they contained about each identity (10% morph steps from Laura to Susan). In single-cue conditions one cue (e.g., form) was varied while the other (e.g., motion) was kept uninformative (50% morph). In the combined-cue condition both cues varied by the same amount. To assess whether subjects weight facial form and motion proportional to their reliability, we also introduced cue-conflict conditions in which both cues were varied but separated by a small conflict (±10%).
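The reliability-weighting rule tested in this study can be written out concretely: each cue's weight is its reliability (inverse variance) divided by the summed reliabilities, and the predicted optimal combined variance is the inverse of the summed reliabilities, so it is always below both single-cue variances. The sketch below is purely illustrative and is not the authors' analysis code; the `fuse` helper and the numerical values are hypothetical.

```python
def fuse(est_form, sigma_form, est_motion, sigma_motion):
    """Minimum-variance (optimal) fusion of two cue estimates.

    Reliability is the inverse variance (1 / sigma**2); the optimal
    combined estimate weights each cue in proportion to its reliability.
    Returns the fused estimate and its predicted standard deviation.
    """
    r_form = 1.0 / sigma_form ** 2      # reliability of the form cue
    r_motion = 1.0 / sigma_motion ** 2  # reliability of the motion cue
    w_form = r_form / (r_form + r_motion)
    fused = w_form * est_form + (1.0 - w_form) * est_motion
    sigma_fused = (1.0 / (r_form + r_motion)) ** 0.5
    return fused, sigma_fused

# Hypothetical example: the form cue alone suggests 60% "Susan" with
# sd 10, the motion cue alone suggests 40% with sd 20. The fused sd
# falls below both single-cue sds, which is the signature of optimal
# integration that the study tests for.
est, sd = fuse(60.0, 10.0, 40.0, 20.0)
assert sd < 10.0 and sd < 20.0
```

Note that the more reliable cue (here, form) dominates the weighted average, which is how cue-conflict conditions can reveal the weights subjects actually use.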
We fitted psychometric functions to the proportion of “Susan” choices pooled across subjects (fixed-effects analysis) for each condition. As predicted by optimal cue integration, the empirical combined variance was lower than the single-cue variances (p < 0.001, bootstrap test), and did not differ from the optimal combined variance (p > 0.5). Moreover, no difference was found between empirical and optimal form and motion weights (p > 0.5). Our data thus suggest that humans integrate high-level visual cues, such as facial form and motion, proportional to their reliability to yield a coherent percept of a facial identity.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-925Optimal integration of facial form and motion during face recognition1501715422ZhaoB20167MZhaoIBülthoffSt. Pete Beach, FL, USA2016-05-15731Holistic processing—the tendency to perceive objects as indecomposable wholes—has long been viewed as a process specific for faces or objects-of-expertise. While some researchers argue that holistic processing is unique for processing of faces (domain-specific hypothesis), others propose that it results from an automatized attention strategy developed with expertise (i.e., expertise hypothesis). While these theories differ in what causes holistic processing, they share a fundamental constraint for its generalization: Non-face objects cannot elicit face-like holistic processing in the absence of expertise. Contrary to this prevailing view, here we show that line-patterns with salient Gestalt information (i.e., connectedness, closure, and continuity between parts) can be processed as holistically as faces without any training. This face-like holistic processing of non-face objects also occurred when we tested faces and line patterns in different sessions on different days, suggesting that it was not due to the context effect incurred by testing both types of stimuli within a single session.
Moreover, weakening the saliency of Gestalt information in line patterns reduced holistic processing for these stimuli, indicating the crucial role of Gestalt information in eliciting holistic processing. Taken together, these results indicate that, besides a top-down route based on expertise, holistic processing can be achieved via a bottom-up route relying merely on object-based information. Therefore, face-like holistic processing can extend beyond the domains of faces and objects-of-expertise, contrary to current dominant theories.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-731Holistic Processing of Unfamiliar Line Patterns1501715422FademrechtNBBd20167LFademrechtJNieuwenhuisIBülthoffNBarracloughSde la RosaSt. Pete Beach, FL, USA2016-05-14280In real life, humans need to recognize actions even when the actor is surrounded by a crowd of people, but little is known about action recognition in cluttered environments. In the current study, we investigated whether a crowd influences action recognition with an adaptation paradigm. Using life-sized moving stimuli presented on a panoramic display, 16 participants adapted to either a hug or a clap action and subsequently viewed an ambiguous test stimulus (a morph between both adaptors). The task was to categorize the test stimulus as either ‘clap’ or ‘hug’. The change in perception of the ambiguous action due to adaptation is referred to as an ‘adaptation aftereffect’. We tested the influence of a cluttered background (a crowd of people) on the adaptation aftereffect under three experimental conditions: ‘no crowd’, ‘static crowd’ and ‘moving crowd’. Additionally, we tested the adaptation effect at 0° and 40° eccentricity. Participants showed a significant adaptation aftereffect at both eccentricities (p < .001). The results reveal that the presence of a crowd (static or moving) has no influence on the action adaptation effect (p = .07), neither in central vision nor in peripheral vision.
Our results suggest that action recognition mechanisms and action adaptation aftereffects are robust even in complex and distracting environments.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-280Does action recognition suffer in a crowded environment?1501715422FademrechtBBd2015_27LFademrechtIBülthoffNEBarracloughSde la RosaChicago, IL, USA2015-10-18Actions often occur in the visual periphery. Here we measured the spatial extent of action sensitive perceptual channels across the visual field using a behavioral action adaptation paradigm. Participants viewed an action (punch or handshake) for a prolonged amount of time (adaptor) and subsequently categorized an ambiguous test action as either 'punch' or 'handshake'. The adaptation effect refers to the biased perception of the test stimulus due to the prolonged viewing of the adaptor and the resulting loss of sensitivity to that stimulus. Therefore, the more a channel responds to a specific stimulus, the larger the adaptation effect for that channel. We measured the size of the adaptation effect as a function of the spatial distance between adaptor and test stimuli in order to determine if actions can be processed in spatially distinct channels. Specifically, we adapted participants at 0° (fixation), 20° and 40° eccentricity in three separate conditions to measure the putative spatial extent of action channels at these positions. In each condition, we measured the size of the adaptation effect at −60°, −40°, −20°, 0°, 20°, 40°, 60° of eccentricity. We fitted Gaussian functions to describe the channel response of each condition and used the full width at half maximum (FWHM) of the Gaussians as a measure of the spatial extent of the action channels. In contrast to previous reports of an increase of midget ganglion cell dendritic field size with eccentricity (Dacey, 1993), our results showed that FWHM decreased with eccentricity (FWHM at 0°: 56°, FWHM at 20°: 29°, FWHM at 40°: 26°).
We then asked whether the response of these action sensitive perceptual channels can be used to predict average recognition performance (d') of social actions across the visual field obtained in a previous study (Fademrecht et al. 2014). We used G(x), the summed response of all three channels at eccentricity x, to predict recognition performance at eccentricity x. A simple linear transformation of the summed channel response of the form a+b*G(x) was able to predict 95.5% of the variation in the recognition performance. Taken together, these results demonstrate that actions can be processed in separate spatially distinct perceptual channels, their FWHM decreases with eccentricity and can be used to predict action recognition performance in the visual periphery.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0The spatial extent of action sensitive perceptual channels decrease with visual eccentricity1501715422DobsSBG20157KDobsJSchultzIBülthoffJLGardnerSt. Pete Beach, FL, USA2015-09-00684Humans can easily extract who someone is and what expression they are making from the complex interplay of invariant and changeable visual features of faces. Recent evidence suggests that cortical mechanisms to selectively extract information about these two socially critical cues are segregated. Here we asked if these systems are independently controlled by task demands. We therefore had subjects attend to either identity or expression of the same dynamic face stimuli and examined cortical representations in topographically and functionally localized visual areas using fMRI. Six human subjects performed a task that involved detecting changes in the attended cue (expression or identity) of dynamic face stimuli (8 presentations per trial of 2s movie clips depicting 1 of 2 facial identities expressing happiness or anger) in 18-20 7min scans (20 trials/scan in pseudorandom order) in 2 sessions.
Dorsal areas such as hMT and STS were dissociated from more ventral areas such as FFA and OFA by their modulation with task demands and their encoding of exemplars of expression and identity. In particular, dorsal areas showed higher activity during the expression task (hMT: p < 0.05, lSTS: p < 0.01; t-test), where subjects were cued to attend to the changeable aspects of the faces, whereas ventral areas showed higher activity during the identity task (lOFA: p < 0.05; lFFA: p < 0.05). Specific exemplars of identity could be reliably decoded (using linear classifiers) from responses of ventral areas (lFFA: p < 0.05; rFFA: p < 0.01; permutation test). In contradistinction, dorsal area responses could be used to decode specific exemplars of expression (hMT: p < 0.01; rSTS: p < 0.01), but only if expression was attended by subjects. Our data support the notion that identity and expression are processed by segregated cortical areas and that the strength of the representations for particular exemplars is under independent task control.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-684Independent control of cortical representations for expression and identity of dynamic faces1501715422ZhaoB20157MZhaoIBülthoffSt. Pete Beach, FL, USA2015-09-00698Does a face itself determine how well it will be recognized? Unlike many previous studies that have linked face recognition performance to individuals’ face processing ability (e.g., holistic processing), the present study investigated whether recognition of natural faces can be predicted by the faces themselves. Specifically, we examined whether short- and long-term recognition memory of both dynamic and static faces can be predicted from face-based properties. Participants memorized either dynamic (Experiment 1) or static (Experiment 2) natural faces, and recognized them after both short- and long-term retention intervals (three minutes vs. seven days).
We found that the intrinsic memorability of individual faces (i.e., the rate of correct recognition across a group of participants) consistently predicted an independent group of participants’ performance in recognizing the same faces, for both static and dynamic faces and for both short- and long-term face recognition memory. This result indicates that intrinsic memorability of faces is bound to face identity rather than image properties. Moreover, we also asked participants to judge the subjective memorability of faces they had just learned, and to judge whether they would be able to recognize the faces in a later test. The result shows that participants can extract intrinsic face memorability at encoding. Together, these results provide compelling evidence for the hypothesis that intrinsic face memorability predicts natural face recognition, highlighting that face recognition performance is not only a function of individuals’ face processing ability, but is also determined by intrinsic properties of faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-698Intrinsic Memorability Predicts Short- and Long-Term Memory of Static and Dynamic Faces1501715422FademrechtBd20157LFademrechtIBülthoffSde la RosaSt. Pete Beach, FL, USA2015-09-00494Although actions often appear in the visual periphery, little is known about action recognition outside of the fovea. Our previous results have shown that action recognition of moving life-size human stick figures is surprisingly accurate even in the far periphery and declines non-linearly with eccentricity. Here, our aim was (1) to investigate the influence of motion information on action recognition in the periphery by comparing recognition of static and dynamic stimuli and (2) to assess whether the observed non-linearity in our previous study was caused by the presence of motion, because a linear decline of recognition performance with increasing eccentricity was reported with static presentations of objects and animals (Jebara et al.
2009; Thorpe et al. 2001). In our study, 16 participants saw life-size stick figure avatars that carried out six different social actions (three different greetings and three different aggressive actions). The avatars were shown dynamically and statically on a large screen at different positions in the visual field. In a 2AFC paradigm, participants performed 3 tasks with all actions: (a) They assessed their emotional valence; (b) they categorized each of them as greeting or attack and (c) they identified each of the six actions. (1) We found better recognition performance for dynamic stimuli at all eccentricities. Thus, motion information helps recognition in the fovea as well as in the far periphery. (2) We observed a non-linear decrease of recognition performance for both static and dynamic stimuli. Power law functions with exponents of 3.4 and 2.9 described the non-linearity observed for dynamic and static actions, respectively. These non-linear functions describe the data significantly better (p=.002) than linear functions and suggest that human actions are processed differently from objects or animals.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-494Recognition of static and dynamic social actions in the visual periphery1501715422BulthoffZ20157IBülthoffMZhaoSt. Pete Beach, FL, USA2015-09-00145Holistic face processing is often referred to as the inability to selectively attend to parts of faces without interference from irrelevant facial parts. While extensive research seeks the origin of holistic face processing in perceiver-based properties (e.g., expertise), the present study aimed to pinpoint face-based visual information that may support this hallmark indicator of face processing.
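The power-law vs. linear model comparison reported in the eccentricity abstract above can be sketched as a residual-sum-of-squares contest between the two functional forms. The form a - b*e**n and every number below are illustrative assumptions for demonstration, not the study's data or fitted parameters.

```python
# Compare a power-law and a linear description of an accuracy decline
# with eccentricity e via residual sum of squares (RSS).
def power_law(e, a, b, n):
    return a - b * e ** n

def linear(e, a, b):
    return a - b * e

def rss(model, params, data):
    """Residual sum of squares of `model` over (eccentricity, accuracy) pairs."""
    return sum((acc - model(ecc, *params)) ** 2 for ecc, acc in data)

# Hypothetical accuracies generated from a steep non-linear decline.
data = [(e, power_law(e, 0.95, 1e-6, 3.0)) for e in (0, 15, 30, 45, 60, 75)]

rss_power = rss(power_law, (0.95, 1e-6, 3.0), data)   # zero by construction
rss_linear = rss(linear, (0.95, 0.004), data)          # systematic misfit
```

When the underlying decline is non-linear, the best linear description leaves systematic residuals at small and large eccentricities, which is what the significance test in the abstract formalizes.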
Specifically, we used the composite face task, a standard task of holistic processing, to investigate whether facial surface information (e.g., texture) or facial shape information underlies holistic face processing, since both sources of information have been shown to support face recognition. In Experiment 1, participants performed two composite face tasks, one for normal faces (i.e., shape + surface information) and one for shape-only faces (i.e., without facial surface information). We found that facial shape information alone elicits holistic processing as strongly as normal faces do, indicating that facial surface information is not necessary for holistic processing. In Experiment 2, we tested whether facial surface information alone is sufficient to observe holistic face processing. We chose to control facial shape information instead of removing it by having all test faces share exactly the same facial shape, while exhibiting different facial surface information. Participants performed two composite face tasks, one for normal faces and one for same-shape faces. We found a composite face effect in normal faces but not in same-shape faces, indicating that holistic processing is mediated predominantly by facial shape rather than surface information. Together, these results indicate that facial shape, but not surface information, underlies holistic face processing.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-145What Type of Facial Information Underlies Holistic Face Processing?1501715422BulthoffMT20157IBülthoffBMohlerIMThorntonLiverpool, UK2015-08-0051In most face recognition studies, learned faces are shown without a visible body to passive participants. Here, faces were attached to a body and participants viewed them either actively or passively before their recognition performance was tested. 3D-laser scans of real faces were integrated onto sitting or standing full-bodied avatars placed in a virtual room.
In the ‘active’ learning condition, participants viewed the virtual environment through a head-mounted display. Their head position was tracked to allow them to walk physically from one avatar to the next and to move their heads to look up or down at the standing or sitting avatars. In the ‘passive dynamic’ condition, participants saw a rendering of the visual explorations of the first group. In the ‘passive static’ condition, participants saw static screenshots of the upper bodies in the room. Face orientation congruency (up versus down) was manipulated at test. Faces were recognized more accurately when viewed in a familiar orientation for all learning conditions. While active viewing in general improved performance as compared to viewing static faces, passive observers and active observers - who received the same visual information - performed similarly, despite the absence of volitional movements for the passive dynamic observers.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-51Active and passive exploration of faces150171542215017FademrechtBBd20157LFademrechtNEBarracloughIBülthoffSde la RosaLiverpool, UK2015-08-00214Although actions often appear in the visual periphery, little is known about action recognition away from fixation. We showed in previous studies that action recognition of moving stick-figures is surprisingly good in peripheral vision even at 75° eccentricity. Furthermore, there was no decline of performance up to 45° eccentricity. This finding could be explained by action sensitive units in the fovea also sampling action information from the periphery. To investigate this possibility, we assessed the horizontal extent of the spatial sampling area (SSA) of action sensitive units in the fovea by using an action adaptation paradigm. Fifteen participants adapted to an action (handshake, punch) at the fovea and were then tested with an ambiguous action stimulus at 0°, 20°, 40° and 60° eccentricity left and right of fixation.
We used a large screen display to cover the whole horizontal visual field of view. An adaptation effect was present in the periphery up to 20° eccentricity (p<0.001), suggesting a large SSA of action sensitive units representing foveal space. Hence, action recognition in the visual periphery might benefit from a large SSA of foveal units.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-214Seeing actions in the fovea influences subsequent action recognition in the periphery1501715422ZhaoB2015_27MZhaoIBülthoffAmsterdam, The Netherlands2015-03-1232We demonstrate that both encoding and memory processes affect recognition of own- and other-race faces differently. Static own-race faces are better recognized than static other-race faces, but this other-race effect is not found for rigidly moving faces. Further, this effect is larger in short-term memory than in long-term memory.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-32Memory of Own- and Other-Race Faces: Influences of Encoding and Retention Processes1501715422FademrechtBd2014_27LFademrechtIBülthoffSde la RosaBeograd, Serbia2014-08-00103Recognizing actions of others in the periphery is required for fast and appropriate reactions to events in our environment (e.g. seeing kids running towards the street when driving). Previous results show that action recognition is surprisingly accurate even in far periphery (<=60° visual angle (VA)) when actions were directed towards the observer (front view). The front view of a person is considered to be critical for social cognitive processes (Schilbach et al., 2013). To what degree does the orientation of the observed action (front vs. profile view) influence the identification of the action and the recognition of the action's valence across the horizontal visual field?
Participants saw life-size stick figure avatars that carried out one of six motion-captured actions (greeting actions: handshake, hugging, waving; aggressive actions: slapping, punching and kicking). The avatar was shown on a large screen display at different positions up to 75° VA. Participants either assessed the emotional valence of the action or identified the action as ‘greeting’ or ‘attack’. Orientation had no significant effect on accuracy. Reaction times were significantly faster for profile than for front views (p=0.003) for both tasks, which is surprising in light of recent suggestionsnonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-103A matter of perspective: action recognition depends on stimulus orientation in the periphery1501715422ZhaoB2014_27MZhaoIBülthoffBeograd, Serbia2014-08-0076Many studies have demonstrated better recognition of own- than other-race faces. However, little is known about whether memories of unfamiliar own- and other-race faces decay similarly with time. We addressed this question by probing participants’ memory of own- and other-race faces both immediately after learning (immediate test) and one week later (delayed test). In both learning and test phases, participants saw a short movie in which a person was talking in front of the camera (the sound was turned off). Two main results emerged. First, we observed a cross-race deficit in recognizing other-race faces in both immediate and delayed tests, but the cross-race deficit was reduced in the latter. Second, recognizing faces immediately after learning was not better than recognizing them one week later. Instead, overall performance was even better at the delayed test than at the immediate test. This result was mainly due to improved recognition for other-race female faces, which showed comparatively low performance when tested immediately. These results demonstrate that memories of both own- and other-race faces persist for a relatively long time.
Although other-race faces are less well recognized than own-race faces, they seem to be maintained in long-term memory as well as, and even better than, own-race faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-76Long-term memory for own- and other-race faces1501715422FademrechtBd2014_37LFademrechtIBülthoffSde la RosaTübingen, Germany2014-06-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Peripheral Vision and Action Recognition1501715422ZhaoHB2014_37MZhaoWGHaywardIBülthoffTübingen, Germany2014-06-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Race of Face Affects Various Face Processing Tasks Differently1501715422EsinsBS20147JEsinsIBülthoffJSchultzSt. Pete Beach, FL, USA2014-05-211436Humans rely strongly on the shape of other people’s faces to recognize them. However, faces also change appearance between encounters, for example when people put on glasses or change their hairdo. This can affect face recognition in certain situations, e.g. when recognizing faces that we do not know very well or for congenital prosopagnosics. However, additional cues can be used to recognize faces: faces move as we speak, smile, or shift gaze, and this dynamic information can help to recognize other faces (Hill & Johnston, 2001). Here we tested if and to what extent such dynamic information can help congenital prosopagnosics to improve their face recognition. We tested 15 congenital prosopagnosics and 15 age- and gender-matched controls with a test created by Raboy et al. (2010). Participants learned 18 target identities and then performed an old/new judgment on the learned faces and 18 distractor faces. During the test phase, half the target faces exhibited everyday changes (e.g. modified hairdo, glasses added, etc.) while the other targets did not change. Crucially, half the faces were presented as short film sequences (dynamic stimuli) while the other half were presented as five random frames (static stimuli) during learning and test.
Controls and prosopagnosics recognized identical targets better than changed ones. While controls recognized faces better in the dynamic than in the static condition, prosopagnosics’ performance was not better for dynamic compared to static stimuli. This difference between groups was significant. The absence of a dynamic advantage in prosopagnosics suggests that dysfunctions in congenital prosopagnosia might not only be restricted to ventral face-processing regions, but might also involve lateral temporal regions where facial motion is known to be processed (e.g. Haxby et al., 2000).nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1436Facial motion does not help face recognition in congenital prosopagnosics1501715422ZhaoB20147MZhaoIBülthoffSt. Pete Beach, FL, USA2014-05-201262Previous studies have shown that face race influences various aspects of face processing, including face identification (Meissner & Brigham, 2001), holistic processing (Michel et al., 2006), and processing of featural and configural information (Hayward et al., 2008). However, whether these various aspects of other-race effects (ORE) arise from the same underlying mechanism or from independent ones remains unclear. To address this question, we measured those manifestations of ORE with different tasks, and tested whether the magnitudes of those OREs are related to each other. Each participant performed three tasks. (1) The original and a Chinese version of Cambridge Face Memory Tests (CFMT, Duchaine & Nakayama, 2006; McKone et al., 2012), which were used to measure the ORE in face memory. (2) A part/whole sequential matching task (Tanaka et al., 2004), which was used to measure the ORE in face perception and in holistic processing. (3) A scrambled/blurred face recognition task (Hayward et al., 2008), which was used to measure the ORE in featural and configural processing.
We found a better recognition performance for own-race than other-race faces in all three tasks, confirming the existence of an ORE across various tasks. However, the size of the ORE measured in all three tasks differed; we found no correlation between the OREs in the three tasks. More importantly, the two measures of the ORE in configural and holistic processing tasks could not account for the individual differences in the ORE in face memory. These results indicate that although face race always influences face recognition as well as configural and featural processing, different underlying mechanisms are responsible for the occurrence of ORE for each aspect of face processing tested here.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1262Face Race Affects Various Types of Face Processing, but Affects Them Differently1501715422FademrechtBd20147LFademrechtIBülthoffSde la RosaSt. Pete Beach, FL, USA2014-05-201006The recognition of actions is critical for human social functioning and provides insight into both the active and the inner states (e.g. valence) of another person. Although actions often appear in the visual periphery, little is known about action recognition beyond foveal vision. Related previous research showed that object recognition and object valence (i.e. positive or negative valence) judgments are relatively unaffected by presentations up to 13° visual angle (VA) (Calvo et al. 2010). This is somewhat surprising given that recognition performance of words and letters sharply declines in the visual periphery. Here participants recognized an action and evaluated its valence as a function of eccentricity. We used a large screen display that allowed presentation of stimuli over a visual field from -60 to +60° VA. A life-size stick figure avatar carried out one of six motion captured actions (3 positive actions: handshake, hugging, waving; 3 negative actions: slapping, punching and kicking).
15 participants assessed the valence of the action (positive or negative action) and another 15 participants identified the action (as fast and as accurately as possible). We found that reaction times increased with eccentricity to a similar degree for the valence and the recognition task. In contrast, accuracy performance declined significantly with eccentricity for both tasks but declined more sharply for the action recognition task. These declines were observed for eccentricities larger than 15° VA. Thus, we replicate the findings of Calvo et al. (2010) that recognition is little affected by extra-foveal presentations smaller than 15° VA. Yet, we additionally demonstrate that visual recognition performance of actions declined significantly at larger eccentricities. We conclude that large eccentricities are required to assess the effect of peripheral presentation on visual recognition.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1006Influence of eccentricity on action recognition1501715422DobsSBG20137KDobsJSchultzIBülthoffJLGardnerSan Diego, CA, USA2013-11-10Identity and facial expression of faces we interact with are represented as invariant and changeable aspects, respectively - what are the cortical mechanisms that allow us to selectively extract information about these two important cues? We had subjects attend to either identity or expression of the same dynamic face stimuli and decoded concurrently measured fMRI activity to ask whether distinct cortical areas were differentially engaged in these tasks.
We measured fMRI activity (3x3x3mm, 34 slices, TR=1.5, 4T) from 6 human subjects (2 female) as they performed a change-detection task on dynamic face stimuli. At trial onset, a cue (letters ‘E’ or ‘I’) was presented (0.5s) which instructed subjects to attend to either the expression or the identity of animations of faces (8 presentations per trial of 2s movie clips depicting 1 of 2 facial identities expressing happiness or anger). Subjects were to report (by button press) changes in the cued dimension (these occurred in 20% of trials) and ignore changes in the uncued dimension. Subjects successfully attended to the cued dimension (mean d’=2.4 for cued and d’=-1.9 for uncued dimension), and sensitivity did not differ across tasks (F(1,10)=0.19, p>0.6). Subjects performed 18-20 7min scans (20 trials/scan in pseudorandom order) in 2 sessions.
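The sensitivity measure d' used above is conventionally computed as z(hit rate) minus z(false-alarm rate). A minimal sketch, with made-up rates rather than the study's data:

```python
from statistics import NormalDist

# d' = z(hit rate) - z(false-alarm rate), via the inverse normal CDF.
def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: many hits on cued changes, few false alarms.
sensitivity = d_prime(0.85, 0.05)
```

A negative d' for the uncued dimension, as reported above, arises when false alarms outnumber hits for that dimension, i.e. subjects respond to uncued changes less often than chance given their overall response rate.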
We built linear classifiers to decode the attended dimension. Face-sensitive areas were defined in separate localizer scans as clusters of voxels responding more to faces than to houses. To independently determine the voxels to be included in the analyses, we ran a task localizer in which 10s grey screen was alternated with 10s of stimuli+task. For each area, we selected the 100 voxels whose signal correlated best with task/no task alternations. BOLD signal in these voxels was averaged over 3-21s of each trial of the main experiment, concatenated across subjects and sessions and used to build the classifiers.
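The decoding step just described can be illustrated with a simple stand-in: a nearest-class-mean rule (a linear classifier for two classes) with leave-one-out cross-validation and a label-permutation test. The "patterns" below are synthetic toy data, not voxel responses, and the exact classifier used in the study is not specified here.

```python
import random

def class_means(X, y):
    """Mean pattern per label; X is a list of feature vectors."""
    means = {}
    for label in set(y):
        rows = [x for x, l in zip(X, y) if l == label]
        means[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return means

def predict(x, means):
    """Assign x to the label with the nearest class mean."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda label: dist(x, means[label]))

def loo_accuracy(X, y):
    """Leave-one-out cross-validated classification accuracy."""
    hits = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += predict(X[i], class_means(train_X, train_y)) == y[i]
    return hits / len(X)

def permutation_p(X, y, n_perm=200, seed=0):
    """P-value: fraction of label shufflings scoring at least as well."""
    rng = random.Random(seed)
    observed = loo_accuracy(X, y)
    count = 0
    for _ in range(n_perm):
        shuffled = list(y)
        rng.shuffle(shuffled)
        count += loo_accuracy(X, shuffled) >= observed
    return (count + 1) / (n_perm + 1)

# Two well-separated synthetic "conditions", ten patterns each.
X = [[0.1 * i, 0.1 * (i % 3)] for i in range(10)] + \
    [[10 + 0.1 * i, 10 + 0.1 * (i % 3)] for i in range(10)]
y = ["A"] * 10 + ["B"] * 10

accuracy = loo_accuracy(X, y)
p_value = permutation_p(X, y)
```

Shuffling the labels destroys the pattern-label association, so cross-validated accuracy on shuffled data estimates the null distribution against which the observed accuracy is tested.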
We found that we could decode the attended dimension on cross-validated data from many visual cortical areas (percentage correct classifications: FFA: 68%, MT: 73%, OFA: 79%, STS: 68%, V1: 77%; p<0.05, permutation test). However, while ventral face-sensitive areas (OFA, FFA) showed larger BOLD signal during attention-to-identity than attention-to-expression trials (p<0.001, t-test), motion processing areas (MT, STS) showed the opposite effect (p<0.001, t-test). Our results suggest that attending to expression or identity engages areas involved in stimulus-specific processing of these two dimensions. Moreover, attending to expression encoded in facial motion recruits motion processing areas, while attending to face identity activates ventral face-sensitive areas.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Attending to expression or identity of dynamic faces engages different cortical areas1501715422ZhaoB20137MZhaoIBülthoffBremen, Germany2013-08-00204People recognize own-race faces more accurately than those of other races. This other-race effect (ORE) has been frequently observed when faces are learned from static, single view images. However, single-view face learning may prevent the acquisition of useful information (e.g., 3D face shape) for recognizing unfamiliar, other-race faces. Here we tested whether learning faces from multiple viewpoints reduces the ORE. In Experiment 1 participants learned faces from a single viewpoint (left or right 15° view) and were tested with the front view (0° view) using an old/new recognition task. They showed better recognition performance for own-race than for other-race faces, demonstrating the ORE in face recognition across viewpoints. In Experiment 2 participants learned each face from four viewpoints (in order, left 45°, left 15°, right 15°, and right 45° views) and were tested in the same way as in Experiment 1. Participants recognized own- and other-race faces equally well, eliminating the ORE.
These results suggest that learning faces from multiple viewpoints improves the recognition of other-race faces more than that of own-race faces, and that the previously observed ORE is caused in part by the non-optimal encoding condition for other-race faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-204Learning Faces from Multiple Viewpoints Eliminates the Other-Race Effect1501715422BrielmannBA20137ABrielmannIBülthoffRArmannBremen, Germany2013-08-00204The other-race effect is the widely known difficulty in recognizing faces of another race. Further, it has been clearly established in eye tracking studies that observers of different cultural backgrounds exhibit different viewing strategies. Whether those viewing strategies also depend on the type of faces shown (same-race vs. other-race faces) is under much debate. Using eye tracking, we investigated whether European observers look at different facial features when viewing Asian and Caucasian faces in a face race categorization task. Additionally, to investigate the influence of viewpoints on gaze patterns, we presented faces in frontal, half profile and profile views. Even though fixation patterns generally changed across views, fixations to the eyes were more frequent for Caucasian faces and fixations to the nose were more frequent for Asian faces, independent of face orientation. In contrast, how fixations to cheeks, mouth and outline regions changed according to the face’s race was also dependent on face orientations. In sum, our results indicate that we mainly look at prominent facial features, although which features are fixated most often depends critically on face race and orientation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-197Quantifying Human Sensitivity to Spatio-Temporal Information in Dynamic Faces1501715422DobsBBVCS20137KDobsIBülthoffMBreidtQCVuongCCurioJWSchultzBremen,
Germany2013-08-00197A great deal of social information is conveyed by facial motion. However, understanding how observers use the natural timing and intensity information conveyed by facial motion is difficult because of the complexity of these motion cues. Here, we systematically manipulated animations of facial expressions to investigate observers’ sensitivity to changes in facial motion. We filmed and motion-captured four facial expressions and decomposed each expression into time courses of semantically meaningful local facial actions (e.g., eyebrow raise). These time courses were used to animate a 3D head model with either the original time courses or approximations of them. We then tested observers’ perceptual sensitivity to these changes using matching-to-sample tasks. When viewing two animations (original vs. approximation), observers chose original animations as most similar to the video of the expression. In a second experiment, we used several measures of stimulus similarity to explain observers’ choice of which approximation was most similar to the original animation when viewing two different approximations. We found that high-level cues about spatio-temporal characteristics of facial motion (e.g., onset and peak of eyebrow raise) best explained observers’ choices. Our results demonstrate the usefulness of our method and, importantly, reveal observers’ sensitivity to natural facial dynamics.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-197Quantifying Human Sensitivity to Spatio-Temporal Information in Dynamic Faces1501715422JungBTLA20137W-MJungIBülthoffIThorntonS-WLeeRArmannNaples, FL, USA2013-05-13861One possibility to overcome the processing limitation of the visual system is to attend selectively to relevant information only. Another strategy is to process sets of objects as ensembles and represent their average characteristics instead of individual group members (e.g., mean size, brightness, orientation).
Recent evidence suggests that ensemble representation might occur even for human faces (for a summary, see Alvarez, 2011), i.e., observers can extract the mean emotion, sex, and identity from a set of faces (Haberman & Whitney, 2007; de Fockert & Wolfenstein, 2009). Here, we extend this line of research into the realm of face race: Can we extract the "mean race" of a set of faces when no conscious perception of single individuals is possible? Moreover, does the visual system process own- and other-race faces differently at this stage? Face stimuli had the same (average) male identity but were morphed, at different levels, between Asian and Caucasian appearance. Following earlier studies (e.g., Haberman & Whitney, 2007, 2010), observers were briefly (250ms) presented with random sets of 12 of these faces. They were then asked to adjust a test face to the perceived mean race of the set by "morphing" it between Asian and Caucasian appearance. The results show that for most participants the response error distribution is significantly different from random, while their responses are centered around the real stimulus set mean - suggesting that they are able to extract "mean race". Also, we find a bias towards responding more "Asian" than the actual mean of a face set. All participants tested so far are South Korean (from Seoul), indicating that even at this early (unconscious) processing stage, the visual system distinguishes between own- and other-race faces, giving more weight to the former.
Follow-up experiments on Caucasian participants will be performed to validate this observation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-861The Role of Race in Summary Representations of Faces1501715422EsinsSKWB2012_27JEsinsJSchultzBRKimCWallravenIBülthoffSchramberg, Germany2012-11-0038Congenital prosopagnosia, an innate impairment in recognizing faces, as well as the other-race effect, the disadvantage in recognizing faces of foreign races, both influence face recognition abilities. Here we compared both phenomena by testing three groups: German congenital prosopagnosics (cPs), unimpaired German and unimpaired South Korean participants (n=23 per group), on three tests with Caucasian faces. First, we ran the Cambridge Face Memory Test (Duchaine & Nakayama, 2006 Neuropsychologia 44 576-585). Participants had to recognize Caucasian target faces in a 3AFC task. German controls performed better than Koreans (p = 0.009), who performed better than prosopagnosics (p = 0.0001). Variation of the individual performances was larger for cPs than for Koreans (p = 0.028). In the second experiment, participants rated the similarity of Caucasian faces (in-house 3D face database) which differed parametrically in features or second-order relations (configuration). We found differences between sensitivities to change type (featural or configural, p = 0) and between groups (p = 0.005) and an interaction between both factors (p = 0.019). During the third experiment, participants had to learn exemplars of artificial objects (greebles), natural objects (shells), and faces and recognize them among distractors. The results showed an interaction (p = 0.005) between stimulus type and participant group: cPs were better for non-face stimuli and worse for face stimuli than the other groups. Our results suggest that congenital prosopagnosia and the other-race effect affect face perception in different ways. The broad range in performance for the cPs directs the focus of our future research towards looking for different forms of congenital prosopagnosia.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-38Comparing the other race effect and congenital prosopagnosia using a three-experiment test battery1501715422EsinsBKS20127JEsinsIBülthoffIKennerknechtJSchultzAlghero, Italy2012-09-00113Congenital prosopagnosia, the innate impairment in recognizing faces, exhibits diverse deficits. Due to this heterogeneity, the existence of subgroups of the impairment has been suggested (e.g. Kress and Daum, 2003 Behavioural Neurology 14 109-21). We examined 23 congenital prosopagnosics (cPAs) identified via a screening questionnaire (as used in Stollhoff, Jost, Elze, and Kennerknecht, 2011 PLoS ONE 6 e15702) and 23 age-, gender- and educationally matched controls with a battery consisting of nine different tests. These included well-known tests like the Cambridge Face Memory Test (CFMT, Duchaine and Nakayama, 2006 Neuropsychologia 44 576-85), a Famous Face Test (FFT), and new in-house tests of object and face recognition. As expected, cPAs had lower CFMT and FFT scores than the controls. Analyses of the performance patterns across the nine tests suggest the existence of subgroups within both cPAs and controls. These groups could not be revealed based only on the CFMT and FFT scores, indicating the necessity of tests addressing different, specific aspects of object and face perception for the identification of subgroups.
Current work focuses on characterizing the subgroups and identifying the most useful tests.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-113Can a test battery reveal subgroups in congenital prosopagnosia?1501715422DobsBCS20127KDobsIBülthoffCCurioJSchultzNaples, FL, USA2012-08-0035Previous research has shown that facial motion can convey information about identity in addition to facial form (e.g. Hill & Johnston, 2001). The present study aims to determine whether identity judgments vary depending on the kinds of facial movements and the task performed. To this end, we used a recent facial motion capture and animation system (Curio et al., 2006). We recorded different actors performing classic emotional facial movements (e.g. happy, sad) and non-emotional facial movements occurring in social interactions (e.g. greetings, farewell). Only non-rigid components of these facial movements were used to animate a single avatar head. In a between-subject design, four groups of participants performed identity judgments based on emotional or social facial movements in a same-different (SD) or a delayed matching-to-sample task (XAB). In the SD task, participants watched two distinct facial movements (e.g. happy and sad) and had to choose whether the same or different actors performed these facial movements. In the XAB task, participants saw one target facial movement X (e.g. happy) performed by one actor followed by two facial movements of another kind (e.g. sad) performed by two actors. Participants chose which of the latter facial movements was performed by the same actor as the one performing X. Prior to the experiment, participants were familiarized with the actors by watching them perform facial movements not subsequently tested. Participants were able to judge actor identities correctly in all conditions, except for the SD task performed on the emotional stimuli. Sensitivity to identity, as measured by d-prime, was higher in the XAB than in the SD task.
Furthermore, performance was higher for social than for emotional stimuli. Our findings reveal an effect of task on identity judgments based on facial motion, and suggest that such judgments are easier when facial movements are less stereotypical.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-35Investigating factors influencing the perception of identity from facial motion1501715422Bulthoff2012_57IBülthoffNaples, FL, USA2012-08-001282We can quickly and easily judge faces in terms of their ethnicity. What is the basis for our decision? Other studies have used either eye tracking (e.g., Armann & Bülthoff 2009) or the Bubbles method (e.g., Gosselin & Schyns 2001) in categorization tasks to investigate which facial features are used for sex or identity classification. The first method reveals which parts are preferentially looked at, while the latter shows which facial regions, when shown in isolation during the task, lead to correct classification. Here we measured the influence of facial features on ethnicity classification when they are embedded in the face of the other ethnicity. Asian and Caucasian faces from our 3D face database (http://faces.kyb.tuebingen.mpg.de) had been paired according to sex, age, and appearance. We used 18 pairs of those Asian-Caucasian faces to create a variety of mixed-race faces. Mixed-race faces were obtained by exchanging one of the following facial features between both faces of a pair: mouth, nose, facial contour, shape, texture (skin) and eyes. We showed original and modified faces one by one in a simple ethnicity classification task. All faces were turned 20 degrees to the side for a more informative view of nose shape, face shape, and facial contour, while the eyes, mouth, and general face texture remained fully visible. Because of skin color differences between exchanged parts and original faces, all 3D faces were rendered as grey-level images.
The results of 24 Caucasian participants show that the eyes and the texture of a face are major determinants for ethnicity classification, more than face shape and face contour, while the mouth and nose had only a weak influence. Response times showed that participants were faster at classifying less ambiguous faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1282What gives a face its ethnicity?1501715422EsinsSKWB20127JEsinsJSchultzBRKimCWallravenIBülthoffIncheon, South Korea2012-07-00688Congenital prosopagnosia, an innate impairment in recognizing faces, and the other-race effect, the disadvantage in recognizing faces of other races, both influence face recognition abilities.
Here we compared both phenomena by testing three groups, German congenital prosopagnosics (cPs), unimpaired German participants, and unimpaired South Korean participants (n = 23 per group), on three tests with Caucasian faces.
First, we ran the Cambridge Face Memory Test (Duchaine & Nakayama, 2006 Neuropsychologia 44 576-585), in which participants had to recognize Caucasian target faces in a 3AFC task. German controls performed better than Koreans (p = 0.009), who performed better than prosopagnosics (p = 0.0001). Individual performances varied more among cPs than among Koreans (p = 0.028).
In the second experiment, participants rated the similarity of Caucasian faces (from our in-house 3D face database), which differed parametrically in features or in second-order relations (configuration). We found differences in sensitivity between change types (featural vs. configural, p = 0), differences between groups (p = 0.005), and an interaction between both factors (p = 0.019).
In the third experiment, participants had to learn exemplars of artificial objects (greebles), natural objects (shells), and faces and recognize them among distractors. The results showed an interaction (p = 0.005) between stimulus type and participant group: cPs were better for non-face stimuli and worse for face stimuli than the other groups.
Our results suggest that congenital prosopagnosia and the other-race effect affect face perception in different ways. The broad range in performance for the cPs directs the focus of our future research towards looking for different forms of congenital prosopagnosia.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2012/APCV-2012-Poster-Esins.pdfpublished-688Comparing the other-race-effect and congenital Prosopagnosia using a three-experiment test battery1501715422JungAB20127WJungRArmannIBülthoffIncheon, South Korea2012-07-00697What gives a face its race? By biological criteria, human “races” do not exist (e.g., Cosmides et al., 2003). Nevertheless, everyday life and research from various fields show that we robustly and reliably perceive humans as belonging to different race groups. Here, we investigate the bases for our quick and easy judgments by measuring the influence of manipulated facial features on race classification. Asian and Caucasian faces from our 3-dimensional face database (http://faces.kyb.tuebingen.mpg.de) were paired according to sex, age, and overall appearance. With these Asian-Caucasian face pairs we created a variety of mixed-race faces by exchanging facial features between both faces of a pair: eyes, nose, mouth, “outer” features, shape or texture. Original and modified faces were shown in a simple race classification task. We tested 24 Westerners (Germany) and 24 Easterners (South Korea). In both groups, eyes and texture were major determinants for race classification, followed by face shape, and then by outer features, mouth, and nose, which had only a weak influence on perceived race. Eastern participants classified Caucasian original faces better than Asian original faces, while Western participants were similarly good at classifying both races.
Western participants, but not their Eastern counterparts, were less susceptible to eye, shape, and texture manipulations in other-race faces than in their own-race faces. A closer look at the data suggests that this effect originates mainly from differences in how Western participants process male and female faces. Our results provide more evidence of differences between observers from different cultural and ethnic backgrounds in face perception and processing.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2012/APCV-2012-Jung.pdfpublished-697What gives a face its race?1501715422GaissertWvB20117NGaissertSWaterkampLvan DamIBülthoffToulouse, France2011-09-00134When humans have to categorize objects, they often rely on shape as a determining feature. However, shape is not exclusive to the visual modality: the haptic system is also an expert in identifying shapes. This raises the question of whether humans store separate modality-dependent shape representations or form one multimodal representation. To better understand how humans categorize objects based on shape, we created a set of computer-generated amoeba-like objects varying in defined shape steps. These objects were then printed using a 3D printer to generate tangible stimuli. In a discrimination task and a categorization task, participants either visually or haptically explored the objects. We found that both modalities lead to highly similar categorization behavior, indicating that the processes underlying categorization are highly similar in both modalities. Next, participants were trained on specific shape categories using the visual modality alone or the haptic modality alone. As expected, visual training increased visual performance and haptic training increased haptic performance. Moreover, we found that visual training on shape categories greatly improved haptic performance and vice versa.
Our results point to a shared representation underlying both modalities, which accounts for the surprisingly strong transfer of training across the senses.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-134Cross-modal transfer in visual and haptic object categorization1501715422DobsKBSC20117KDobsMKleinerIBülthoffJSchultzCCurioToulouse, France2011-09-001153D facial animation systems allow the creation of well-controlled stimuli to study face processing. Despite this high level of control, such stimuli often lack naturalness due to artificial facial dynamics (e.g. linear morphing). The present study investigates the extent to which human visual perception can be fooled by artificial facial motion. We used a system that decomposes facial motion capture data into time courses of basic action shapes (Curio et al., 2006 APGV 1 77–84). Motion capture data from four short facial expressions were input to the system. The resulting time courses and five approximations were retargeted onto a 3D avatar head using basic action shapes created manually in Poser. Sensitivity to the subtle modifications was measured in a matching task using video sequences of the actor performing the corresponding expressions as target. Participants were able to identify the unmodified retargeted facial motion above chance level under all conditions. Furthermore, matching performance for the different approximations varied with expression. Our findings highlight the sensitivity of human perception to subtle facial dynamics. Moreover, the action shape-based system will allow us to further investigate the perception of idiosyncratic facial motion using well-controlled facial animation stimuli.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-115Investigating idiosyncratic facial dynamics with motion retargeting1501715422LeeBAWB20117RKLeeIBülthoffRAmmannCWallravenHHBülthoffNaples, FL, USA2011-09-00626The finding that people recognize faces of their own race better than faces of another race (the other-race effect or ORE) has been widely cited.
Nevertheless, recognizing the identity of a face is only one complex task among many; hence it might be premature to conclude that own-race faces are always easier to
process. We investigated whether same-race faces still have a processing advantage over other-race faces when only ethnicity-related information is available to differentiate between faces. We morphed the ethnicity of 20 Caucasian and 20 Asian faces toward their other-race counterparts while keeping their idiosyncratic, identity-related features. Morphing was done at three levels (20%, 50%, and 80% toward the other race). The task for two groups of participants (25 Tübingen and 26 Seoul participants) was to report which face looked more Caucasian (or Asian) after looking at the original face and a morphed face sharing the same idiosyncratic features. Both faces were presented side by side on a computer monitor in one task and sequentially
in another task. Importantly, we found no evidence for an ORE in participants’ performance and no performance difference between Tübingen and Seoul participants. Both groups were equally good and equally fast at
comparing the ethnicity of two faces regardless of the task, the ethnicity of the faces, and the question asked. However, we did find evidence that Seoul and Tübingen participants used different viewing strategies. By investigating their eye movements in the sequential task, we found that participants' ethnicity affected fixation durations on specific areas of the face, especially
the nose. Also, the type of question asked and stimulus race altered the pattern of eye movements. These results suggest that although Caucasians and Asians were equally good at dealing with ethnicity information of both races, they might employ different viewing strategies.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-626The other-race effect is not ubiquitous1501715422EsinsBS20117JEsinsIBülthoffJSchultzNaples, FL, USA2011-09-00673An important aspect of face recognition involves the role of featural and configurational information in face perception (e.g. Tanaka and Farah, 1993; Yovel and Duchaine, 2006; Rotshtein et al., 2007). In our study, we investigated the influence of featural and configural information on perceived similarity between faces. Eight pairs of male faces were chosen from our digital face database (http://faces.kyb.tuebingen.mpg.de). The texture and the face shape for both faces in a pair were equalized to create two basis faces that differed only in their inner facial features and their configuration, but not in face shape or texture. A computer algorithm allowed us to parametrically morph the features, the configuration, or both between the two basis faces of a pair. In our case the morphing was done in 25% steps. Twenty-four participants rated the similarity between pairs of the created faces using a 7-point Likert scale. The faces to be compared came from the same basis face pair and could differ either in features or in configuration by 0%, 25%, 50%, 75% or 100%. The results revealed that for the same amount of morphing, faces differing in their features are perceived as less similar than faces differing in their configurations. These findings replicate previous results obtained with less natural or less controlled conditions. Furthermore, we found that linear increases in the difference between both faces in configural or featural information resulted in a nonlinear increase in perceived dissimilarity.
An important aspect for the relevance of our results is how natural the face stimuli look. We asked 24 participants to rate the naturalness of all stimuli, including the original faces and the created faces. Despite numerous manipulations, the vast majority of our created face stimuli were rated as being as natural as the original faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2011/VSS-2011-Esins.pdfpublished-673The role of featural and configural information for perceived similarity between faces1501715422BulthoffSMT20117IBülthoffSShrimptonBJMohlerIMThorntonNaples, FL, USA2011-09-00596In a previous series of desktop experiments we found no evidence that individuals' height influenced their representation of others' faces or their ability to process faces viewed from above or below (VSS 2009). However, in those experiments face orientation and body height were ambiguous, as isolated faces were shown on a computer screen to an observer sitting on a chair. To address those concerns and to specifically examine the influence of learned viewpoint, we created a virtual museum containing 20 full-bodied avatars (statues) that were either sitting or standing. Using a head-mounted display, observers walked through this virtual space three times, approached each statue, and viewed it from any horizontal (yaw) angle without time restrictions. We equated eye level, and thus simulated height, for all participants and restricted their vertical movement to ensure that the faces of sitting avatars were always viewed from above and standing avatars from below. After familiarization, recognition was tested using a standard old-new paradigm in which 2D images of the learnt faces were shown from various viewpoints. Results showed a clear influence of learned viewpoint. Faces that had been learned from above (below) were recognized more quickly and accurately in that orientation than from the opposite orientation.
Thus, recognition of specific, newly learned faces appears to be view-dependent in terms of pitch angle. Our failure to find a height effect in our previous study suggests that the variety of views of human faces experienced during a lifetime, and possibly the preponderance of conversational situations between humans at close range, typically counteract any influence that body size might have on a person's viewing experience of others' faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-596Using avatars to explore height/pitch effects when
learning new faces150171542267877IBülthoffRKLeeCWallravenHHBülthoffLausanne, Switzerland2010-08-0090Generally, faces of one’s own ethnicity are better remembered than faces of another race. The mechanisms of this other-race effect (ORE) are still unresolved. The present study investigates whether expertise for own-race faces results in an ORE in a discrimination task when only race-specifying information varies between faces, with no interference of identity change and no memory load. If expertise is an important factor for the ORE, Caucasian participants, for example, should better discriminate between two Caucasian faces presented side by side than between two Asian faces. We tested participants in Seoul and Tübingen with pairs of Asian or Caucasian faces. Their task was to tell which face of the pair was either more Asian or more Caucasian. Although we found that Asian face pairs were unexpectedly but consistently better discriminated than Caucasian face pairs, this Asian advantage did not differ between the two city groups. Our results furthermore show that Seoul and Tübingen participants’ discrimination performance was similar for Asian and Caucasian faces. These findings suggest that when no memory component is involved in the task and face appearance differs only in race-specifying information, own-race expertise does not result in better performance for own-race faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-90No other-race effect found in a task using faces differing only in race-specifying information150171542267287RGMArmannLJefferyACalderIBülthoffGRhodesNaples, FL, USA2010-05-00706High-level perceptual aftereffects have revealed that faces are coded relative to norms that are dynamically updated by experience. The nature of these norms and the advantage of such a norm-based representation, however, are not yet fully understood. Here, we used adaptation techniques to get insight into the perception of faces of different race categories.
We measured identity aftereffects for adapt-test pairs that were opposite a race-specific average and pairs that were opposite a ‘generic’ average, made by morphing together Asian and Caucasian faces. Aftereffects were larger following exposure to anti-faces that were created relative to the race-specific (Asian and Caucasian) averages than to anti-faces created using the mixed-race average. Since adapt-test pairs that lie opposite to each other in face space generate larger identity aftereffects than non-opposite test pairs, these results suggest that Asian and Caucasian faces are coded using race-specific norms. We also found that identification thresholds were lower when targets were distributed around the race-specific norms than around the mixed-race norm, which is also consistent with a functional role for race-specific norms.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-706Race-specific norms for coding face identity and a functional role for norms1501715422Bulthoff20097IBülthoffRegensburg, Germany2009-08-0078According to Bruce and Young's (1986 British Journal of Psychology 77 305 - 327) classic model of face recognition, sex-related information about a face is accessed independently of information about identity. Therefore familiarity with a face should not influence sex categorization. This issue of independence has remained controversial as it has been supported in some studies and questioned in others. Here we used faces that were presented in two conditions: sex-unchanged and sex-changed. Participants were very familiar with some of the identities. For all participants, the unchanged familiar faces presented congruent identity and sex information while the sex-changed familiar faces presented incongruent identity and sex information. Participants performed a sex categorization task on all familiar and unfamiliar faces presented in the unchanged and sex-changed condition. 
They were asked to ignore identity and base their responses solely on the sex appearance of the faces. Our results show that participants were slower and less accurate for sex-changed than for unchanged familiar faces, while those differences did not appear for unfamiliar faces. These results indicate that sex and identity are not independent, as participants could not ignore identity information while doing a sex categorization task.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-78Sex categorization is influenced by facial information about identity150171542259437NGaissertCWallravenIBülthoffNew York, NY, USA2009-07-00172173Categorization studies have primarily focused on the visual percept of objects. But in everyday life, humans combine percepts from different modalities. To better understand this cue combination and to learn more about the mechanisms underlying categorization, we performed different categorization tasks visually and haptically and compared the two modalities. All experiments used the same set of complex, parametrically-defined, shell-like objects based on three shape parameters (see figure and [Gaissert, N., C. Wallraven and H. H. Bülthoff: Analyzing perceptual representations of complex, parametrically-defined shapes using MDS. Eurohaptics 2008, 265-274]). For the visual task, we used printed pictures of the objects, whereas for the haptic experiments, 3D plastic models were generated using a 3D printer and explored by blindfolded participants using both hands.
Three different categorization tasks were performed in which all objects were presented to participants simultaneously. In an unsupervised task, participants had to categorize the objects into as many groups as they liked. In a semi-supervised task, participants had to form exactly three groups. In a supervised task, participants received three prototype objects (see figure) and had to sort all other objects into three categories defined by the prototypes. The categorization was repeated until the same groups were formed twice in a row. The number of repetitions needed was the same across modalities, showing that the task was equally hard visually and haptically. For more detailed analyses, we generated similarity matrices based on which stimulus was paired with which other stimulus. As a measure of consistency, within and across modalities as well as within and across tasks, we calculated cross-correlations between these matrices (see figure). Correlations within modalities were always higher than across modalities. In addition, as expected, the more constrained the task, the more consistently participants grouped the stimuli. Critically, multi-dimensional scaling analysis of the similarity matrices showed that all three shape parameters were perceived visually and haptically in all categorization tasks, but that the weighting of the parameters depended on the modality. In line with our previous results, this demonstrates the remarkable robustness of visual and haptic processing of complex shapes.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published1From unsupervised to supervised categorization in vision and haptics150171542256937RArmannIBülthoffUtrecht, Netherlands2008-08-00117Categorical perception (CP) has been demonstrated for face identity and facial expression, while conflicting results have been reported for sex. Furthermore, the question of whether processing of sex and identity information is linked remains open.
Based on extensive ratings of faces and sex morphs from our face database, we created 'controlled' male and female faces with similar perceived degrees of 'maleness' and 'femaleness'. We then examined CP of sex for these faces with classical discrimination and classification tasks using sex continua. Participants were naive (1), or had been familiarized with average faces of both sexes (2), or with the 'controlled' male and female faces (3). Our results confirm the lack of naturally occurring CP for sex in (1). Furthermore, they provide more evidence for the linked processing of sex and identity, as only participants in (3) showed clear CP. We found no evidence that familiarization with sex information (as given by average male and female faces) transfers to individual faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-117Categorical perception of male and female faces and the single-route hypothesis150171542250217IBülthoffQCVuongArezzo, Italy2007-08-00146Whether recognition and categorization are parallel or serial processes remains controversial. To
address this, we investigated whether face recognition is influenced by task-irrelevant face categories. We examined the recognition of a target face presented in the context of other faces of the same or different racial category using a same-different matching task. Caucasian participants were presented during learning with a set of six faces displaying one target face among different numbers of same-race faces. Participants recognized Caucasian targets better when five same-race faces rather than a single same-race face were present in the set, while this effect was absent for Asian targets. Surprisingly, participants recognized Asian targets better in sets with equal numbers of Asian and Caucasian context faces. Similar experiments, but with novel objects, were conducted in which categories were defined by similarity or expertise. These factors did not fully account for the context effects observed with faces. Overall, the results suggest that face recognition and categorization interact but other factors such as task difficulty may also affect face recognition.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-146The effect of context in face and object recognition150171542248827IBülthoffTWolfIMThorntonTübingen, Germany2007-07-00109In the German population, men are on average 13 cm taller than women. Smaller people,
many of them women, look at other faces from below (viewing angle), while tall people look at others from above. The minimal distance between two persons not engaged in mutual gaze is around 50 cm. Thus, with regard to male and female average statures, in close-up situations the average viewing angle between males and females is around 13 deg. Do people therefore have different “preferred” representations of faces depending on their stature? More specifically, are tall and small people more efficient at processing faces seen “from above” and “from below” respectively? Furthermore, do observers have different “preferred” representations of male and female faces because men are on average taller than women? To investigate the influence of stature and sex on face recognition, we first investigated whether efficiency in a sex classification task might be influenced by face orientation. To maximize stature differences between participants, we tested two groups: small women (under 165 cm) and tall men (over 180 cm). If face representation is influenced by stature, we expect small women to be more efficient (faster) at processing faces seen from below and vice versa for tall men. Furthermore, because of natural average stature differences between men and women, efficient categorization of male and female faces might depend on their orientation. We used unfamiliar male and female faces shown at pitch angles between -18 deg (looking downward) and +18 deg (looking upward). We tested participants in a speeded sex classification task. Male and female participants saw 220 faces one by one and had to classify them as male or female as fast as possible. Classification accuracy was high (over 95%). Analysis of reaction times showed no relation between observer stature, the sex of the shown face, and its pitch orientation, suggesting that sex-related face processing is not predominantly influenced by the stature of the observer or the sex of the presented face.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-109Looking Down, Looking Up: Does Stature Influence Face Recognition?150171542248817RArmannIBülthoffTübingen, Germany2007-07-00108Knowing where people look in a face provides an objective insight into the information entering
the visual system and into the cognitive processes involved in face perception. Eye-tracking
studies on face perception have mostly investigated observers’ viewing behavior when studying
single faces. However, in day-to-day situations, humans also compare faces or match a person’s
face to a photograph. During comparison, facial information remains visually accessible,
freeing observers from time and encoding constraints . Here, we recorded eye movements of
human participants while they compared two faces presented simultaneously. We used (i) two
different tasks (discrimination or categorization), and (ii) faces differing either in identity or in
sex. In addition, we varied (iii) task difficulty, i.e. the similarity of the two faces in a pair. Eye
movements to previously defined areas of interest (AOIs) on the faces were analyzed in terms
of frequency, duration and the temporal pattern of fixations made. We found that the eyes were
fixated most often in the discrimination tasks (37% of all fixations) but the nose in the categorization
task (34.5%), while the total number of fixations increased with task difficulty. Faces
differing in sex were more difficult to discriminate than faces differing in identity (63% versus
76% correct responses), which was also reflected in more fixations to face pairs differing in
sex (14.4 versus 11.8 fixations per trial). With increasing task difficulty, fixations to only some
AOIs increased, in accordance with the literature (more to the eyes in the sex and more over
all areas in the identity discrimination tasks; ). Unexpectedly, we found a striking effect of
tasks on performance measures, as over 80% of participants could detect the more feminine of
two faces (categorization task) even at the most similar level, but for the same face pairs their
performance in a discrimination task was less than 30% correct. Another interesting finding
is that observers mostly compared the inner halves of the two faces of a pair, instead of the
corresponding features (e.g., the left eye of the left face with the left eye of the right face). This
viewing behavior remained the same in a control experiment where participants’ head was not
fixed. Quite surprisingly, female participants fixated the eyes of the face stimuli significantly
more often than male participants did, but only when the sex of the faces was a relevant feature in
the task.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-108Sex Matters When You Ask the Right Question: What Affects Eye Movements in Face Comparison Tasks?150171542246907RArmannIBülthoffSarasota, FL, USA2007-06-005Eye-tracking studies on face perception have mostly investigated observers' eye movement behavior when studying single faces. However, in day-to-day situations, humans also compare faces or try to match a person's face to a photograph. During comparison, facial information remains visually accessible. This frees observers from time and encoding constraints (Galpin & Underwood, 2005). Here, we present eye movement data of participants required to compare two faces that were presented side by side. We used (1) two different tasks (discrimination or categorization), and (2) two types of face stimuli: faces differing either in identity or in sex. In addition, we varied (3) task difficulty, i.e. the similarity of the two faces in a pair. Eye-fixations in predefined facial regions were recorded and analyzed, for example, with regard to their frequency and duration. Our findings reveal, for instance, that the eyes were fixated more often in the discrimination tasks (38% of all fixations) than in the categorization task (29%), while the total number of fixations increased significantly with increasing task difficulty (p < 0.001 in all cases, N=20). Faces differing in sex were more difficult to discriminate than faces differing in identity (63% versus 76% correct responses), which was reflected by increased fixations to face pairs that differed in sex (14.4 versus 11.8 fixations per trial). Unexpectedly, we found a striking effect of tasks on performance measures, as over 80% of participants could detect the more feminine of two faces (categorization task) even at the most similar level, but for the same face pairs their performance in a discrimination task was less than 30% correct. 
Viewing behavior of male and female participants differed, but only when the sex of the faces was relevant for the task.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/Poster_VSS_2007_.pdfpublished-5Sex matters when you ask the right question: What affects eye movements in face comparison tasks?150171542250367CMichelBRossionWHaywardIBülthoffQVuongSarasota, FL, USA2007-06-007Both shape and surface dimensions play an important role in face (e.g. O'Toole et al., 1999) and race recognition (Hill et al., 1995). However, the relative contribution of these cues to other-race (OR) face recognition has not been investigated. Some facial properties may be diagnostic in one race but not in the other (e.g. Valentine, 1991). Observers of different races would rely on facial cues that are diagnostic for their own-race faces, a phenomenon which could partly explain our relative difficulty at recognizing OR faces at the individual level (the so-called other-race effect). Here, we tested this hypothesis by examining the relative role of shape and surface properties in the other-race effect (ORE). For this purpose, we used Asian and Caucasian faces from the MPI face database (Vetter & Blanz, 1999) so that we could vary both shape and surface information, only shape information (in which the surface texture was averaged across individual faces of the same race), or only surface information (in which shape was averaged). The ORE was measured in Asian and Caucasian participants using an old/new recognition task. When faces varied along both shape and surface dimensions, Asians and Caucasians showed a strong ORE (i.e. a better recognition performance for same- than other-race faces). With faces varying along only shape dimensions, the ORE was no longer observed in Asians, but remained present in Caucasians. 
Finally, when presented with faces varying only along surface dimensions, the ORE was not found for Caucasians, whereas it was present in Asians. These results suggest that the difficulty in recognizing OR faces for Asian observers can be partly due to their inability to discriminate among surface properties of OR faces, whereas the ORE for Caucasian participants would be mainly due to their inability to discriminate among shape cues of OR faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-7The role of surface and shape information in the other-race face effect1501715422BulthoffN20067IBülthoffFNNewellSt. Petersburg2006-08-00204We had shown that memory for a face can be influenced by the distinctiveness of an utterance to which it has been associated (Bülthoff and Newell, 2004 Perception 33 Supplement, 108). Furthermore, recognition of a face can be primed by a paired utterance, suggesting that there is a tight, cross-modal coupling between visual and auditory stimuli and that face distinctiveness can be influenced by cross-modal interaction with auditory stimuli like utterances. When instrumental sounds are used instead of utterances, the perceptual quality of auditory stimuli also seemed to affect memory for faces. Here we further investigated whether instrumental sounds can also prime face recognition. Our results show that this is not the case; arbitrary auditory stimuli do not prime recognition of faces. This suggests that utterances are easier to associate closely with faces than arbitrary sounds. We also investigated whether the observed priming effect of utterances might have been based on the use of different first names in each utterance. We repeated the priming experiment using the same utterances, but name information was removed. A significant priming effect was observed. 
Thus, the semantic information related to the first name is not decisive for the priming effect of utterances on face recognition.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-204Cross-modal interaction can modulate face distinctiveness150171542240597IBülthoffFNNewellSarasota, FL, USA2006-06-0010Our previous studies have shown that memory for a face can be affected by the distinctiveness of a voice to which it had been paired (Bülthoff & Newell, ECVP2004). Moreover, we showed that voices can prime face recognition, suggesting a tight, cross-modal coupling between both types of stimuli. Further investigations, however, seemed to suggest that non-person-related audio stimuli could also affect memory for faces. For example, faces that had been associated with distinctive instrumental sounds were indeed better recognized in an old/new task than faces paired to typical sounds. Here we investigated whether these arbitrary sounds can also prime face recognition. Our results suggest that arbitrary audio stimuli do not prime recognition of faces. This finding suggests that attentional differences may have resulted in better recognition performance for faces paired to distinctive sounds in the explicit old/new task. Voices are easier to associate closely with faces. We also investigated whether the voice priming effect found earlier might be based on the use of different first names in each audio stimulus, that is, whether the effect was based on semantic rather than perceptual information. We repeated the priming experiment using the same voice stimuli, but name information was removed. The results show that there is still a significant priming effect of voices to faces, albeit weaker than in the full voice experiment. The semantic information related to the first name helps but is not decisive for the priming effect of voices on face recognition.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-10Voices, not arbitrary sounds, prime the recognition of familiar faces150171542248297IBülthoffFNNewellTübingen, Germany2006-03-0072In this study we ask whether visually typical faces can become perceptually distinctive when
they are paired to auditory stimuli that are distinctive. In a first set of experiments (Bülthoff
& Newell, ECVP 2004), we had investigated the effect of voice distinctiveness on face recognition.
Memory for a face can be influenced by the distinctiveness of an utterance to which
it has been associated. Furthermore, recognition of a familiar face can be primed by a paired
utterance. These findings suggest that there is a tight, cross-modal coupling between the faces
presented and the associated utterances and that face distinctiveness can be influenced by cross-modal
interaction with auditory stimuli like voices. In another set of experiments, we used instrumental
sounds instead of voices and showed that arbitrary auditory stimuli could also affect
memory for faces. Faces that had been paired with distinctive instrumental sounds were better
recognized in an old/new task than faces paired to typical instrumental sounds. Here we
investigated whether these instrumental sounds can also prime face recognition, although these
auditory stimuli are not naturally associated with faces as voices are. Our results suggest that this
is not the case; arbitrary audio stimuli do not prime recognition of faces. This finding suggests
that attentional differences may have resulted in better recognition performance for faces paired
to distinctive sounds in the old/new task. It also suggests that utterances are easier to associate
closely with faces than arbitrary sounds. In a last set of experiments, we investigated whether the
voice priming effect shown in the first set of experiments might be based on the use of different
first names in each utterance. Thus, we asked whether semantic rather than perceptual information
was decisive in the utterances used. We repeated the priming experiment using the
same voice stimuli, but name information was removed. The results show that there is still a
significant priming effect of voices to faces, albeit weaker than in the full voice experiment.
The semantic information related to the first name helps but is not decisive for the priming
effect of voices on face recognition.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-72Face Distinctiveness can be Modulated by Cross-Modal Interaction with Auditory Stimuli150171542233017IBülthoffTübingen, Germany2005-02-00124It is known that we are quite accurate at judging the sex of unfamiliar faces. Furthermore, sex categorization is performed more rapidly, on average, than familiarity or identity decisions. In one of our recent studies on face perception with unfamiliar faces, we were surprised
to find that discrimination performance was much lower for faces differing in sex than for faces whose features were morphed between two identities. Here, we investigated whether this observation also holds for familiar faces. The motivation for this series of experiments was to
find out whether memory for familiar faces shows similar differences, i.e. whether participants are less accurate when they have to remember the specific femininity or masculinity of a well-known face than when identity-related changes of facial features are involved. Participants had to
identify the veridical faces of familiar work colleagues among ten distractor faces that were morphing variations of the original faces. Distractor faces varied either in identity, caricature or sex. In the identity face sets, distractor faces were morphs between the original face and
unfamiliar faces mixed in different proportions. In the caricature face sets, distractors were different caricatures of the original face. Finally, in the sex face sets, distractor faces were different feminized and masculinized versions of the veridical face. Participants performed best when the original face was presented among identity distractors. They had a tendency to choose feature-enhancing caricatures over the original faces in caricature sets. Participants were very poor at finding the original faces in the sex sets. Generally, our findings with unfamiliar faces show that sex-related changes in facial features are less obvious to observers than
identity-related changes. Furthermore, our study on familiar faces suggests that we do not retain sex-related facial information in memory as accurately as identity-related information. These results have important implications for models of face recognition and how facial features are
represented in the brain.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-124Sensitivity to changes in identity, caricature and sex in face recognition150171542230677IBülthoffFNNewellBudapest, Hungary2004-09-00108We can recognise distinctive faces more easily than typical ones. We investigated whether this distinctiveness effect appears for visually typical faces when these faces have been associated with features that are distinctive in another sensory modality. Participants first learned a set of unfamiliar faces. During learning, half of these faces were paired with distinctive auditory stimuli and half with typical stimuli. In experiment 1, the auditory stimuli were voices. We found that recognition performance in a visual recognition test was significantly (p < 0.005) better for faces that had been paired with distinctive voices. In experiment 2, we tested whether voice information improved face recognition directly by association or whether distinctiveness effects were due to enhanced attention during learning. In a priming experiment, participants recognised a face significantly faster (p <0.05) when this face was preceded by its congruent voice. Thus the quality of auditory information can affect recognition in another modality like vision. In experiment 3, the stimuli consisted of non-speech sounds. In this experiment, we tested whether voices and faces represent a special case of cross-modal memory enhancement or whether this distinctiveness effect occurs also with more arbitrary associations. Recognition performance in a visual recognition test suggests that a similar effect is present.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-108Interactions between audition and vision for face recognition150171542226317IBülthoffRLKlatzkyFNNewellTübingen, Germany2004-02-00122Studies of visual size perception with the method of magnitude estimation have shown a linear
relationship between actual sizes and magnitude estimates. Similar studies for touch do
not yield unequivocal evidence for a linear relationship; in some cases, a positively accelerated
power function best described the relationship between stimulus sizes and estimates.
We have investigated haptic magnitude estimation for length in two haptic experiments with
different methods of haptic exploration (whole hand, finger span).
The haptic stimuli consisted of 15 rectangular shapes. The only difference from one shape
to another was the length of the horizontal side, which ranged from 40 mm to 68 mm in equal
intervals. For all shapes, the depth and height were 10 mm and 40 mm, respectively.
In the Multiple cues Experiment, blindfolded participants used their dominant hand to feel
each shape freely. The shape was presented fixed flat onto a support, so they could feel the
entire shape under their hand. The participants' task was to give a modulus-free magnitude
estimate for the horizontal side. All shapes were presented once in random order in each block.
In the Single cue Experiment, blindfolded participants were restricted to grasping the horizontal
side of a shape between the thumb and index finger of their dominant hand. Their task
was to give a magnitude estimate for the length of that side.
Magnitude estimates for side length could be fitted by a two-parameter linear function with
a high goodness-of-fit statistic in both experiments (R² ≥ .97). Thus, when participants were
given a size range of 40 to 68 mm, their magnitude estimates increased linearly with each
physical increment, independently of the exploration method used.
Because of the small range of total size variation present in the shape set, we do not conclude
from our results that haptic magnitude estimation of unidimensional size is generally
linear. It should be noted that the present linear functions had a negative y-intercept and that
when a power function was fit to the data, the exponent was greater than 1.0 in both experiments,
and goodness-of-fit was also high. Our results suggest, however, that haptic perception
of size can safely be considered linear within this small part of the size continuum. These results
are important for considering further psychophysical studies with shapes within this size
range.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-122Haptic Magnitude Estimates of Size for Graspable Shapes150171542226307IBülthoffFNNewellHHBülthoffVancouver, Canada2003-11-0057Face studies have shown that distinctive faces are more easily recognized than typical faces in memory tasks. We investigated whether a cross-modal interaction between auditory and visual stimuli exists for face distinctiveness. During training, participants were presented with faces from two sets. In one set all faces were accompanied by characteristic auditory stimuli (d-faces). In the other set, all faces were accompanied by typical auditory stimuli (s-faces). Face stimuli were counterbalanced across auditory conditions. We measured recognition performance in an old/new recognition task. Face recognition alone was tested. Our results show that participants were significantly better (t(12) = 3.89, p< 0.005) at recognizing d-faces than s-faces in the test session. These results show that there is an interaction between different sensory inputs and that typicality of stimuli in one modality can be modified by concomitantly presented stimuli in other sensory modalities.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-57Interaction between vision and audition in face recognition1501715422BulthoffN2003_27IBülthoffFNNewellSarasota, FL, USA2003-10-00825Many face studies have shown that in memory tasks, distinctive faces are more easily recognized than typical faces. All these studies were performed with visual information only. We investigated whether a cross-modal interaction between auditory and visual stimuli exists for face distinctiveness. Our experimental question was: Can visually typical faces become perceptually distinctive when they are accompanied by voice stimuli that are distinctive? In a training session, participants were presented with faces from two sets. 
In one set all faces were accompanied by characteristic auditory stimuli during learning (d-faces: different languages, intonations, accents, etc.). In the other set, all faces were accompanied by typical auditory stimuli during learning (s-faces: same words, same language). Face stimuli were counterbalanced across auditory conditions. We measured recognition performance in an old/new recognition task. Face recognition alone was tested. Our results show that participants were significantly better (t(12) = 3.89, p < 0.005) at recognizing d-faces than s-faces in the test session. These results show that there is an interaction between different sensory inputs and that typicality of stimuli in one modality can be modified by concomitantly presented stimuli in other sensory modalities.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-825Interaction between vision and speech in face recognition1501715422BulthoffN20037IBülthoffFNNewellTübingen, Germany2003-02-00147Various factors have been identified that influence face recognition. Despite the diversity
of the studies on face recognition, mostly factors related to visual information have
been investigated so far. Among factors like facial motion, orientation and illumination,
the distinctiveness of faces has been extensively studied. It is well known that
distinctive faces are more easily recognized than typical faces in memory tasks. In our
study we have addressed the question whether factors that are not of visual nature
might also influence face recognition. More specifically, our experimental question was:
can visually typical faces become perceptually distinctive when they are accompanied
by voice stimuli that are distinctive and can these faces therefore become in this way
more easily recognizable? In a training session, participants saw faces from two sets.
In one set all faces were accompanied by characteristic auditory stimuli during learning
(d-faces: different languages, intonations, accents, etc.). In the other set, all faces were
accompanied by typical auditory stimuli during learning (s-faces: same words, same language).
Face stimuli were counterbalanced across auditory conditions. Face recognition
alone was tested. We measured recognition performance in an old/new recognition task.
Our results show that participants were significantly better (t(12) = 3.89, p < 0.005) at
recognizing d-faces than s-faces in the test session. Thus, our results demonstrate that the
perceptual quality of auditory stimuli (distinctive or typical) presented simultaneously
with face stimuli can modify face recognition performance in a subsequent memory
task and that typicality of stimuli in one modality can be modified by concomitantly
presented stimuli in other sensory modalities.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-147Cross-modal Aspect of Face Distinctiveness1501715422Bulthoff20027IBülthoffSarasota, FL, USA2002-11-00620Faces are easily categorized as male or female. But is this categorization done at the perceptual level? In previous studies (ECVP 2001), we found no categorical perception of gender for face stimuli using two discrimination tasks: either simultaneous same-different task or delayed matching-to-sample. This conflicts with results of another study using a different task (Campanella et al, Visual Cognition, 2001). Here we tested whether categorical perception of gender might become apparent if we used a discrimination task (sequential same-different task) more similar to that used by Campanella et al. We employed the same type of stimuli as in our previous experiments. The face stimuli were created by generating series of morphs between pairs of male and female 3D faces (gender continua). We also generated a gender continuum based on an average face. While gender-related information was present in this latter continuum, the stimuli lacked individual characteristic facial features that might induce identity-related categorical perception. If male and female faces belong to perceptually distinct gender categories, we would expect that two faces that straddle the gender boundary are more easily discriminated than two faces that belonged to the same gender category. In our previous experiments we never found any evidence of categorical perception for unfamiliar faces. Our present results confirm these findings. We found no evidence that participants could discriminate more easily between faces that straddle the gender category. Thus no categorical effect for face gender was revealed when sequential same-different discrimination task was used. 
The conflicting results obtained by both studies do not appear to be due to the different discrimination tasks employed.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-620No categorical perception of face gender found with different discrimination tasks150171542211317IBülthoffTübingen, Germany2002-02-0084In previous studies, we investigated whether male and female faces are perceived as distinct categories at the perceptual level and found no evidence of categorical perception using various discrimination tasks. In the present study we tested whether categorical perception of our stimuli might become apparent with yet another discrimination task, a sequential same-different task. The face stimuli used in all our experiments were derived from a database of 200 3D-laser scans of male and female faces (http://faces.kyb.tuebingen.mpg.de). Series of 3D-morphs were computed between individual male and female faces using the method of Blanz & Vetter (1999). Additionally, all faces of the database were used to compute average male and female faces to generate another series of morphs which was devoid of any individual features. One prediction of categorical perception is that two face stimuli that belong to different gender categories should be easier to discriminate than two face stimuli belonging to the same gender. In all our studies, including the present one, most face pairs that straddle the gender category were not more easily discriminated than same-category pairs. Thus, despite the use of different discrimination tasks, we found no categorical effect for face gender with our face stimuli, even when exemplar-specific effects are eliminated, as is the case with average faces. We will discuss these results and compare them to the conflicting results of Campanella et al. 
(2001), who carried out similar experiments with different morphing techniques.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-84Face gender is not perceived categorically150171542211117BKnappmeyerCTappeIBülthoffTübingen, Germany2002-02-0083Caricatured faces are recognized as quickly and accurately as (and sometimes faster and
better than) the veridical versions (Benson & Perrett, 1994). This “caricature effect” (CE)
has been demonstrated only for the frontal view of faces and only when the caricatures
were presented during the testing phase. First, we investigated whether the caricature
effect generalizes across changes in viewpoint (frontal, three-quarter, and profile). Second,
we examined the effect of presenting caricatured faces during the learning phase,
which we term the “reverse caricature effect” (RCE). Face recognition performance was
tested using two tasks: an old/new recognition paradigm and a sequential matching task.
Observers learned faces either in the frontal, three-quarter, or profile views, and were
tested with all three viewpoints. Half of the subjects participated in the CE condition
(learning with veridicals, testing with caricatures) and the other half of the subjects participated
in the RCE condition (learning with caricatures, testing with veridicals). The
caricatures were created using a 3D face morphing algorithm (Blanz & Vetter, 1999).
Sensitivity was measured using d’. For the CE condition, caricatures were recognized
more accurately than veridical versions of the same face (mean d’: sequential
matching: caricature=1.15, veridical=1.09; Old/New: caricature=1.42, veridical=1.18).
This difference was (nearly) significant (sequential matching: F(2,58)=28, p<0.001; Old/
New: F(1, 162)=3.19, p=0.076). The interaction between face caricature level and viewpoint
(testing view and/or learning view) was not significant. This suggests that the caricature
effect generalizes across viewpoint. Similar results were found for the RCE condition.
These results are discussed within the framework of a face space model for
representing faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf1111.pdfpublished-83Recognizing faces across different views: does caricaturing help?150171542211337IBülthoffFNNewellSarasota, FL, USA2001-12-00281Background: Our visual system uses a sophisticated mechanism called categorical perception to discriminate between highly similar objects. Small perceptual differences are enhanced thus creating clear boundaries between groups of items. Purpose: Although it seems to be an easy task to classify people by gender, we wondered whether facial information was sufficient for this purpose. Using the morphing technique of Blanz and Vetter (1999) we built an average three-dimensional head model from a database of 200 laser-scanned faces. We constructed an artificial gender continuum of this average head and used the faces in categorization and discrimination experiments. Results: Gender information was present in our face set and was easily identified by the participants. However when we tested for the existence of a categorical effect, we found no evidence of enhanced discrimination for faces straddling the gender category boundary. In previous studies we found also no evidence of categorical perception when using faces of individuals (Buelthoff & Newell, 2000). Our results with average faces confirm the previous findings and avoid any personal distinctive features that might interfere with the analysis. Furthermore, the use of average faces insures to have endpoint faces situated at approximately equal distance from the gender boundary. Conclusion: The absence of a categorical effect is surprising. Categorical perception has been shown repeatedly for other information displayed by faces (expressions and identity). Although we can tell quite reliably the sex of a face, there is no evidence of a distorted perceptual space for face gender. 
Furthermore, our results show that categorical perception does not always exist when similar items are categorized, not even for an important category like faces. Clearly, despite its enormous importance for social interactions, we have not learned to deal with the gender of faces very effectively.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf1133.pdfpublished-281Gender, average heads and categorical perception150171542211347IBülthoffFNNewellKuşadasi, Turkey2001-08-0054Categorical perception is a sophisticated mechanism which allows our visual system to discriminate between highly similar objects. Perceptually, physical differences between groups of objects are enhanced as compared to equal-sized differences within a group of objects, thus creating clear boundaries between groups of items. Humans are experts in face recognition. Does a categorical perception mechanism help us to differentiate between male and female faces?
Using a three-dimensional morphing technique, we built an average.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-54Average faces and gender categories: no evidence of categorical perception150171542211147CYChengBKnappmeyerIBülthoffNew Orleans, LA, USA2000-11-16The finding that caricatures are recognized more quickly and accurately than veridical faces has been demonstrated only for frontal views of human faces (e.g., Benson & Perrett, 1994). In the present study, we investigated whether there is also a caricature effect for three-quarter and profile views. Furthermore, we examined what happens to the caricature advantage when generalizing across view changes. We applied a 3D caricature algorithm to laser scanned head models. In a sequential matching task, we systematically varied the view of the target faces (left/right profile, left/right three-quarter, full-face), the view of the test faces (left/right profile, left/right three-quarter, full-face) and the face type (anticaricature, veridical, caricature). The caricature effect was replicated for frontal views. We also found a clear caricature advantage for three-quarter and profile views. When generalizing across views, the caricature advantage was present for the majority of view change conditions. In a few conditions, there was an anticaricature advantage.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0The caricature effect across viewpoint changes in face perception15017154221097IBülthoffFNNewellGroningen, Netherlands2000-08-0057We could find no evidence for categorical perception of face gender using unfamiliar human faces (I Bülthoff et al, 1998 Perception 27 Supplement, 127a). Therefore we have investigated whether familiarising participants with the stimuli prior to testing might favour categorical perception.
We created artificial gender continua using 3-D morphs between laser-scanned heads. The observers had to classify all faces according to their gender in a classification task. If perception of face gender is categorical, we would expect participants to classify the morphs into two distinct gender categories. Furthermore, they should differentiate pairs of morphs that straddle the gender boundary more accurately than other pairs in a discrimination task. The participants were familiarised before testing with half of the faces used for creating the morphs. They could categorise most familiar and unfamiliar faces into distinct gender categories. Thus, they could extract the gender information and use it to classify the images. On the other hand, we found no evidence of increased discriminability for the morph pairs that straddle the gender boundary. Apparently, observers did not perceive the gender of a face categorically, even when these faces were familiar to them.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf109.pdfpublished-57Investigating categorical perception of gender with 3-D morphs of familiar faces15017154221107IBülthoffFNNewellFort Lauderdale, FL, USA2000-05-00S225nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf110.pdfpublished0There is no categorical effect for the discrimination of face gender using 3D-morphs of laser scans of heads15017154222877IBülthoffFNNewellTVetterTübingen, Germany1999-02-0052Does the judgment of face gender show the characteristic features of categorical perception?
Durch ein automatisiertes 3D-Morph-Verfahren wurden aus 3D-Laser-scans von männlichen und weiblichen Köpfen Misch-Gesichter synthetisiert. Das Morph-Verfahren erlaubt sowohl die Textur als auch die Form eines Gesichtes zu verändern, so daß Pigmentation und Form zwischen männlichen und weiblichen Gesichtern kontinuierlich angepaßt werden können. Andere geschlechtsspezifische Merkmale wie Frisur, Bart, Make-up oder Schmuck wurden weggelassen oder computergraphisch entfernt. Alle Gesichter wurden in frontaler oder seitlicher Ansicht (3/4-view) mit neutralem Gesichtsausdruck präsentiert. Versuchspersonen haben zuerst eine Diskriminationsaufgabe (XAB-Test) durchgeführt und danach wurde die subjektive Geschlechtsgrenze entlang des Morph-Kontinuums in einer Kategorisierungsaufgabe bestimmt.
Es zeigte sich für alle Versuchspersonen die typische Stufenfunktion in der Kategorisierungsaufgabe. Im XAB-Test war es jedoch für die Versuchspersonen nicht einfacher, ein Gesichtspaar zu unterscheiden, das durch die putative kategorische Geschlechtsgrenze getrennt war als für Gesichtspaare an dem mehr weiblichen oder männlichen Ende des Morph-Kontinuums.
Unsere Experimente zeigen, daß das Geschlecht eines Gesichts nicht kategorisch wahrgenommen wird.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/pdf287.pdfpublished-52Geschlechtswahrnehmung von Gesichtern, die durch 3D-Morph-Verfahren erzeugt wurden1501715422BulthoffNVB19987IBülthoffFNNewellTVetterHHBülthoffOxford, UK1998-08-00127We investigated whether the judgment of face gender shows the typical characteristics of categorical perception. As stimuli we used images of morphs created between pairs of male/female 3-D head laser scans. In experiment 1, texture and shape were morphed between both faces. In experiment 2, either the average texture of all faces was mapped onto the shape continuum between the two faces or we mapped the texture continuum between each face pair onto an average shape face. Thus, either the shape or the texture remained constant in any one condition. The subjects viewed these morphs first in a discrimination task (XAB) and then in a categorisation task which was used to locate the subjective gender boundary between each male/female face pair. Although we found that subjects could categorise the face images by their gender in the categorisation task and that texture alone is a better gender indicator than shape alone, the subjects did not discriminate more easily between face images situated at the category boundary in any of our discrimination experiments. We argue that we do not perceive the gender of a face categorically and that more cues are needed to decide the gender of a person than those provided by the faces only.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-127Gender perception of 3-D head laser scans150171542211237IBülthoffFNNewellTVetterHHBülthoffOxford, UK1998-08-00127We investigated whether the judgment of face gender shows the typical characteristics of categorical perception. As stimuli we used images of morphs created between pairs of male/female 3-D head laser scans. 
In experiment 1, texture and shape were morphed between both faces. In experiment 2, either the average texture of all faces was mapped onto the shape continuum between the two faces or we mapped the texture continuum between each face pair onto an average shape face. Thus, either the shape or the texture remained constant in any one condition. The subjects viewed these morphs first in a discrimination task (XAB) and then in a categorisation task which was used to locate the subjective gender boundary between each male/female face pair. Although we found that subjects could categorise the face images by their gender in the categorisation task and that texture alone is a better gender indicator than shape alone, the subjects did not discriminate more easily between face images situated at the category boundary in any of our discrimination experiments. We argue that we do not perceive the gender of a face categorically and that more cues are needed to decide the gender of a person than those provided by the faces only.
[Gender perception of 3D head laser scans. http://www.kyb.tuebingen.mpg.de/]

M. A. Pavlova, I. Bülthoff, & A. N. Sokolov. Oxford, UK, 1998-08, p. 123.
Recently, we showed that recovery of a priori known structure from biological motion leveled off with changing display orientation (e.g. Pavlova and Sokolov, 1997, Perception 26 Supplement, 92). How does image-plane rotation of a prime affect detection of a camouflaged point-light walker? At each of five randomly presented display orientations between upright and inverted (0°, 45°, 90°, 135°, and 180°), viewers saw a sequence of displays (each display for 1 s). Half of them comprised a camouflaged point-light walker, and half a 'scrambled-walker' mask. In a confidence-rating procedure, observers judged whether a walker was present. Prior to each experimental sequence, they were primed (for 10 s) either with an upright-, 45°-, 90°-, or 180°-oriented sample of the walker.
Pronounced priming effects were found only with an upright-oriented prime: it improved detectability for the same-oriented displays, and to a lesser extent for 45°. With a 45° prime, sensitivity for 0°-, 45°-, and 90°-oriented displays was higher than for 135° and 180°. However, with 90°- and 180°-primes, the ROC curves for all orientations were situated close to one another. These findings indicate that the priming effect in biological motion is partly independent of the relative orientation of priming and primed displays. Moreover, it occurs only if a prime corresponds to a limited range of deviations from upright orientation within which the display is spontaneously recognisable despite a discrepancy between event kinematics and dynamics (Pavlova, 1996, Perception 25 Supplement, 6). The primacy of dynamic constraints in the perception of structure from biological motion is discussed.
[Perception of a camouflaged point-light walker: a differential priming effect. http://www.kyb.tuebingen.mpg.de/]

F. N. Newell, I. Bülthoff, T. Vetter, & H. H. Bülthoff. Fort Lauderdale, FL, USA, 1998-05, p. 173.
[Effects of shape and texture on the perceptual categorization of gender in faces. http://www.kyb.tuebingen.mpg.de/]

I. Bülthoff, F. N. Newell, T. Vetter, & H. H. Bülthoff. Fort Lauderdale, FL, USA, 1998-05, p. 171.
[Is the gender of a face categorically perceived? http://www.kyb.tuebingen.mpg.de/]

M. A. Pavlova, A. N. Sokolov, & I. Bülthoff. Tübingen, Germany, 1998-02, p. 120.
In spite of the potential perceptual ambiguity of a point-light walking figure, with upright display orientation observers can readily recover the invariant structure from biological motion. However, despite the same low-level relations between moving dots in upright and inverted orientations, perception of a point-light walker is dramatically impeded by 180° display inversion. Spontaneous recognition was found to improve abruptly with changing display orientation from inverted to upright (Pavlova, 1996, Perception 25, Suppl.). This evidence implies that the visual system implements additional processing constraints for the unambiguous interpretation of biological motion.
We used a masking paradigm to study the processing constraints in biological motion perception. At each of five randomly presented orientations (0°, 45°, 90°, 135°, and 180°), viewers saw a sequence of 210 displays. Half of them comprised a canonical 11-dot point-light walker, and half a partly distorted walker in which rigid pair-wise connections between moving dots were perturbed. A 66-dot "scrambled-walker" mask camouflaged both figures. Prior to each experimental sequence, a sample of a canonical walker in the respective orientation was demonstrated. Observers judged whether a canonical figure was present. A jackknife estimate of the ROC parameters indicated that detectability leveled off with changing orientation from upright to 135°, and then slightly increased toward display inversion. However, even at 135° and 180° it was above chance. For orientations of 0°, 45°, and 90°, perceptual learning to detect a canonical walker proceeded rather rapidly in the course of the experiment.
Comparison with the data on spontaneous recognition of biological motion suggests that display orientation affects bottom-up processing of biological motion more strongly than
We suppose that some processing constraints (such as the axis-of-symmetry and dynamic constraints) in the perception of biological motion may be hierarchically nested. Dynamic constraints appear to be the most powerful: the highest detectability was found with upright orientation. While these constraints lose their strength with changing orientation, other processing constraints become more influential. For instance, the lower sensitivity for 135° as compared to 180° might be accounted for by the axis-of-symmetry constraint that is implemented by the visual system at 180°. Likewise, due to the inefficiency of this constraint, the biological motion pattern is perceived as more multistable at 90°-150° than at 180° display orientation.
[Masking a point-light walker. http://www.kyb.tuebingen.mpg.de/]

S. Edelman, H. H. Bülthoff, & I. Bülthoff. Fort Lauderdale, FL, USA, 1996-04, p. S1125.
Purpose. The dimensions of the representation space of 3D objects may be independent if nonaccidental (generic or qualitative) shape contrasts serve as the distinguishing features. Alternatively, the dimensions can be interdependent, as predicted by some theories that postulate metric feature-space representations. To explore this issue, we studied human performance in forced-choice classification of objects composed of 4 geon-like parts emanating from a common center. Methods. The two class prototypes were distinguished by qualitative contrasts (cross-section shape; bulge/waist) and by metric parameters (degree of bulge/waist, taper ratio). Subjects were trained to discriminate between the two prototypes (shown briefly, from a number of viewpoints, in stereo) in a 1-interval forced-choice task, until they reached a 90% correct-response performance level. Subsequent trials involved both original and modified versions of the prototypes; the latter were obtained by varying the metric parameters both orthogonally (ORTHO) and in parallel (PARA) to the line connecting the prototypes in the parameter space. Results. 8 out of 11 subjects succeeded in learning the task within the allotted time. For these subjects, the error rates increased progressively with the parameter-space displacement between the stimulus and the corresponding prototype.
The effect of ORTHO displacement was significant: F(1, 68) = 3.6, p < 0.06. There was also a hint of a marginal PARA displacement effect: F(1, 68) = 1.9, p = 0.17. Conclusions. Theories that postulate exclusive reliance on qualitative contrasts (such as Biederman's Recognition By Components) predict near-perfect discrimination performance for stimuli derived from the prototypes both by PARA and by ORTHO parameter-space displacement. Our results contradict this prediction and support the notion of a metric representation space, in which any displacement away from the familiar region incurs performance costs.
[Interdependence of feature dimensions in the representation of 3D objects. http://www.kyb.tuebingen.mpg.de/]

I. Bülthoff, P. Sinha, & H. H. Bülthoff. Fort Lauderdale, FL, USA, 1996-04, p. 1125.
Purpose. Last year we demonstrated that the recognition of biological motion sequences is consistent with a view-based recognition framework. We found that anomalies in the depth structure of 3D objects had an intriguing lack of influence on subjects' ratings of their figural goodness. In the present work, we attempt to explain this result by showing a strong top-down influence from high-level vision (object recognition) on early vision (stereoscopic depth perception). Methods. We used biological motion sequences of the kind first described by Johansson (Percep. & Psychophysics, 14, 201-211, 1973) to study the perception of the 3D structure of human-like versus randomly moving dots displayed in stereo. The depth structure of the human sequence was altered by adding controlled amounts of depth noise (that left the 2D projections largely unchanged). "Random" sequences were created by adding x-y positional noise to the "human" sequences. In a 2AFC task, participants had to decide whether 3 randomly chosen dots from a stereoscopically displayed dot-motion sequence appeared at the same distance from the observer. Results.
Subject performance was significantly (p < 0.005) better with "random" sequences than with "human" ones. In a human sequence, triples drawn from the same limb were often perceived as lying in one depth plane irrespective of their actual "distorted" 3D configuration. Conclusions. These results indicate the existence of top-down, object-specific influences that suppress the perception of deviations from the expected 3D structure in a motion sequence. The absence of such an influence for novel structures might account for subjects' better performance with the random sequences.
[Top-down influence of recognition on stereoscopic depth perception. http://www.kyb.tuebingen.mpg.de/]

I. Bülthoff & P. Sinha. Tübingen, Germany, 1995-08, p. 112.
[Recognizing biological motion sequences. http://www.kyb.tuebingen.mpg.de/]

P. Sinha, H. H. Bülthoff, & I. Bülthoff. Fort Lauderdale, FL, USA, 1995-05, p. S417.
[View-based representations for biological motion sequences. http://www.kyb.tuebingen.mpg.de/]

I. Bülthoff, D. Kersten, & H. H. Bülthoff. Sarasota, FL, USA, 1994-05, p. 1741.
[General lighting can overcome accidental viewing. http://www.kyb.tuebingen.mpg.de/]

H. H. Bülthoff & I. Bülthoff. Sarasota, FL, USA, 1985-05, p. 56.
[Pharmacological inversion of directional specificity in movement detectors. http://www.kyb.tuebingen.mpg.de/]

H. H. Bülthoff & I. Bülthoff. Wien, Austria, 1985-05, p. 223.
[Umkehrung der Bewegungs- und Objektwahrnehmung durch einen GABA-Antagonisten bei Fliegen (Reversal of motion and object perception by a GABA antagonist in flies). http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/umkehrung_der_bewegungs_und_objektwahrnehmung_durch_einen_gaba_antagonisten_bei_fliegen_844.pdf]

H. H. Bülthoff, I. Bülthoff, & A. Schmid. Giessen, Germany, 1984-06, p. 276.
[Beeinflussung der Bewegungsdetektion durch Neuropharmaka (Influence of neuropharmacological agents on motion detection). http://www.kyb.tuebingen.mpg.de//fileadmin/user_upload/files/publications/beeinflussung_der_bewegungsdetektion_durch_neuropharmaka_840.pdf]