Fademrecht, L. (2017). Action Recognition in the Visual Periphery. PhD thesis. Logos Verlag, Berlin, Germany.

Humans are social beings who interact with others in their surroundings. In a public space, for example on a train platform, one can observe the wide array of social actions humans perform in their daily lives: people hug each other, wave to one another, or shake hands. A large part of our social behavior consists of carrying out such social actions, and recognizing them facilitates our interactions with other people. Action recognition has therefore become an increasingly popular research topic over the years. Actions appear not only at our point of fixation but also in the peripheral visual field. This Ph.D. thesis aims at understanding action recognition in central and peripheral human vision.
To this end, action recognition processes were investigated under more naturalistic conditions than in previous work, extending our knowledge of these processes to more realistic scenarios and to the far visual periphery. In four studies, life-size action stimuli were used (I) to examine the action categorization abilities of central and peripheral vision, (II) to investigate the viewpoint dependence of peripheral action representations, (III) to measure behaviorally the sizes of the perceptive fields of action-sensitive channels, and (IV) to investigate the influence of additional actors in the visual scene on action recognition. The main results of the different studies can be summarized as follows.
Study I showed high categorization performance for social actions throughout the visual field, with a nonlinear performance decline towards the visual periphery. Study II revealed a viewpoint dependence of action recognition only in the far visual periphery. Study III measured large perceptive fields for action recognition whose size decreases towards the periphery. Study IV found no influence of a surrounding crowd of people on the recognition of actions in either central or peripheral vision. In sum, this thesis provides evidence that the abilities of peripheral vision have been underestimated and that peripheral vision may play a more important role in daily life than merely triggering gaze saccades to events in our environment.

Fademrecht, L., Bülthoff, I., & de la Rosa, S. (2017). Action recognition is viewpoint-dependent in the visual periphery. 135, 10–15.

Recognizing the actions of others across the whole visual field is required for social interaction. In a previous study, we showed that recognition is very good even when life-size avatars facing the observer carried out actions (e.g. waving) very far from the fovea (Fademrecht, Bülthoff, & de la Rosa, 2016). Here we explored whether this remarkable performance was owed to the life-size avatars facing the observer, which, according to some social cognitive theories (e.g. Schilbach et al., 2013), could activate different social perceptual processes than profile-facing avatars. Participants therefore viewed a life-size stick-figure avatar that carried out motion-captured social actions (greeting actions: handshake, hugging, waving; attacking actions: slapping, punching, kicking) in frontal and profile view. Participants' task was to identify the actions as 'greeting' or 'attack', or to assess their emotional valence.
While recognition accuracy for frontal and profile views did not differ, reaction times were generally faster for profile views (i.e. the moving avatar was seen side-on) than for frontal views (i.e. the action was directed toward the observer). Our results suggest that the remarkably good action recognition performance in the visual periphery was not owed to a more socially engaging front-facing view. Although action recognition depends on viewpoint, it remains remarkably accurate even far into the visual periphery.

Fademrecht, L., Bülthoff, I., & de la Rosa, S. (2016). Action recognition in the visual periphery. 16(3):33, 1–14.

Recognizing whether somebody's gestures mean a greeting or a threat is crucial for social interaction. In real life, action recognition occurs over the entire visual field; in contrast, much previous research on action recognition has focused on central vision. Here our goal was to examine what can be perceived about an action outside of foveal vision. Specifically, we probed the valence as well as first-level and second-level recognition of social actions (handshake, hugging, waving, punching, slapping, and kicking) at 0° (fovea/fixation), 15°, 30°, 45°, and 60° of eccentricity with dynamic (Experiment 1) and dynamic and static (Experiment 2) actions. To assess peripheral vision under conditions of good ecological validity, these actions were carried out by a life-size human stick figure on a large screen.
In both experiments, recognition performance was surprisingly high (more than 66% correct) up to 30° of eccentricity for all recognition tasks, and declined nonlinearly with increasing eccentricity.

Fademrecht, L., Bülthoff, H. H., & de la Rosa, S. (2016). Visual processes dominate perception and action during social interactions. Barcelona, Spain, August 29, 2016, p. 65.

A central question in visual neuroscience concerns the degree to which visual representations of actions are used for action execution. Previously, we have shown that during simultaneous action observation and execution, visual action recognition relies on visual but not motor processes. This research suggests a primacy of visual processes in social interaction scenarios. Here, we provide further evidence that visual processes dominate perception and action in social interactions by examining the influence of visual processes on motor control. Sixteen participants were tested in a 3D virtual environment setup. Participants were visually adapted to an action (fist bump or punch) and subsequently categorized an ambiguous morphed action as either fist bump or punch in three experimental conditions. In the first condition, participants responded via key press after having seen the entire test stimulus. In the second, they responded by carrying out the complementary action after having seen the entire test stimulus. In the third (social interaction) condition, they carried out the complementary action while observing the test stimulus. We found an antagonistic bias of movement trajectories towards the non-adapted action (adaptation aftereffect) only in the social interaction condition. Our results highlight the importance of visual processes in social interactions.

Meilinger, T., Strickrodt, M., Hinterecker, T., Chang, D.-S., Saulton, A., Fademrecht, L., & de la Rosa, S. (2016). Using Virtual Reality to Examine Social and Spatial Cognition. Tübingen, Germany, July 27, 2016.

The goal of social and spatial cognition research is to understand human behavior when humans interact with their natural social and spatial environment. In contrast, many studies in the field examine social and spatial cognition under controlled but artificial
conditions in which participants are passive observers rather than active agents. Here we present several projects in which we use virtual reality to increase the naturalness of the experimental testing conditions while keeping the experimental setup under tight experimental control. Virtual reality and related techniques allow participants to interact naturally with their environment (e.g. walk through spaces, high-five an avatar) while we alter the visual stimuli in real time in response to their behavior by means of motion tracking. Using this approach we combine experimental rigor with
increased ecological validity to learn about the cognitive processes actually taking place in real life.

Fademrecht, L., Nieuwenhuis, J., Bülthoff, I., Barraclough, N., & de la Rosa, S. (2016). Does action recognition suffer in a crowded environment? St. Pete Beach, FL, USA, May 14, 2016, p. 280.

In real life, humans need to recognize actions even when the actor is surrounded by a crowd of people, but little is known about action recognition in cluttered environments. In the current study, we investigated whether a crowd influences action recognition using an adaptation paradigm. Using life-sized moving stimuli presented on a panoramic display, 16 participants adapted to either a hug or a clap action and subsequently viewed an ambiguous test stimulus (a morph between the two adaptors). The task was to categorize the test stimulus as either 'clap' or 'hug'. The change in perception of the ambiguous action due to adaptation is referred to as an 'adaptation aftereffect'. We tested the influence of a cluttered background (a crowd of people) on the adaptation aftereffect under three experimental conditions: 'no crowd', 'static crowd' and 'moving crowd'. Additionally, we tested the adaptation effect at 0° and 40° eccentricity. Participants showed a significant adaptation aftereffect at both eccentricities (p < .001). The results reveal that the presence of a crowd (static or moving) has no influence on the action adaptation effect (p = .07), neither in central nor in peripheral vision. Our results suggest that action recognition mechanisms and action adaptation aftereffects are robust even in complex and distracting environments.

Fademrecht, L., Bülthoff, I., Barraclough, N. E., & de la Rosa, S. (2015). The spatial extent of action-sensitive perceptual channels decreases with visual eccentricity. Chicago, IL, USA, October 18, 2015.

Actions often occur in the visual periphery.
Here we measured the spatial extent of action-sensitive perceptual channels across the visual field using a behavioral action adaptation paradigm. Participants viewed an action (punch or handshake) for a prolonged time (the adaptor) and subsequently categorized an ambiguous test action as either 'punch' or 'handshake'. The adaptation effect refers to the biased perception of the test stimulus caused by prolonged viewing of the adaptor and the resulting loss of sensitivity to that stimulus: the more a channel responds to a specific stimulus, the larger the adaptation effect for that channel. We measured the size of the adaptation effect as a function of the spatial distance between adaptor and test stimuli in order to determine whether actions are processed in spatially distinct channels. Specifically, we adapted participants at 0° (fixation), 20° and 40° eccentricity in three separate conditions to measure the putative spatial extent of action channels at these positions. In each condition, we measured the size of the adaptation effect at -60°, -40°, -20°, 0°, 20°, 40° and 60° of eccentricity. We fitted Gaussian functions to describe the channel response in each condition and used the full width at half maximum (FWHM) of the Gaussians as a measure of the spatial extent of the action channels. In contrast to previous reports of an increase of midget ganglion cell dendritic field size with eccentricity (Dacey, 1993), our results showed that FWHM decreased with eccentricity (FWHM at 0°: 56°; at 20°: 29°; at 40°: 26°). We then asked whether the response of these action-sensitive perceptual channels can be used to predict the average recognition performance (d') of social actions across the visual field obtained in a previous study (Fademrecht et al., 2014). We used G(x), the summed response of all three channels at eccentricity x, to predict recognition performance at eccentricity x.
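As an aside, the analysis pipeline described here (Gaussian fits to adaptation effects, FWHM as the measure of channel extent, and a linear read-out a + b*G(x)) can be sketched in a few lines; all numbers, channel centres, and fitting choices below are illustrative assumptions, not the study's measurements.

```python
# Hypothetical sketch of the channel analysis: fit a Gaussian to adaptation
# effects measured at several test eccentricities, derive the channel's
# spatial extent as full width at half maximum (FWHM), and predict
# recognition performance from the summed channel response a + b*G(x).
# All values are illustrative, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def fwhm(sigma):
    # FWHM of a Gaussian follows directly from its standard deviation
    return 2 * np.sqrt(2 * np.log(2)) * sigma

# Illustrative adaptation effects for a channel adapted at 0 deg,
# tested from -60 to 60 deg of eccentricity
test_ecc = np.array([-60, -40, -20, 0, 20, 40, 60], dtype=float)
effect = np.array([0.02, 0.10, 0.35, 0.60, 0.33, 0.12, 0.03])

(amp, mu, sigma), _ = curve_fit(gaussian, test_ecc, effect, p0=[0.5, 0.0, 20.0])
print(f"fitted channel FWHM: {fwhm(abs(sigma)):.1f} deg")

# Summed response G(x) of channels centred at 0, 20 and 40 deg, then a
# linear read-out a + b*G(x) fitted to hypothetical d' values
centres = [0.0, 20.0, 40.0]
def G(x):
    return sum(gaussian(x, amp, c, abs(sigma)) for c in centres)

ecc = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
d_prime = np.array([3.0, 2.8, 2.3, 1.5, 0.9])  # illustrative d' values
b, a = np.polyfit(G(ecc), d_prime, 1)          # slope, intercept
print(f"read-out: d'(x) = {a:.2f} + {b:.2f} * G(x)")
```

In the study, separate Gaussians were fitted for channels adapted at 0°, 20° and 40°; the single fit above only illustrates the FWHM computation and the linear read-out.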
A simple linear transformation of the summed channel response of the form a + b*G(x) predicted 95.5% of the variance in recognition performance. Taken together, these results demonstrate that actions are processed in spatially distinct perceptual channels, that the channels' FWHM decreases with eccentricity, and that their responses can be used to predict action recognition performance in the visual periphery.

Fademrecht, L., Bülthoff, I., & de la Rosa, S. (2015). Recognition of static and dynamic social actions in the visual periphery. St. Pete Beach, FL, USA, September 2015, p. 494.

Although actions often appear in the visual periphery, little is known about action recognition outside the fovea. Our previous results showed that action recognition of moving life-size human stick figures is surprisingly accurate even in the far periphery and declines nonlinearly with eccentricity. Here, our aims were (1) to investigate the influence of motion information on action recognition in the periphery by comparing recognition of static and dynamic stimuli, and (2) to assess whether the nonlinearity observed in our previous study was caused by the presence of motion, since a linear decline of recognition performance with increasing eccentricity has been reported for static presentations of objects and animals (Jebara et al., 2009; Thorpe et al., 2001). In our study, 16 participants saw life-size stick-figure avatars that carried out six different social actions (three greetings and three aggressive actions). The avatars were shown dynamically and statically on a large screen at different positions in the visual field. In a 2AFC paradigm, participants performed three tasks with all actions: (a) they assessed their emotional valence; (b) they categorized each action as greeting or attack; and (c) they identified each of the six actions.
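The model comparison referenced in aim (2), a power-law versus a linear decline of performance with eccentricity, can be sketched as follows; the accuracy values and the exact model parameterizations are illustrative assumptions, not the study's data.

```python
# Hypothetical sketch of comparing a power-law and a linear model of the
# decline of recognition accuracy with eccentricity; data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

ecc = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
acc = np.array([0.98, 0.96, 0.90, 0.72, 0.45])  # illustrative proportion correct

def power_decline(x, a, b, n):
    # accuracy falls from a baseline by a power-law term in eccentricity
    return a - b * (x / 60.0) ** n

def linear_decline(x, a, b):
    return a - b * x

(pa, pb, pn), _ = curve_fit(power_decline, ecc, acc,
                            p0=[1.0, 0.5, 3.0],
                            bounds=([0, 0, 0.5], [2, 2, 10]))
(la, lb), _ = curve_fit(linear_decline, ecc, acc, p0=[1.0, 0.01])

# Compare goodness of fit by the sum of squared errors
sse_power = np.sum((acc - power_decline(ecc, pa, pb, pn)) ** 2)
sse_linear = np.sum((acc - linear_decline(ecc, la, lb)) ** 2)
print(f"power-law exponent: {pn:.2f}, SSE: {sse_power:.4f}")
print(f"linear SSE: {sse_linear:.4f}")
```

A formal comparison would of course use nested-model statistics rather than raw SSE; the sketch only shows the shape of the analysis.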
(1) We found better recognition performance for dynamic stimuli at all eccentricities; motion information thus aids recognition in the fovea as well as in the far periphery. (2) We observed a nonlinear decrease of recognition performance for both static and dynamic stimuli. Power-law functions with exponents of 3.4 and 2.9 described the nonlinearity for dynamic and static actions, respectively. These nonlinear functions described the data significantly better (p = .002) than linear functions, suggesting that human actions are processed differently from objects or animals.

Fademrecht, L., Barraclough, N. E., Bülthoff, I., & de la Rosa, S. (2015). Seeing actions in the fovea influences subsequent action recognition in the periphery. Liverpool, UK, August 2015, p. 214.

Although actions often appear in the visual periphery, little is known about action recognition away from fixation. We showed in previous studies that action recognition of moving stick figures is surprisingly good in peripheral vision, even at 75° eccentricity; moreover, performance did not decline up to 45° eccentricity. This finding could be explained by action-sensitive units in the fovea also sampling action information from the periphery. To investigate this possibility, we assessed the horizontal extent of the spatial sampling area (SSA) of foveal action-sensitive units using an action adaptation paradigm. Fifteen participants adapted to an action (handshake or punch) at the fovea and were tested with an ambiguous action stimulus at 0°, 20°, 40° and 60° eccentricity left and right of fixation. We used a large-screen display to cover the whole horizontal visual field. An adaptation effect was present in the periphery up to 20° eccentricity (p < 0.001), suggesting a large SSA of action-sensitive units representing foveal space.
Hence, action recognition in the visual periphery might benefit from a large SSA of foveal units.

de la Rosa, S., Wahn, Y., Bülthoff, H. H., Fademrecht, L., Saulton, A., Meilinger, T., & Chang, D.-S. (2015). Does the two streams hypothesis hold for joint actions? Budapest, Hungary, July 2, 2015, p. 53.

Associating sensory action information with the correct action interpretation (semantic action categorization, SAC) is important for successful joint action, e.g. for generating an appropriate complementary response. Vision for perception and vision for action have been suggested to rely on different visual mechanisms (two
streams hypothesis). To better understand the visual processes supporting joint actions, we compared SAC processes in passive observation and in joint action. If passive observation and joint action tap into different SAC processes, then adapting SAC processes during passive observation should not affect the generation of complementary action responses. We used an action adaptation paradigm to selectively measure SAC processes in a novel virtual reality setup that allowed participants to interact naturally with a human-looking avatar. Participants visually adapted to an action of an avatar and gave an SAC judgment about a subsequently presented ambiguous action in three experimental conditions: (1) by pressing a button (passive condition), or by producing an action response either (2) after (active condition) or (3) simultaneously with (joint action condition) the avatar's action. We found no significant difference between the three conditions, suggesting that
SAC mechanisms for passive observation and joint action share similar processes.

Fademrecht, L., Bülthoff, I., & de la Rosa, S. (2014). A matter of perspective: action recognition depends on stimulus orientation in the periphery. Beograd, Serbia, August 2014, p. 103.

Recognizing the actions of others in the periphery is required for fast and appropriate reactions to events in our environment (e.g. seeing kids running towards the street when driving). Previous results show that action recognition is surprisingly accurate even in the far periphery (up to 60° of visual angle, VA) when actions are directed towards the observer (front view). The front view of a person is considered critical for social cognitive processes (Schilbach et al., 2013). To what degree does the orientation of the observed action (front vs. profile view) influence the identification of the action and the recognition of the action's valence across the horizontal visual field? Participants saw life-size stick-figure avatars that carried out one of six motion-captured actions (greeting actions: handshake, hugging, waving; aggressive actions: slapping, punching, kicking). The avatar was shown on a large-screen display at different positions up to 75° VA. Participants either assessed the emotional valence of the action or identified it as 'greeting' or 'attack'. Orientation had no significant effect on accuracy. Reaction times were significantly faster for profile than for front views (p = 0.003) for both tasks, which is surprising in light of recent suggestions that front views are especially socially engaging.

Fademrecht, L., Bülthoff, I., & de la Rosa, S. (2014). Peripheral Vision and Action Recognition. Tübingen, Germany, June 2014.

Fademrecht, L., Bülthoff, I., & de la Rosa, S. (2014). Influence of eccentricity on action recognition. St. Pete Beach, FL, USA, May 20, 2014, p. 1006.

The recognition of actions is critical for human social functioning and provides insight into both the activities and the inner states (e.g. valence) of another person. Although actions often appear in the visual periphery, little is known about action recognition beyond foveal vision. Related previous research showed that object recognition and object valence (i.e. positive or negative valence) judgments are relatively unaffected by presentation up to 13° of visual angle (VA) (Calvo et al., 2010). This is somewhat surprising given that recognition performance for words and letters declines sharply in the visual periphery. Here, participants recognized an action and evaluated its valence as a function of eccentricity. We used a large-screen display that allowed presentation of stimuli over a visual field from -60° to +60° VA. A life-size stick-figure avatar carried out one of six motion-captured actions (3 positive actions: handshake, hugging, waving; 3 negative actions: slapping, punching, kicking). Fifteen participants assessed the valence of the action (positive or negative) and another 15 participants identified the action (as fast and as accurately as possible). We found that reaction times increased with eccentricity to a similar degree for the valence and the recognition task. In contrast, accuracy declined significantly with eccentricity for both tasks, and more sharply for the action recognition task. These declines were observed for eccentricities larger than 15° VA. Thus, we replicate the findings of Calvo et al. (2010) that recognition is little affected by extra-foveal presentation smaller than 15° VA. Yet we additionally demonstrate that visual recognition performance for actions declines significantly at larger eccentricities.
We conclude that testing at large eccentricities is required to assess the effects of peripheral presentation on visual recognition.

Fademrecht, L., de la Rosa, S., & Bülthoff, H. H. (2017).