Social & Spatial Cognition

Interactions with the social and spatial environment are fundamental components of daily life. Previous research has mainly examined the two domains separately; however, social and spatial processes interact. For example, joint actions unfold in space, and social factors such as interpersonal distance are expressed spatially. The goal of this group is to examine social and spatial cognition and, importantly, the relation between the two. Specifically, we investigate the representations underlying navigation, action perception and execution, and the representational overlap between human bodies, social properties, and space.

We examine these questions using experiments that closely mimic real-life conditions. Virtual reality and motion capture allow for realistic stimulus environments, such as virtual humans or buildings, with which participants continuously interact (see the videos on learning an environment and on interacting with an avatar). In combination with behavioral psychophysics, fMRI, and cognitive modeling, we work towards understanding the cognitive processes underlying human everyday behavior.

Main research areas

The main focus of our social cognitive research is on understanding social cognition in situations akin to real-life social interactions. We therefore use virtual and augmented reality to create interactive experiments; wide-screen displays to examine social cognitive processes across the entire visual field; action morphing to create highly controllable yet natural-looking social stimuli; and tasks designed to target social cognitive processes employed in real life (e.g., telling different actions apart). We are interested in the visual processing of non-verbal social information from facial and bodily cues:

Face recognition. Humans are social beings, and recognizing socially relevant information from other humans is critical for social functioning, e.g., when judging the emotional state of another person during a discussion. Facial expressions provide powerful non-verbal cues about the cognitive and emotional state of another person. Although humans seemingly have little difficulty using these visual cues, the underlying psychological processes are complex and far from fully understood. In this project we examine the social information processing of faces. [more]

Action recognition. Telling two actions apart (action discrimination) is important for everyday social life, for example, when generating an appropriate response to an observed action or estimating the cognitive states of others (theory of mind). We investigate this largely neglected but very important human ability using novel techniques developed in-house, e.g., action morphing (see app below), and methods such as action adaptation. We identify the functional and anatomical properties of action recognition processes (e.g., sensitivity to dynamic information, motor input, and social context) using behavioral paradigms and imaging techniques. [more]
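
As a rough illustration of the idea behind action morphing, the sketch below linearly blends two time-aligned joint-angle trajectories. This is a minimal toy version in Python/NumPy, assuming the two actions are already temporally aligned; the function name and placeholder data are illustrative, not our actual morphing implementation.

    import numpy as np

    def morph_actions(action_a, action_b, weight):
        """Linearly blend two time-aligned motion-capture actions.

        action_a, action_b: arrays of shape (n_frames, n_joint_angles).
        weight: 0.0 yields pure action A, 1.0 yields pure action B.
        """
        assert action_a.shape == action_b.shape, "actions must be time-aligned"
        return (1.0 - weight) * action_a + weight * action_b

    # Example: a five-step morph continuum between two placeholder actions.
    rng = np.random.default_rng(0)
    action_a = rng.random((120, 60))   # 120 frames, 60 joint angles
    action_b = rng.random((120, 60))
    continuum = [morph_actions(action_a, action_b, w)
                 for w in np.linspace(0.0, 1.0, 5)]

In practice, morphing natural actions additionally requires spatiotemporal alignment of the source recordings; linear blending is only the final step.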

Cognition in social interactions. The ultimate goal of social cognitive research is to understand human social behavior in real life, e.g., when humans interact with each other, such as when shaking hands. So far, methodological constraints, e.g., poor experimental control over interactive real-life situations, have prevented researchers from examining action-perception couplings of social behavior in interactive scenarios. We overcome this impasse by using virtual reality to create life-size experimental setups that allow participants to interact naturally with an avatar within a fully computer-controlled experimental situation (see movie above). Our goal is to identify the neural mechanisms of human social behavior when probed under realistic conditions. [more]

Learning and navigating complex environments. Knowledge about complex city and building environments acquired from navigation is represented in multiple local reference frames corresponding, for example, to streets or corridors. This local information also incorporates local geometry as defined by walls. In contrast, (prior) map exposure yields a single, map-up-oriented reference frame, which is used for configurational or survey tasks within novel and highly familiar environments, but not for selecting routes. The underlying route knowledge focuses on turns, which is enabled by a default strategy of walking straight. Route guidance providing unambiguous turning information yields enhanced wayfinding performance. [more]
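
To illustrate why route knowledge can reduce to turns, the toy sketch below compresses a 2D route into the turn decisions a navigator must remember, treating small heading changes as "walk straight". It is an illustrative example in Python/NumPy, not a published model; the tolerance parameter is an assumption.

    import numpy as np

    def route_to_turns(waypoints, straight_tol_deg=20.0):
        """Reduce a 2D route to the turn decisions a navigator must remember.

        Heading changes smaller than straight_tol_deg count as "walk
        straight" (the default strategy), so only genuine turns remain.
        """
        pts = np.asarray(waypoints, dtype=float)
        turns = []
        for i in range(1, len(pts) - 1):
            v_in, v_out = pts[i] - pts[i - 1], pts[i + 1] - pts[i]
            cross = v_in[0] * v_out[1] - v_in[1] * v_out[0]  # signed turn direction
            angle = np.degrees(np.arctan2(cross, np.dot(v_in, v_out)))
            if abs(angle) >= straight_tol_deg:
                turns.append(("left" if angle > 0 else "right", i))
        return turns

    # Three straight stretches collapse into two remembered turns.
    print(route_to_turns([(0, 0), (0, 5), (0, 10), (4, 10), (4, 14)]))
    # -> [('right', 2), ('left', 3)]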

Spatial integration. Rooms, buildings, and cities cannot be grasped in a single glance. To apprehend their overall structure, one has to integrate multiple experiences, which are related via movement trajectories or common landmarks. We showed that the reference frames used for integration depend on learning time, the available movement trajectories, and knowledge of where the information will be used afterwards. Integration of city streets proceeds incrementally from the navigator's current location. [more]
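
As a minimal geometric sketch of integration via a common landmark, the code below expresses locations learned in one local reference frame in the coordinates of another, using a landmark's position and facing direction seen in both frames as the anchor. The function names and example values are illustrative assumptions, not our experimental analysis code.

    import numpy as np

    def rotation(theta):
        """2D rotation matrix for angle theta (radians, counterclockwise)."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def integrate_frames(landmark_a, heading_a, landmark_b, heading_b, points_b):
        """Map points from local frame B into local frame A via a landmark
        represented in both frames (position plus facing angle)."""
        R = rotation(heading_a - heading_b)   # align B's orientation with A's
        t = landmark_a - R @ landmark_b       # then align the landmark positions
        return (R @ np.asarray(points_b, float).T).T + t

    # The same landmark, at (2, 0) facing 0 rad in frame A and at the
    # origin facing pi/2 in frame B, links the two local maps.
    merged = integrate_frames(np.array([2.0, 0.0]), 0.0,
                              np.array([0.0, 0.0]), np.pi / 2,
                              [(1.0, 0.0), (0.0, 1.0)])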

Self-movement perception and representation of 3D space. Humans locomote along the horizontal plane; within buildings, however, they also occupy a vertical position. This project examines whether vertical movements are perceived in the same way as horizontal movements, and how spaces that also extend vertically, such as buildings or cupboards, are represented in memory. The project therefore bridges perception and memory. [more]

Verbal and non-verbal codes in spatial memory. In which format do humans represent their environment? Our results indicate that room locations and routes, whether learned from navigation or from maps, are encoded not only spatially but also in a verbal code, e.g., as descriptions. The usefulness of verbal encoding, however, seems to depend on already established long-term associations between corresponding verbal and spatial elements, e.g., between the label "T-intersection" and a spatial representation of a T-intersection. [more]

Theoretical foundations of spatial cognition. This research is concerned with how space can be represented and processed, both theoretically and in the light of current empirical data. We argue that simple egocentric and allocentric conceptions are not sufficient to capture complex situations and suggest combining them into complex representations. We have also proposed a theory of self-localization and of route and survey navigation in environmental spaces. [more]
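
To make the egocentric/allocentric distinction concrete, here is a minimal sketch using our own illustrative convention (not a published formalism): an allocentric location is world-centred, while its egocentric counterpart is expressed relative to the observer's position and heading.

    import numpy as np

    def allo_to_ego(target, observer, heading):
        """Convert an allocentric (world-centred) location into egocentric
        coordinates (ahead, right) relative to an observer.

        heading: the observer's facing direction in radians, measured
        counterclockwise from the world x-axis.
        """
        d = np.asarray(target, float) - np.asarray(observer, float)
        ahead = d[0] * np.cos(heading) + d[1] * np.sin(heading)
        right = d[0] * np.sin(heading) - d[1] * np.cos(heading)
        return np.array([ahead, right])

    # An observer at the origin facing north (+y) sees a target at (1, 0)
    # as 0 m ahead and 1 m to the right.
    print(allo_to_ego((1.0, 0.0), (0.0, 0.0), np.pi / 2))  # -> [0. 1.]

Note that the allocentric coordinates of the target never change, whereas its egocentric coordinates change with every movement of the observer.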

Body space. Bodies are physical and social entities. Distinguishing one's body from the world is probably the most basic form of self-representation. Humans use visual and somatosensory cues to estimate the location and shape of their body and body parts. Our results showed that prior methods used to distinguish between visual and somatosensory representations are not specific to body parts, as previously assumed, but also apply to spatial objects and parallel findings in memory for spatial object arrangements. Another line of research is concerned with the full-body illusion, in which participants can feel that a virtual body in front of them is actually their own body. Our results show that, contrary to earlier conceptions, this feeling of body identity and the subjective body location go hand in hand rather than being separable. In the further course of this project we aim to examine the effects of the visual full-body illusion on somatosensory body-part estimation. [more]

Collaborative spatial problem solving. Almost all human-made objects involve collaborative spatial problem solving at some stage of their construction or production. We aim to examine the underlying cognitive processes, focusing on strategies, memory requirements, shared representations, and spatial transformations. First results on collaborative search (e.g., firefighters searching a building for victims) indicate that collaborative search, compared with individual search, may yield qualitatively better, but not necessarily more efficient, performance. We are currently examining the interaction of working memory and search strategy and are extending the problem to a collaborative 3D puzzle task. [more]

Social influences on space perception. In this project we examine potential social influences on room and distance perception. Results show cultural differences in the estimation of rooms. Participants also estimate virtual characters and objects facing them as closer than the same characters and objects facing away. Most humans and animals perceive and interact with the world through their front. We are currently examining whether social processes, indicated by perceived animacy and the relation to attention and social distances, or non-social object processing are responsible for the observed asymmetries. [more]
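
A minimal sketch of how "facing the observer" can be operationalized in such experiments; the criterion used here, a sign test on a dot product, is our illustrative choice rather than a description of the actual stimulus code.

    import numpy as np

    def faces_observer(agent_pos, agent_facing, observer_pos):
        """True if the agent's front is oriented toward the observer, i.e.
        the angle between its facing direction and the agent-to-observer
        vector is below 90 degrees."""
        to_observer = np.asarray(observer_pos, float) - np.asarray(agent_pos, float)
        return float(np.dot(np.asarray(agent_facing, float), to_observer)) > 0.0

    # An avatar at (0, 2) facing -y is oriented toward an observer at the origin.
    print(faces_observer((0.0, 2.0), (0.0, -1.0), (0.0, 0.0)))  # -> True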


Funding

Work from the group was funded by the EU projects ‘TANGO’, ‘JAST’, and ‘WAYFINDING’, by the DFG project ME 3476/2, the MPG-FHG project 'CoAvatar', and by the Humboldt Foundation.


Selected Publications

  • Meilinger T, Frankenstein J and Bülthoff HH (October 2013) Learning to navigate: Experience versus maps. Cognition 129(1) 24–30.
  • Frankenstein J, Mohler BJ, Bülthoff HH and Meilinger T (February 2012) Is the Map in Our Head Oriented North? Psychological Science 23(2) 120–125.
  • Streuber S, Knoblich G, Sebanz N, Bülthoff HH and de la Rosa S (October 2011) The effect of social context on the use of visual information. Experimental Brain Research 214(2) 273–284.