The main focus of our social cognitive research is on understanding social cognition in situations akin to real-life social interactions. We therefore use virtual and augmented reality to create interactive experiments; wide-screen displays to examine social cognitive processes over the entire visual field; action morphing to create highly controllable and natural-looking social stimuli; and tasks designed to target the social cognitive processes employed in real-life social cognition (e.g. telling different actions apart).
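As a rough illustration of the action-morphing idea, the sketch below linearly blends two time-aligned joint-angle sequences to produce intermediate actions along a controllable continuum. The function name, array shapes, and plain linear blending are illustrative assumptions, not our actual pipeline; real morphing systems typically also establish spatiotemporal correspondence between the actions (e.g. via time warping) before blending.

```python
import numpy as np

def morph_actions(action_a, action_b, weight):
    """Linearly blend two time-aligned pose sequences.

    action_a, action_b: arrays of shape (frames, joints) holding joint
    angles; weight 0.0 gives pure action A, 1.0 gives pure action B.
    """
    assert action_a.shape == action_b.shape, "sequences must be time-aligned"
    return (1.0 - weight) * action_a + weight * action_b

# Hypothetical usage: a half-way morph between two stand-in sequences
# (random numbers here stand in for motion-capture joint angles).
handshake = np.random.rand(120, 20) * 90.0  # 120 frames, 20 joints
high_five = np.random.rand(120, 20) * 90.0
stimulus = morph_actions(handshake, high_five, 0.5)
print(stimulus.shape)  # (120, 20)
```

Varying the weight in small steps yields a graded series of stimuli between the two actions, which is what makes morphed stimuli useful for tasks such as telling different actions apart.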
We are interested in the visual processing of non-verbal social information from facial and bodily cues:
Humans are social beings, and recognizing socially relevant information from other humans is critical for social functioning, e.g. when judging the emotional state of another person during a discussion. Facial expressions provide powerful non-verbal cues about the cognitive and emotional state of another person. Although humans seemingly have little difficulty using these visual cues, the underlying psychological processes are complex and far from fully understood. In this project we examine the social information processing of faces. [more]
Actions provide another source of information about the emotional state and intentions of another person. In this project we examine how observers are able to understand an action from its visual information and how this information is used in social interactions. Several variables are deemed important for the recognition of an action, namely dynamic action information, the observer's motor system, and the social context. We examine these and other factors using behavioral paradigms and fMRI. [more]
Cognition in social interactions.
Humans are social beings, and physically interacting with other people (social interactions, e.g. when shaking hands) is part of everyone's daily routine. Surprisingly little behavioral research has examined how humans use visual information to gain knowledge about the social actions of other people. In this project we aim to further our knowledge in this relatively novel field. Among other approaches, we use virtual reality to examine the processes involved in the visual recognition of social interactions and the processes engaged when people take part in a social interaction. [more]
Learning and navigating complex environments.
Knowledge about complex city and building environments acquired from navigation is represented in multiple, local reference frames corresponding, for example, to streets or corridors. This local information also incorporates local geometry as defined by walls. In contrast, (prior) map exposure yields a single, map-up oriented reference frame which is used for configurational or survey tasks within novel and highly familiar environments, but not for selecting routes. The underlying route knowledge focuses on turns, which is enabled by a default strategy of walking straight. Route guidance providing unambiguous turning information yields enhanced wayfinding performance. [more]
Spatial integration.
Rooms, buildings and cities cannot be grasped within one glance. To grasp their overall structure, one has to integrate multiple experiences that are related via movement trajectories or common landmarks. We showed that the reference frames used for integration depend on the learning time, the available movement trajectories, and knowledge of where the information will be used afterwards. The integration of city streets happens within an incremental process conducted from one's current location. [more]
Self-movement perception and representation of 3D space.
Humans locomote along the horizontal plane. Within buildings, however, they also have a vertical position. This project examines whether vertical movements are perceived in the same way as horizontal movements and how spaces that also extend vertically, such as buildings or cupboards, are represented in memory. This project therefore bridges perception and memory. [more]
Verbal and non-verbal codes in spatial memory.
In which format do humans represent their environment? Our results indicate that room locations and routes learned from navigation as well as from maps are encoded not only spatially, but also in a verbal code, e.g. as descriptions. The usefulness of verbal encoding, however, seems to depend on already established long-term associations between corresponding verbal and spatial elements, e.g., the label "T-intersection" and a spatial representation of a T-intersection. [more]
Theoretical foundations of spatial cognition.
This research is concerned with how space can be represented and processed, both theoretically and in the light of current empirical data. We argue that simple egocentric and allocentric conceptions are not sufficient to capture complex situations and suggest combining them into complex representations. We also proposed a theory of self-localisation and of route and survey navigation in environmental spaces. [more]
Bodies are physical and social entities. Distinguishing one’s body from the world is probably the most basic form of self-representation. Humans use visual and somatosensory cues to estimate the location and shape of their body and body parts. Our results showed that prior methods used to distinguish between visual and somatosensory representations are not specific to body parts, as assumed before, but also apply to spatial objects and parallel findings in memory for spatial object arrangements. Another line of research is concerned with the full-body illusion, in which participants can feel that a virtual body in front of them is actually their own body. Our results show that, contrary to earlier conceptions, this feeling of body identity and the subjective body location go hand in hand rather than being separable. In the further course of this project we aim to examine the effects of the visual full-body illusion on somatosensory body-part estimation. [more]
Social influences on space perception.
Within this project we examine potential social influences on room and distance perception. Results show cultural differences in the estimation of rooms. Participants also estimate virtual characters and objects facing them as closer than those facing away from them. Most humans and animals perceive and interact with their environment via their front. We are currently examining whether the observed asymmetries are driven by social processes, as indicated by perceived animacy and relations to attention and social distances, or by non-social object processing. [more]