Group leaders

Stephan de la Rosa
Phone: +49 7071 601 606
Fax: +49 7071 601 616
Email: delarosa[at]
Tobias Meilinger
Phone: +49 7071 601 615
Fax: +49 7071 601 616
Email: tobias.meilinger[at]



Group Members

Five most recent Publications

25. Hinterecker T, Pretto P, de Winkel KN, Karnath H-O, Bülthoff HH and Meilinger T (October-2018) Body-relative horizontal–vertical anisotropy in human representations of traveled distances. Experimental Brain Research 236(10) 2811-2827.
24. Strickrodt M, Bülthoff HH and Meilinger T (September-2018) Memory for navigable space is flexible and not restricted to exclusive local or global memory units. Journal of Experimental Psychology: Learning, Memory, and Cognition. Epub ahead of print.
23. Hinterecker T, Leroy C, Kirschhock M, Zhao M, Butz MV, Bülthoff HH and Meilinger T (August-2018) Spatial memory for vertical locations. Journal of Experimental Psychology: Learning, Memory, and Cognition. Epub ahead of print.
22. Weller M, Takahashi K, Watanabe K, Bülthoff HH and Meilinger T (August-2018) The Object Orientation Effect in Exocentric Distances. Frontiers in Psychology 9(1374) 1-7.
21. de la Rosa S, Fademrecht L, Bülthoff HH, Giese MA and Curio C (August-2018) Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition. Psychological Science 29(8) 1257-1269.



13.6.2016 Manuscript accepted "Meilinger, T.,* Strickrodt, M.,* & Bülthoff, H.H. (accepted). Qualitative differences between environmental and vista space memory are caused by the separation of space, not by movement or successive perception. Cognition [*equal contributions]"
21.4.2016 Manuscript published: "Meilinger, T. & Watanabe, K. (2016). Multiple Strategies for Spatial Integration of 2D Layouts within Working Memory. PLoS ONE, 11, 1-22."
20.4.2016 Tobi finished his Habilitation.
31.3.2016 Our manuscript "Visual adaptation dominates bimodal visual-motor action adaptation" was published in Scientific Reports.
8.2.2016 Manuscript published: de la Rosa, S., Schillinger, F., Bülthoff, H., Schultz, J., & Uludag, K. (accepted). fMRI adaptation between action observation and action execution reveals cortical areas with mirror neuron properties in human BA 44/45. Frontiers in Human Neuroscience.
1.2.2016 Manuscript published: de la Rosa, S., Ekramnia, M., & Bülthoff, H.H. (accepted). Action recognition and movement direction discrimination tasks are associated with different adaptation patterns. Frontiers in Human Neuroscience.
1.2.2016 Paper published: Meilinger, T., Schulte-Pelkum, J., Frankenstein, J., Hardiess, G., Mallot, H.A., & Bülthoff, H.H. (2016). How to best name a place? Facilitation and inhibition of route learning due to descriptive and arbitrary location labels. Frontiers in Psychology, 7, 1-7.
29.1.2016 Paper published: Jung, E., Takahashi, K., Watanabe, K., de la Rosa, S., Butz, M.V., Bülthoff, H.H., & Meilinger, T. (accepted). The Influence of Human Body Orientation on Distance Judgments. Frontiers in Psychology.
8.1.2016 Manuscript published: Fademrecht, L., Bülthoff, I., & de la Rosa, S. Action recognition in the visual periphery. Journal of Vision.
24.1.2016 Manuscript accepted: Hinterecker, T., Knauff, M., & Johnson-Laird, P. N. (accepted). Modality, probability, and mental models. Journal of Experimental Psychology: Learning, Memory, and Cognition.
6.1.2016 Paper published: Saulton, A., Longo, M. R., Wong, H. Y., Bülthoff, H. H., & de la Rosa, S. (2016). The role of visual similarity and memory in body model distortions. Acta Psychologica, 164, 103–111.
24.12.2015 Paper published: Strickrodt, M., O'Malley, M., & Wiener, J. M. (2015). This Place Looks Familiar—How Navigators Distinguish Places with Ambiguous Landmark Objects When Learning Novel Routes. Frontiers in Psychology, 6.
1.12.15 Paper published: Chang, D-S., Burger, F., & Bülthoff, H.H. (2015). The Perception of Cooperativeness Without Any Visual or Auditory Communication. i-Perception, 6(6), 1-6. doi: 10.1177/2041669515619508
01.06.15: Manuscript accepted: Meilinger, T.,* Frankenstein, J.,* Simon, N., Bülthoff, H.H., & Bresciani, J.-P. (accepted). Not all memories are the same: situational context influences spatial recall within one's city of residency. Psychonomic Bulletin and Review. [*equal contributions]
01.02.2015: Manuscript 'Objects exhibit body model like shape distortions' of Aurelie Saulton got accepted.
02.12.2014: Symposium at ECVP 2015 on 'Interactive Social Perception and Action' accepted (organizers: de la Rosa, Canal-Bruland, Bülthoff)
02.12.2014: Dong-Seon Chang gave his science slam in front of the German Minister of Education and Research.
01.12.2014: Dong-Seon Chang won the National German Science Slam competition.

21.11.2014: Manuscript about 'Categorization of social interactions' accepted in Visual Cognition.
19.11.2014: Dong-Seon Chang won the 10th Science Slam in Stuttgart (on national TV).
10.11.14 Manuscript "Reference frames in learning from maps and navigation" accepted in Psychological Research

08.11.14 Manuscript "When in doubt follow your nose: a wayfinding strategy" accepted in Frontiers in Psychology
01.10.2014: Stephan de la Rosa organized a Symposium on Social Interaction at KogWis

28.9.2014: Dong-Seon Chang organized Doctoral symposium at KogWis

20.9.2014: Aurelie Saulton won 2nd prize in the Science Slam at the Visions in Science meeting

31.3.2014: Interview with Stephan and Tobi in ORF Radiokolleg


Social and Spatial Cognition

Interactions with the social and spatial environment are fundamental components of daily life. Previous research has mainly examined the two domains separately; however, social and spatial processes interact. For example, joint actions happen within space, and social factors such as personal distance are expressed spatially. The goal of this group is to examine social and spatial cognition and, importantly, the relation between the two. Specifically, we investigate the representations underlying navigation, action perception and execution, and the representational overlap between human bodies, social properties, and space.

We examine these questions using experiments that closely mimic real-life conditions. Virtual reality and motion capture allow for realistic stimulus environments, such as virtual humans or buildings, with which participants continuously interact (see the videos of learning an environment and of interacting with an avatar). In combination with behavioral psychophysics, fMRI, and cognitive modeling, we work toward characterizing the cognitive processes underlying everyday human behavior.

Main research areas


Social Cognition

The main focus of our social cognitive research is on understanding social cognition in situations akin to real-life social interactions. We therefore use virtual and augmented reality to create interactive experiments; wide-screen displays to examine social cognitive processes across the entire visual field; action morphing to create highly controllable yet natural-looking social stimuli; and tasks designed to target social cognitive processes employed in real life (e.g. telling different actions apart).
We are interested in the processing of non-verbal social information from facial and bodily cues:
Face recognition. Humans are social beings, and recognizing socially relevant information from other humans is critical for social functioning, e.g. when judging the emotional state of another person during a discussion. Facial expressions provide powerful non-verbal cues about the cognitive and emotional state of another person. Although humans seemingly have little difficulty using these visual cues, the underlying psychological processes are complex and far from fully understood. In this project we examine the social information processing of faces. [more]
Action recognition. Telling two actions apart (action discrimination) is important for everyday social life, for example when generating an appropriate response to an observed action or estimating the cognitive states of others (theory of mind). We investigate this largely neglected but very important human ability using novel in-house developed techniques, e.g. action morphing (see app below), and methods, e.g. action adaptation. We identify the functional and anatomical properties of action recognition processes (e.g. sensitivity to dynamic information, motor input, and social context) using behavioral paradigms and imaging techniques. [more]
Cognition in social interactions. The ultimate goal of social cognitive research is to understand human social behavior in real life, e.g. when humans interact with each other, such as when shaking hands. So far, methodological constraints, e.g. poor experimental control over interactive real-life situations, have prevented researchers from examining the action-perception couplings of social behavior in interactive scenarios. We overcome this impasse by using virtual reality to create life-size experimental setups that allow participants to interact naturally with an avatar within a fully computer-controlled experimental situation (see movie above). Our goal is to understand the neural mechanisms of human social behavior when probed under realistic conditions. [more]
(Action morph demo: use the slider to morph between actions; drag the viewport with the mouse to rotate; if stopped, click to restart.)

Spatial Cognition

Learning and navigating complex environments. Knowledge about complex city and building environments acquired from navigation is represented in multiple local reference frames corresponding, for example, to streets or corridors. This local information also incorporates local geometry as defined by walls. In contrast, (prior) map exposure yields a single, map-up oriented reference frame that is used for configurational or survey tasks within both novel and highly familiar environments, but not for selecting routes. The underlying route knowledge focuses on turns and is enabled by a default strategy of walking straight. Route guidance that provides unambiguous turning information should therefore yield enhanced wayfinding performance. [more]
Spatial integration. Rooms, buildings, and cities cannot be grasped in a single glance. To apprehend their overall structure, one has to integrate multiple experiences, which relate to each other via movement trajectories or common landmarks. We showed that the reference frames used for integration depend on the learning time, the available movement trajectories, and the knowledge of where the information will be used afterwards. Integration of city streets happens in an incremental process conducted from one's current location. [more]
Self-movement perception and representation of 3D space. Humans locomote along the horizontal plane; within buildings, however, they also have a vertical position. This project examines whether vertical movements are perceived in the same way as horizontal movements, and how spaces that also extend vertically, such as buildings or cupboards, are represented in memory. The project therefore bridges perception and memory. [more]
Verbal and non-verbal codes in spatial memory. In which format do humans represent their environment? Our results indicate that room locations and routes learned from navigation, as well as from maps, are encoded not only spatially but also in a verbal code, e.g. as descriptions. The usefulness of verbal encoding, however, seems to depend on already established long-term associations between corresponding verbal and spatial elements, e.g. the label "T-intersection" and a spatial representation of a T-intersection. [more]
Theoretical foundations of spatial cognition. This research is concerned with how space can be represented and processed, both theoretically and in light of current empirical data. We argue that simple egocentric and allocentric conceptions are not sufficient to capture complex situations, and we suggest combining them into complex representations. We have also proposed a theory of self-localisation and of route and survey navigation in environmental spaces. [more]

Social-spatial interactions

Body space. Bodies are both physical and social entities. Distinguishing one's body from the world is probably the most basic form of self-representation. Humans use visual and somatosensory cues to estimate the location and shape of their body and body parts. Our results show that prior methods used to distinguish between visual and somatosensory representations are not specific to body parts, as previously assumed, but also apply to spatial objects and parallel findings in memory for spatial object arrangements. Another line of research is concerned with the full-body illusion, in which participants can feel that a virtual body in front of them is actually their own body. Our results show that, contrary to earlier conceptions, this feeling of body identity and the subjective body location go hand in hand and are not separable. In the further course of this project, we aim to examine the effects of the visual full-body illusion on somatosensory body-part estimation. [more]
Collaborative spatial problem solving. Almost all human-made objects involve collaborative spatial problem solving during their construction or production. We aim to examine the underlying cognitive processes, targeting strategies, memory requirements, shared representations, and spatial transformations. First results on collaborative search (e.g., firefighters searching a building for victims) indicate that collaborative search, compared with individual search, may yield qualitatively better, but not necessarily more efficient, performance. We are currently examining the interaction of working memory and search strategy and extending the problem to a collaborative 3D puzzle task. [more]
Social influences on space perception. In this project we examine potential social influences on room and distance perception. Results show cultural differences in the estimation of rooms. Participants also estimate virtual characters and objects facing them as closer than those facing away from them. Most humans and animals perceive and interact with the world via their front. We are currently examining whether the observed asymmetries arise from social processes, indicated by perceived animacy and relations to attention and social distance, or from non-social object processing. [more]


Work from the group was funded by the EU projects ‘TANGO’, ‘JAST’, and ‘WAYFINDING’, by the DFG project ME 3476/2, the MPG-FHG project 'CoAvatar', and by the Humboldt Foundation.

Selected Publications

Meilinger T, Frankenstein J and Bülthoff HH (October-2013) Learning to navigate: Experience versus maps. Cognition 129(1) 24-30.
Frankenstein J, Mohler BJ, Bülthoff HH and Meilinger T (February-2012) Is the Map in Our Head Oriented North? Psychological Science 23(2) 120-125.
Streuber S, Knoblich G, Sebanz N, Bülthoff HH and de la Rosa S (October-2011) The effect of social context on the use of visual information. Experimental Brain Research 214(2) 273-284.

Last updated: Friday, 23.02.2018