Group leaders

Stephan de la Rosa
Phone: +49 7071 601 606
Fax: +49 7071 601 616
Email: delarosa[at]
Tobias Meilinger
Phone: +49 7071 601 615
Fax: +49 7071 601 616
Email: tobias.meilinger[at]



Group Members

Five most recent Publications

25. Hinterecker T, Pretto P, de Winkel KN, Karnath H-O, Bülthoff HH and Meilinger T (October-2018) Body-relative horizontal–vertical anisotropy in human representations of traveled distances. Experimental Brain Research 236(10) 2811-2827.
24. Strickrodt M, Bülthoff HH and Meilinger T (September-2018) Memory for navigable space is flexible and not restricted to exclusive local or global memory units. Journal of Experimental Psychology: Learning, Memory, and Cognition, Epub ahead of print.
23. Hinterecker T, Leroy C, Kirschhock M, Zhao M, Butz MV, Bülthoff HH and Meilinger T (August-2018) Spatial memory for vertical locations. Journal of Experimental Psychology: Learning, Memory, and Cognition, Epub ahead of print.
22. Weller M, Takahashi K, Watanabe K, Bülthoff HH and Meilinger T (August-2018) The Object Orientation Effect in Exocentric Distances. Frontiers in Psychology 9(1374) 1-7.
21. de la Rosa S, Fademrecht L, Bülthoff HH, Giese MA and Curio C (August-2018) Two Ways to Facial Expression Recognition? Motor and Visual Information Have Different Effects on Facial Expression Recognition. Psychological Science 29(8) 1257-1269.



13.6.2016 Manuscript accepted: Meilinger, T.,* Strickrodt, M.,* & Bülthoff, H.H. (accepted). Qualitative differences between environmental and vista space memory are caused by the separation of space, not by movement or successive perception. Cognition. [*equal contributions]
21.4.2016 Manuscript published: Meilinger, T. & Watanabe, K. (2016). Multiple Strategies for Spatial Integration of 2D Layouts within Working Memory. PLoS ONE, 11, 1-22.
20.4.2016 Tobi finished his Habilitation.
31.3.2016 Our manuscript "Visual adaptation dominates bimodal visual-motor action adaptation" got published in Scientific Reports.
8.2.2016 Manuscript published: de la Rosa, S., Schillinger, F., Bülthoff, H., Schultz, J., & Uludag, K. (accepted). fMRI adaptation between action observation and action execution reveals cortical areas with mirror neuron properties in human BA 44/45. Frontiers in Human Neuroscience.
1.2.2016 Manuscript published: de la Rosa, S., Ekramnia, M., & Bülthoff, H.H. (accepted). Action recognition and movement direction discrimination tasks are associated with different adaptation patterns. Frontiers in Human Neuroscience.
1.2.2016 Paper published: Meilinger, T., Schulte-Pelkum, J., Frankenstein, J., Hardiess, G., Mallot, H.A., & Bülthoff, H.H. (2016). How to best name a place? Facilitation and inhibition of route learning due to descriptive and arbitrary location labels. Frontiers in Psychology, 7, 1-7.
29.1.2016 Paper published: Jung, E., Takahashi, K., Watanabe, K., de la Rosa, S., Butz, M.V., Bülthoff, H.H. & Meilinger, T. (accepted). The Influence of Human Body Orientation on Distance Judgments. Frontiers in Psychology.
8.1.2016 Manuscript published: Fademrecht, L., Bülthoff, I., & de la Rosa, S. Action recognition in the visual periphery. Journal of Vision.
24.1.2016 Manuscript accepted: Hinterecker, T., Knauff, M., Johnson-Laird, P. N. (accepted). Modality, probability, and mental models. Journal of Experimental Psychology: Learning, Memory, and Cognition.
6.1.2016 Paper published: Saulton, A., Longo, M. R., Wong, H. Y., Bülthoff, H. H., & de la Rosa, S. (2016). The role of visual similarity and memory in body model distortions. Acta Psychologica, 164, 103–111.
24.12.2015 Paper published: Strickrodt, M., O'Malley, M., & Wiener, J. M. (2015). This Place Looks Familiar—How Navigators Distinguish Places with Ambiguous Landmark Objects When Learning Novel Routes. Frontiers in Psychology, 6.
1.12.2015 Paper published: Chang, D.-S., Burger, F., & Bülthoff, H.H. (2015). The Perception of Cooperativeness Without Any Visual or Auditory Communication. i-Perception, 6(6), 1-6. doi: 10.1177/2041669515619508
01.06.15: Manuscript accepted: Meilinger, T.,* Frankenstein, J.,* Simon, N., Bülthoff, H.H., & Bresciani, J.-P. (accepted). Not all memories are the same: situational context influences spatial recall within one's city of residency. Psychonomic Bulletin and Review. [*equal contributions]
01.02.2015: Manuscript 'Objects exhibit body model like shape distortions' of Aurelie Saulton got accepted.
02.12.2014: Symposium at ECVP 2015 on 'Interactive Social Perception and Action' accepted (organizers: de la Rosa, Canal-Bruland, Bülthoff)
02.12.2014: Dong-Seon Chang gave his science slam in front of the German Minister of Education and Research.
01.12.2014: Dong-Seon Chang won the National German Science Slam competition.

21.11.2014: Manuscript about 'Categorization of social interactions' accepted in Visual Cognition.
19.11.2014: Dong-Seon Chang won the 10th Science Slam in Stuttgart (on national TV).
10.11.14 Manuscript "Reference frames in learning from maps and navigation" accepted in Psychological Research

08.11.14 Manuscript "When in doubt follow your nose: a wayfinding strategy" accepted in Frontiers in Psychology
01.10.2014: Stephan de la Rosa organized a Symposium on Social Interaction at KogWis

28.9.2014: Dong-Seon Chang organized Doctoral symposium at KogWis

20.9.2014: Aurelie Saulton won 2nd prize in the Science Slam at the Visions in Science meeting

31.3.2014: Interview with Stephan and Tobi in ORF Radiokolleg


Social Cognition

Humans often have little difficulty inferring the actions, emotions, and cognitive states of another person. Visual information is critical for these inferences. Our work aims at elucidating the relatively unknown cognitive processes that support these social inferences from visual information. Moreover, we are interested in how this information facilitates social interaction. For this reason, we aim to understand social cognitive processes under more natural social interaction conditions.
Previous action recognition research has focused on explaining how the visual system might use local visual features to derive a global action percept. However, one important aspect of action understanding has received little attention: how an action percept is filled with semantic meaning. This process of mapping visual information onto semantic meaning is important for everyday social functioning because it allows the observer to interpret the same visual pattern in different ways. For example, seeing someone laughing after another person has told a joke has the meaning of 'laughing with someone'; in contrast, seeing someone laugh after another person fell has the meaning of 'laughing about someone'. The cognitive structure that allows this flexible recognition of actions is largely unknown. We examine this cognitive structure using computational models of actions that allow action morphing.
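In its simplest form, the action morphing used in such computational models can be thought of as a weighted blend of time-aligned motion trajectories. The sketch below is purely illustrative; the function name and toy joint-angle data are hypothetical, and the lab's actual action models are considerably more sophisticated:

```python
import numpy as np

def morph_actions(action_a, action_b, weight):
    """Linearly blend two time-aligned joint-trajectory arrays.

    action_a, action_b: arrays of shape (frames, joints) holding joint
    angles for two time-aligned actions. weight: 0.0 reproduces
    action_a, 1.0 reproduces action_b, intermediate values morph.
    """
    a = np.asarray(action_a, dtype=float)
    b = np.asarray(action_b, dtype=float)
    assert a.shape == b.shape, "actions must be time-aligned"
    return (1.0 - weight) * a + weight * b

# Toy example: blend two hypothetical 'actions' (3 frames, 2 joints) halfway.
wave = np.array([[0.0, 0.0], [30.0, 10.0], [60.0, 20.0]])
punch = np.array([[0.0, 0.0], [10.0, 50.0], [20.0, 100.0]])
halfway = morph_actions(wave, punch, 0.5)
```

Varying the morph weight in small steps yields a stimulus continuum between two actions, which is what makes such morphs useful for probing where observers draw category boundaries.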
Influential theories suggest that 'perception for action' and 'perception for recognition' might rely on different cognitive-perceptual mechanisms and that interactive experimental scenarios ('second-person perspective') are essential for understanding the social-cognitive processes of social interactions. These theoretical accounts stand in contrast to the many studies in which participants are passive observers of actions rather than active agents. To better understand action recognition in real-life situations, we examine action recognition in interactive scenarios. By means of virtual reality we create virtual interaction partners (avatars) with which participants interact. This approach allows high experimental control over the stimulus while at the same time enabling participants to interact as naturally as possible with another person.

Face recognition

Humans are social beings, and recognizing socially relevant information from other humans is critical for social functioning, e.g. when judging the emotional state of another person during a discussion. Facial expressions provide powerful non-verbal cues about the cognitive and emotional state of another person. Although humans seemingly have little difficulty using these visual cues, the underlying psychological processes are complex and far from being fully understood. In this project we examine social information processing of faces.
One key feature of facial expressions is that they are inherently dynamic. Surprisingly, much of the previous research has examined visual recognition of static facial expressions. In this project we focus on examining dynamic facial expressions and how visual and motor cues contribute to the recognition of facial expressions.
Kathrin Kaulard, Stephan de la Rosa
Cristobal Curio, Martin Giese, Johannes Schulz, Christian Wallraven
- de la Rosa, S., Giese, M., Bülthoff, H. H., & Curio, C. (2013). The contribution of different cues of facial movement to the emotional facial expression adaptation aftereffect. Journal of Vision, 13(1), 23. doi:10.1167/13.1.23

Action recognition

Actions provide another source of information that informs observers about the emotional and cognitive state of another person. In this project we examine how observers are able to understand an action based on the visual information of that action. Several variables are deemed important for the recognition of an action, namely dynamic action information, the observer's motor system, and the social context. We examine these and other factors using behavioral paradigms and fMRI.
Stephan de la Rosa, Dong-Seon Chang
Cristobal Curio, Nick Barraclough, Martin Giese, Johannes Schulz
- de la Rosa, S., & Bülthoff, H.H. (in press). Motor-visual neurons and action recognition in social interactions. Commentary on 'Mirror neurons: From origin to function'. Behavioral and Brain Sciences.

Cognition in social interactions

Humans are social beings, and physically interacting with other people (social interactions, e.g. shaking hands) is part of everyone's daily routine. Surprisingly little behavioral research has examined how humans use visual information to gain knowledge about the social actions of other people. In this project we aim to further our knowledge in this relatively novel field. Among other approaches, we use virtual reality to examine the processes involved in the visual recognition of social interactions, including when people actively engage in a social interaction.
Stephan de la Rosa, Stephan Streuber, Sarah Mieskes
Günther Knoblich, Nathalie Sebanz, Betty Mohler, Shimon Ullman, Liav Asif, Hong Yu Wong, Cristobal Curio
- de la Rosa, S., Mieskes, S., Bülthoff, H. H., & Curio, C. (2013). View dependencies in the visual recognition of social interactions. Frontiers in Psychology, 4. doi:10.3389/fpsyg.2013.00752
- Streuber, S., Knoblich, G., Sebanz, N., Bülthoff, H. H., & de la Rosa, S. (2011). The effect of social context on the use of visual information. Experimental Brain Research, 214(2), 273–84. doi:10.1007/s00221-011-2830-9
- Streuber, S., Mohler, B. J., Bülthoff, H. H., & de la Rosa, S. (2012). The Influence of Visual Information on the Motor Control of Table Tennis Strokes. Presence, 21, 281-294.
- de la Rosa, S., Choudhery, R. N., Bülthoff, H.H., Asif, L., Ullman, S., & Curio, C. (under review). Visual recognition of social interactions.
Last updated: Friday, 23.02.2018