Contact

Dr. Stephan de la Rosa

Address: Max-Planck-Ring 8
72076 Tübingen
Room number: 113
Phone: 07071 601 606
Fax: 07071 601 616
E-mail: delarosa

 


Stephan de la Rosa

Position: Project Leader  Department: Bülthoff

I lead the Social and Spatial Cognition Group together with Tobias Meilinger. For more information and demos of my work, visit http://www.stephandelarosa.de.

 

 

Overview

(a more detailed description of my research can be found under the 'Projects' tab):

 

Research

Teaching

Supervision

Other activities

 

 

 

 

Research

From lab to real life

The ultimate goal of psychological research is to understand and explain everyday human behavior. However, most experimental conditions under which human behavior is assessed bear little resemblance to everyday life. While there is good reason for this (it is what warrants statistical and logical inferences from the data), the question remains how to close this gap between lab and real life.

I bridge the gap between lab and real life by using virtual, augmented, and mixed reality. These technologies allow the creation of complex stimuli with which participants can behave naturally in the experimental environment, while still providing the degree of control required for scientific reasoning. I use rigorous psychophysical methods to maintain high internal validity. As a result, the combination of virtual reality and psychophysical methods provides an experimental approach with both high internal and high ecological validity.

 

Social perception, cognition, and action

Humans are social beings, and interacting with others, for example when shaking hands, is an integral part of human life. For most of us, participating in social interactions is effortless. Despite this apparent ease, the underlying cognitive processes are highly complex. With my research I try to understand the perceptual-cognitive processes that enable humans to participate in real-life social interactions. My aim is therefore to examine social-cognitive processes under conditions as close to real life as possible. To achieve this, I use virtual reality and computational models to create interactive experimental setups that allow participants to behave naturally and thereby let me examine perception and action in social interactions.

 

 

 

Social Perception, Cognition, and Action

Social Action and Social Interaction Recognition

 

Humans are social beings. Bodily interaction with another person (social interaction) is an integral part of human life. We investigate the perceptual and cognitive processes underlying the recognition of social actions and the processing of social information in social interactions. We use 3D models of actions, virtual reality, and motion capture to create stimuli and experimental paradigms that provide high experimental control and allow natural interaction.

 

In our work we focus on social actions (e.g. handshakes). Our research aims to understand social-cognitive processes under natural conditions. For example, we examine the visual processes underlying action recognition both when participants passively observe actions and when they act as agents in interactive settings.

 

Our current research questions concern the role of visual information in social interactions (Stephan Streuber, Maiken Lubkoll, Yannik Wahn); the contribution of the motor system to action recognition (Frieder Schillinger, Ylva Ferstl); the visual recognition of social interactions (Sarah Mieskes); the role of peripheral vision in action recognition (Laura Fademrecht, Judith Nieuwenhuis); the cognitive hierarchy of action recognition; the association of visual action information with semantic meaning (Dong-Seon Chang); and the influence of context on social action recognition.


 

We use various methods to address these research questions, including behavioral experiments, virtual reality, motion capture, and fMRI.

 

Collaborators: Cristobal Curio, Isabelle Bülthoff, Heinrich Bülthoff, Martin Giese, Nathalie Sebanz, Günther Knoblich, Hong-Yu Wong, Johannes Schultz.

 

 

 

Body perception

 

Knowledge about the size and shape of our body is essential for perception and action. We examine the cognitive representation of the spatial structure of the human body and the plasticity of this representation using virtual reality (Aurelie Saulton). In collaboration with Martin Dobricki we examine the cognitive structure of bodily self-perception and body representation using full-body illusions.

 


 

Collaborators: Betty Mohler, Hong-Yu Wong

 

 

 

 

Emotional and Conversational Face Recognition

 

Facial expressions provide rich information about the cognitive and emotional states of another person. Although facial expressions are inherently dynamic, relatively little is known about the perceptual-cognitive processes underlying facial expression recognition. We are interested in the underlying representations of dynamic facial expressions. Kathrin Kaulard examined the psychological processes involved in a novel class of facial expressions, namely conversational expressions. Among other things, we are using adaptation paradigms and 3D facial models to examine the processes involved in emotional facial expression recognition (Alexander Bauer).

 


 

Collaborators: Cristobal Curio, Christian Wallraven, Martin Giese.

 

 

 

Teaching

Teaching experience

I have taught several courses (e.g. statistics, perception, human-computer interaction) at both the undergraduate and graduate level at the University of Toronto (Toronto, Canada), the Graduate School of Neural & Behavioural Sciences / International Max Planck Research School (Tübingen, Germany), and the University of Tübingen.

 

 

 

Supervision

Supervised students

 

  • Stephan Streuber (Ph.D., graduated 2013)
  • Dong-Seon Chang (Ph.D. candidate)
  • Aurelie Saulton (Ph.D. candidate)
  • Frieder Schillinger (Diploma student, graduated 2011)
  • Sarah Mieskes (Master's student, graduated 2012)
  • Alexander Bauer (Bachelor's student, graduated 2013)
  • Ylva Ferstl (graduated 2015)
  • Judith Nieuwenhuis (research intern & Master's student)
  • George Fuller (research intern)

 

I am a co-supervisor for the following students:

  • Kathrin Kaulard (Ph.D., graduated 2014)
  • Laura Fademrecht (Ph.D. candidate)

 

 

 

Other activities

Ombudsman

Nelson Totah, Michael Kerger, and I are the current ombudsmen for the Max Planck Institute for Biological Cybernetics. Our role is to mediate in all kinds of work-related sensitive matters. Please feel free to approach us personally or by e-mail. All information will be kept strictly confidential.

 

 

Management of participant recruitment

I have developed an experiment database (web interface) for the recruitment of participants at the University of Toronto (using HTML, PHP, and MySQL), as well as the participant database for the Department of Human Perception, Cognition and Action at the Max Planck Institute (https://experiments.tuebingen.mpg.de).

 

Sylvain Perenes and Sebastiean Gatti have recently updated the experiment database at the MPI. This version is now up and running at the address above.

 

Streaming Motion Capture into Psychtoolbox 3

We have developed a processing pipeline to display motion capture data, as recorded by Moven suits (XSENS), within the Psychophysics Toolbox (Psychtoolbox-3).
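As a rough illustration of the receiving end of such a streaming pipeline, here is a minimal Python sketch (the actual display side runs in Psychtoolbox-3 under MATLAB). The port number, the number of streamed body segments, and the packet layout are assumptions made for illustration only; they are not the real MVN network protocol.

```python
# Minimal sketch of the receiving end of a motion capture streaming pipeline.
# Assumption: the suit software broadcasts one pose per UDP datagram as
# NUM_JOINTS x 3 little-endian floats (x, y, z per joint). The real MVN
# network protocol differs; this only illustrates the general structure.

import socket
import struct

UDP_IP = "0.0.0.0"        # listen on all network interfaces
UDP_PORT = 9763           # hypothetical streaming port
NUM_JOINTS = 23           # assumed number of streamed body segments
PAYLOAD = NUM_JOINTS * 3  # floats per pose

def receive_poses():
    """Yield one pose, i.e. a list of (x, y, z) joint positions, per datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((UDP_IP, UDP_PORT))
    while True:
        data, _addr = sock.recvfrom(4096)
        if len(data) < PAYLOAD * 4:
            continue  # ignore packets that do not match the assumed layout
        values = struct.unpack("<%df" % PAYLOAD, data[:PAYLOAD * 4])
        yield [tuple(values[i:i + 3]) for i in range(0, PAYLOAD, 3)]

if __name__ == "__main__":
    for pose in receive_poses():
        # In the actual pipeline each pose would be forwarded to the
        # Psychtoolbox rendering loop and drawn as a stick figure or avatar.
        print("first joint position:", pose[0])
```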

Overview

My research projects examine how people process, represent, and understand visual information pertaining to social actions, bodies, and faces. I am working on several projects:

 

 

Social action and social interaction recognition

  • Social context sensitivity of action recognition processes
  • Action representation
  • Social interaction categorization
  • Perception and action in social interactions

 

 

Body representations

  • Measuring the representation of the human body
  • Bodily self-perception

 

 

Emotional and conversational facial expression recognition

  • Dynamic Face Adaptation
  • Motor-visual cross-talk

 

 

Visual recognition & methods

  • Dissociating different levels of object recognition
  • Displaying motion capture data within Psychtoolbox 3

 

 


 

 

Social Action and Social Interaction Recognition

Previous research efforts have explained action recognition in a bottom-up fashion, e.g. by identifying the visual cues important for action recognition. However, the process of associating visual action information with semantic action knowledge (action recognition) is flexible and under top-down control. In our work we try to describe and better understand this flexible mapping of visual information onto semantic knowledge. We examine the influence of social context, the nature of semantic representations, and the role of low-level visual information in the fovea and periphery to better understand how humans are able to understand actions so easily.

 

Social context sensitivity of action recognition processes

Actions do not occur out of the blue but are embedded within a social context. Importantly, the meaning of an action changes depending on the context in which it is embedded. The distinction between 'laughing at someone' and 'laughing with someone' illustrates how important social context is for the interpretation of an action. Here we demonstrate that social context modulates action recognition mechanisms in a top-down fashion, providing the first evidence that action recognition is under top-down control. Specifically, action recognition mechanisms are sensitive to the action goal or intention implied by the social context rather than to the visual action information alone.

de la Rosa S, Streuber S, Giese M, Bülthoff HH (2014)

 

Action categorization and action representations

In this project we examine the cognitive representations of social actions (i.e. physical actions directed towards another person). In particular, we are interested in the cognitive processes that support the human ability to tell actions apart (action categorization). We examine to what degree the action representations underlying categorization are sensitive to motion or semantic information (see Dong-Seon Chang). An important method for this investigation is action morphing, which allows the creation of action stimuli that cross semantic action category boundaries.
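To illustrate the general idea behind action morphing (a generic sketch, not the specific algorithm used in this project), two time-aligned motion-capture recordings can be blended by weighted interpolation of corresponding poses; varying the weight produces a stimulus continuum that gradually crosses a semantic category boundary.

```python
# Toy illustration of action morphing by linear pose interpolation.
# Assumption: the two actions are already temporally aligned and stored as
# arrays of joint positions per frame; real morphing pipelines additionally
# handle time warping and joint-angle interpolation.

import numpy as np

def morph_actions(action_a, action_b, weight):
    """Blend two aligned actions of shape (n_frames, n_joints, 3).

    weight = 0.0 returns action A, weight = 1.0 returns action B.
    """
    assert action_a.shape == action_b.shape
    return (1.0 - weight) * action_a + weight * action_b

# Example: a five-step morph continuum between two placeholder actions
# (random arrays standing in for motion-capture recordings).
handshake = np.random.rand(120, 23, 3)   # 120 frames, 23 joints, xyz
high_five = np.random.rand(120, 23, 3)
continuum = [morph_actions(handshake, high_five, w) for w in np.linspace(0.0, 1.0, 5)]
print([blend.shape for blend in continuum])
```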

 

We also examine to what degree the visual action representations of individual actions overlap with those of social interactions, and whether social interactions are encoded in a view-dependent manner. Our results indicate that social interactions are indeed encoded in a view-dependent manner.

 

Action recognition is often understood as associating visual action information with a single action interpretation. A largely unanswered question is whether visual information can be associated with multiple action interpretations. In this project we examine the different levels of cognitive representation in social interaction recognition. We use a free categorization task to examine whether, and which, social interactions humans perceive as similar. Furthermore, we try to determine the factors underlying this categorization. Our results show that the physical pattern of a social action is associated with several action interpretations. Moreover, a social action is recognized faster at its more general (i.e. basic) level than at its more specific (i.e. subordinate) level of interpretation. For example, a handshake is recognized faster as a greeting than as a handshake.

Much visual action information falls within the visual periphery. For example, while driving, visual action information (e.g. from people on the sidewalk) typically falls into the periphery. How well are we able to recognize actions in the periphery, and what kind of action information can we derive from it? Laura Fademrecht is addressing these and related questions in her Ph.D. research.

Selected publications:

de la Rosa S, Mieskes S, Curio C, Bülthoff HH (2013)

de la Rosa S, Choudhery R, Curio C, Assif L, Ullman S, Bülthoff HH (2015)

 

 

Perception and action in social interactions

Humans usually do not passively observe social interactions; they actively participate in them. Active participation in physical social interaction requires the interaction partners to coordinate their actions (e.g. when carrying a stretcher). In this project we examine social-cognitive processes in interactive scenarios in which participants actively engage in social actions. We use virtual reality and motion capture techniques to create experimental paradigms that allow natural interaction and provide high experimental control.

 

[Image of the experimental setup]
Do we use human-specific online motor control mechanisms in social interactions? In an ongoing project, we combined virtual reality, motion capture, and large-screen displays to examine whether online motor control depends on the visual appearance of the interaction partner.

 

 



We examine social-cognitive processes in both interactive open-loop and closed-loop scenarios. For example, we were among the first to examine social-cognitive processes in a closed-loop scenario that allowed natural full-body interactions. We used this paradigm to examine which of the many environmental visual cues people use in action coordination; for example, they can use visual information about the interaction partner's body. Participants played table tennis in the dark while we manipulated the available visual information. We found that the information people use about others during (natural, closed-loop) social interactions depends on the social context (i.e. whether people cooperate or compete).

 

Streuber S, Knoblich G, Sebanz N, Bülthoff HH and de la Rosa S (2011).


 

 

Body Representation

 

Measuring the representation of the human body

We examined the body specificity of common tasks (localization and template matching tasks) used to assess the cognitive representation of the human body. The results indicate that these tasks yield similar results for body and non-body items (e.g. objects). The findings suggest that results of localization and template matching tasks should be interpreted carefully with respect to body-specific effects.

Saulton A, Dodds T, Bülthoff HH, de la Rosa S (2015).

 

Bodily self-perception

In collaboration with Martin Dobricki we measure the cognitive dimensions of bodily self-perception using the full-body illusion. We found evidence that bodily self-identification, space-related self-perception (spatial presence), and agency are constituent components of bodily self-perception.

Dobricki M, de la Rosa S (2013)

 

 


 

 

Emotional and conversational face recognition

 

 

Dynamic Face Adaptation

Faces provide rich information about the cognitive and emotional states of the interaction partner through facial expressions (e.g. a happy face). Previous research examined the cognitive representation of facial expressions mainly using static images. In this project we are interested in the degree to which facial expression recognition mechanisms are tuned to different sources of dynamic facial information. In particular, we were interested in the tuning of facial expression recognition to rigid head motion and to the movement of facial features (e.g. the mouth). Our results from a visual adaptation experiment indicate that facial expression recognition mechanisms are differentially sensitive to rigid head motion and intrinsic facial features. More specifically, we found that adaptation effects induced by head movement depend on the presence of intrinsic facial movement.

de la Rosa S, Giese MA, Bülthoff HH and Curio C (2013)

 

 

Motor-visual cross-talk

In this project we examine the influence of the motor system on the perception of actions. We use behavioural methods as well as fMRI to examine the motor-visual linkage and its importance for action understanding.

Schillinger F, de la Rosa S, Schultz J and Uludag K (2010)

 

 


 

 

Visual recognition and methods

 

 

Dissociating different levels of object recognition

Objects can be recognized at several semantic levels. For example, it has been suggested that one can merely detect the presence of an object without recognizing its identity. However, more recent evidence suggests that detection and explicit recognition are the same. Here we examined whether differences in the technical experimental setup (specifically the monitor refresh rate and, consequently, the minimum possible presentation time) can account for the failure to dissociate detection from explicit recognition. For very high monitor refresh rates (i.e. very short minimum presentation times) we could dissociate detection from explicit recognition. In contrast, we were unable to find behavioural differences between detection and explicit recognition tasks at lower refresh rates (i.e. longer minimum presentation times). The results suggest that detection and explicit recognition are dissociable and that high refresh rates (i.e. very short minimum presentation times) are required to dissociate them.

de la Rosa S, Choudhery RN and Chatziastros A (2011)
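To make the role of the refresh rate concrete: on a conventional monitor the shortest possible presentation time is one frame, i.e. the reciprocal of the refresh rate. The illustrative numbers below are not the exact hardware values from the study.

```python
# Shortest possible stimulus presentation = duration of one frame = 1 / refresh rate.
# Illustrative refresh rates only; the monitors used in the study may differ.
for refresh_hz in (60, 100, 200):
    frame_ms = 1000.0 / refresh_hz
    print(f"{refresh_hz:>3} Hz refresh -> shortest presentation ~ {frame_ms:.1f} ms")
```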

 

 

Displaying motion capture data within Psychtoolbox 3

We are developing a pipeline to display motion capture data (e.g. from MVN Studio) within Psychtoolbox 3.

 

My research contributes to the EU-funded research project TANGO.

Education

2008: Ph.D. Psychology
2003: Master of Arts in Psychology
2002: Diploma in Geography with Computer Science and Sociology as minors


Talks (36):

Meilinger T and de la Rosa S (June-25-2015) Abstract Talk: Human navigation in virtual large scale spaces, 6th Bernstein Sparks Workshop: Multi-modal closed-loop stimulation and virtual realities, Tutzing, Germany 4.
Chang D-S and de la Rosa S (May-21-2015) Invited Lecture: Action Recognition & Social Interaction: New Experimental Paradigms, Rutgers University, Center for Cognitive Science, New Brunswick, NJ, USA.
Chang D-S, Burger F, Bülthoff HH and de la Rosa S (March-24-2015) Abstract Talk: Differences in Behavior and Judgments during interaction with a rope without seeing or hearing the partner, Symposium on Reciprocity and Social Cognition, Berlin, Germany.
Foster C, Takahashi K, Kurek S, Horeis C, Bäuerle MJ, de la Rosa S, Watanabe K, Butz MV and Meilinger T (March-10-2015) Abstract Talk: Looking at me? Influence of facing orientation of avatars and objects on distance estimation, 57th Conference of Experimental Psychologists (TeaP 2015), Hildesheim, Germany 82.
de la Rosa S, Lubkoll M, Saulton A, Meilinger T, Bülthoff HH and Cañal-Bruland C (March-10-2015) Abstract Talk: Motor planning and control: You interact faster with a human than a robot, 57th Conference of Experimental Psychologists (TeaP 2015), Hildesheim, Germany 60.
de la Rosa S (September-28-2014) Keynote Lecture: Cognition of human actions: from individual actions to interactions, 12th Biannual Conference of the German Cognitive Science Society (KogWis 2014), Tübingen, Germany, Cognitive Processing, 15(Supplement 1) S11.
de la Rosa S (September-28-2014) Invited Lecture: Perceptual cognitive processes underlying the recognition of individual and interactive actions, 12th Biannual Conference of the German Cognitive Science Society (KogWis 2014), Tübingen, Germany, Cognitive Processing, 15(Supplement 1) S11.
de la Rosa S (September-24-2014) Abstract Talk: Visuell-kognitive Repräsentationen in der Erkennung von dynamischen sozialen Handlung, 49. Kongress der Deutschen Gesellschaft für Psychologie (DGPs 2014), Bochum, Germany 478.
Chang D-S, Bülthoff HH and de la Rosa S (September-2014) Abstract Talk: Action recognition and the semantic meaning of actions: how does the brain categorize different social actions?, 12th Biannual Conference of the German Cognitive Science Society (KogWis 2014), Tübingen, Germany, Cognitive Processing, 15(Supplement 1) S95.
de la Rosa S, Streuber S and Bülthoff HH (August-2014) Abstract Talk: The influence of context on the visual recognition of social actions, 14th Annual Meeting of the Vision Sciences Society (VSS 2014), St. Pete Beach, FL, USA, Journal of Vision, 14(10) 1469.
de la Rosa S and Bülthoff HH (August-2014) Keynote Lecture: What are you doing? Recent advances in visual action recognition research, 14th Annual Meeting of the Vision Sciences Society (VSS 2014), St. Pete Beach, FL, USA 9.
de la Rosa S (June-29-2014) Invited Lecture: The Importance of Action Context in the Recognition and Observation of Human Actions, 6th International Conference on Brain and Cognitive Engineering (BCE 2014), Tübingen, Germany.
Chang D-S and de la Rosa S (March-2014) Invited Lecture: Beyond Action Recognition: Making Social Inferences from Action Observation, Interdisciplinary College Spring School 2014: Cognition 3.0 - the social mind in the connected world, Günne, Germany.
de la Rosa S, Streuber S, Giese M, Bülthoff HH and Curio C (July-2013) Abstract Talk: Visual adaptation aftereffects to actions are modulated by high-level action interpretations, 13th Annual Meeting of the Vision Sciences Society (VSS 2013), Naples, FL, USA, Journal of Vision, 13(9) 126.
Schultz J, Fernandez Cruz AL, de la Rosa S, Bülthoff HH and Kaulard K (September-2012) Abstract Talk: How are facial expressions represented in the human brain?, 35th European Conference on Visual Perception, Alghero, Italy, Perception, 41(ECVP Abstract Supplement) 38.
Curio C, Giese M, Bülthoff HH and de la Rosa S (September-2012) Abstract Talk: Motor-visual effects in the recognition of dynamic facial expressions, 35th European Conference on Visual Perception, Alghero, Italy, Perception, 41(ECVP Abstract Supplement) 44.
Kaulard K, Wallraven C, de la Rosa S and Bülthoff HH (October-2010) Abstract Talk: Cognitive categories of emotional and conversational facial expressions are influenced by dynamic information, 11th Conference of Junior Neuroscientists of Tübingen (NeNa 2010), Heiligkreuztal, Germany 16.
Kaulard K, Wallraven C, de la Rosa S and Bülthoff HH (August-2010) Abstract Talk: Cognitive categories of emotional and conversational facial expressions are influenced by dynamic information, 33rd European Conference on Visual Perception, Lausanne, Switzerland, Perception, 39(ECVP Abstract Supplement) 157.
de la Rosa S, Choudhery R and Bülthoff HH (August-2010) Abstract Talk: Social interaction recognition and object recognition have different entry levels, 33rd European Conference on Visual Perception, Lausanne, Switzerland, Perception, 39(ECVP Abstract Supplement) 12.
de la Rosa S and Chatziastros A (July-28-2009) Abstract Talk: What is the speed of perceptual processes underlying joint-action recognition?, 3rd Joint Action Meeting (JAM 2009), Amsterdam, The Netherlands 23.
de la Rosa S, Schneider B and Moraglia G (July-25-2008) Abstract Talk: Binocular unmasking is important for the detection, categorization and identification of noise-masked real-life objects, XXIX. International Congress of Psychology (ICP 2008), Berlin, Germany, International Journal of Psychology, 29(3-4) 766.
de la Rosa S (January-28-2008) Invited Lecture: The perception of social interaction, University of Toronto Mississauga: Human Communication Lab Talks, Mississauga, ON, Canada.

Last updated: Monday, 22.05.2017