Betty Mohler, PhD
Tel: 07071 601-217
Fax: 07071 601-616
Martin Dobricki, Dr. Phil.
Tel: 07071 601-215
Fax: 07071 601-616
martin.dobricki[at]




Latest Publications

Mohler B (December-12-2013) Invited Lecture: Perception of indoor and outdoor virtual spaces, 5th Joint Virtual Reality Conference (JVRC 2013), Paris, France.
Mohler B, Raffin B, Saito H and Staadt O: 5th Joint Virtual Reality Conference, 5th Joint Virtual Reality Conference (JVRC '13), 94, ACM Press, New York, NY, USA, (December-2013).
Dobricki M and de la Rosa S (December-2013) The structure of conscious bodily self-perception during full-body illusions. PLoS ONE 8(12) 1-9.
Heydrich L, Dodds TJ, Aspell JE, Herbelin B, Bülthoff HH, Mohler BJ and Blanke O (December-2013) Visual capture and the experience of having two bodies: evidence from two different virtual reality techniques. Frontiers in Psychology 4(946) 1-15.
Meilinger T (November-12-2013) Invited Lecture: Spatial cognition: different processes for different kinds of spaces, Centre National de la Recherche Scientifique: Centre de Recherche Cerveau & Cognition, Toulouse, France.



Perception and Action in Virtual Environments

Betty Mohler sees her self-animated avatar in her head-mounted display.
The goal of the research group "Perception and Action in Virtual Environments" is to investigate human perception, cognition, and behavior in natural environments. To this end, we use realistic virtual worlds (virtual reality, VR) that can be experienced with multiple senses. This allows us both to present sensory stimuli in a controlled environment and to manipulate them in ways that would not be possible in the real world.

Specifically, our state-of-the-art VR technology allows us to alter the visible body, the content of the virtual world, and sensory stimuli (visual, vestibular, kinesthetic, tactile, and auditory) during perception or action. Our research group pursues a range of research questions, all of which relate to measuring human performance in complex, everyday situations, e.g. while walking, driving, communicating, or during spatial orientation. We investigate how an animated avatar representing the user affects spatial perception, communication, and the sense of having a body, or a particular body. We are interested in how other avatars affect performance, emotion perception, learning and training, and the visual and bodily control of locomotion. We also study how people orient themselves in everyday environments such as buildings or cities, and how they represent these environments in memory. In summary, our research group works toward a better understanding of human behavior, perception, and cognition in complex everyday processes. To this end, we use and advance state-of-the-art VR technologies.

Main research areas

Visual body influences perception:
Seeing a virtual avatar in the virtual environment influences egocentric distance estimates; if the avatar is self-animated, the influence is even stronger (Mohler, Presence, 2010). Eye-height influences egocentric space and dimension estimates in virtual environments (Leyrer, APGV 2011). Seeing a virtual character (self or other) impacts subsequent performance of common tasks in virtual environments (McManus, supervised by Mohler, APGV 2011). The size of visual body parts (hand size/arm length) influences size and distance estimates in virtual worlds (Linkenauger, ECVP and VSS 2011). Taken together, these results argue that the body plays a central role in the perception of our surrounding environment.
The role of visual body information in human interaction and communication:
The current state of the art in motion-capture tracking enables scientists to animate avatars with multiple participants' body motions in real time. We have used this technology to conduct experiments investigating the role of body language in successful communication and interaction. We have found that body language is important for successful communication in a word-communication task and that both the speaker's and the listener's body movements (as seen through animated avatars) impact communication (Dodds, CASA, 2010). We have further shown that people move more when they are wearing xSens Moven suits and using large-screen projection technology than when they are wearing Vicon rigid-body tracking objects and viewing the virtual world in a low field-of-view head-mounted display (Dodds, PLoS One 2011). We have also investigated the role of visual information about the interaction partner in task performance in a table-tennis paradigm, and have shown that the social context (competitive or cooperative) mediates the use of visual information about the interaction partner (Streuber, EBR 2011). We have also used motion-capture technology to investigate the use of VR for medical training (Alexandrova, CASA, 2011) and the emotional expression of body language (Volkova, IMRF, 2011).
Self-motion perception while walking and reaching:
We have conducted studies investigating the sensory contributions to encoding walking velocity (visual, vestibular, proprioceptive, efference copy) and have found a new measure for self-motion perception: the active pointing trajectory (Campos, PLoS One, 2009). We have further demonstrated that imagined walking differs from physical walking: participants point in a way that indicates they are not simulating all of their sensory information for walking when imagining it. Additionally, we have investigated humans' ability to detect when they are walking on a curved path and the influence of walking speed on curvature sensitivity. We have found that walking speed does influence curvature sensitivity: at slower walking velocities, people are less sensitive to walking on a curve. We exploited this perceptual knowledge and designed a dynamic gain controller for redirected walking, which enables participants to walk unaided through a virtual city (Neth, IEEE-VR 2011). Finally, we have investigated motor learning for reaching under different viewpoints and different levels of visual realism of the arm and environment, and make suggestions for the use of VR in rehabilitation and motor-learning experiments (Shomaker, Tesch, Buelthoff & Bresciani, EBR 2011).
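The idea behind a velocity-dependent gain controller for redirected walking can be sketched roughly as follows. This is an illustrative sketch only, not the controller published by Neth and colleagues: the threshold values, the two-point linear interpolation, and the function names are placeholder assumptions chosen to show the principle that slower walkers tolerate more imperceptible curvature.

```python
# Hypothetical curvature-detection thresholds (curvature in 1/m) at two
# walking speeds (m/s). Placeholder values, not the measured thresholds.
SLOW_SPEED, SLOW_THRESHOLD = 0.75, 0.13
FAST_SPEED, FAST_THRESHOLD = 1.50, 0.05

def curvature_gain(speed):
    """Return a curvature (1/m) assumed to be below the detection
    threshold at the given walking speed.

    Walkers are less sensitive to path curvature at slow speeds, so more
    redirection can be applied; interpolate linearly between the two
    (speed, threshold) samples and clamp outside that range.
    """
    if speed <= SLOW_SPEED:
        return SLOW_THRESHOLD
    if speed >= FAST_SPEED:
        return FAST_THRESHOLD
    t = (speed - SLOW_SPEED) / (FAST_SPEED - SLOW_SPEED)
    return SLOW_THRESHOLD + t * (FAST_THRESHOLD - SLOW_THRESHOLD)

def redirect(scene_yaw, speed, dt):
    """Per frame, rotate the virtual scene by curvature * distance walked
    (radians), steering the walker onto an imperceptible curved path."""
    return scene_yaw + curvature_gain(speed) * speed * dt
```

Such a controller lets the walker cover a straight line in the virtual city while physically walking a circle whose radius shrinks as they slow down, keeping the injected rotation below the speed-specific perceptual threshold.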
Spatial perception and cognition:
Visiting Prof. Roy Ruddle investigated the role of body-based information in spatial navigation. He found that walking improves humans' cognitive maps of large virtual worlds (Ruddle, ToCHI 2011), and he investigated the role of body-based information and landmarks in route knowledge (Ruddle, Memory & Cognition 2011). We have also found that pointing to locations within one's city of residence relies on a single north-oriented reference frame, likely learned from maps (Frankenstein, Psychological Science, in press). Without maps available, navigators primarily memorize a novel space as local interconnected reference frames corresponding to a corridor or street (Meilinger 2010; Hensen, supervised by Meilinger, CogSci 2011). Consistent with these results, entorhinal grid cells in humans quickly remap their grid orientation after a change of the surrounding environment (Pape, supervised by Meilinger, SfN 2011). Additionally, we have found that egocentric distances are also underestimated in large-screen displays and are influenced by the distance to the screen (Alexandrova, APGV 2010).

Selected Publications

214. Strickrodt M and Meilinger T (November-9-2015) Movement, successive presentation and environmental structure and their influence on spatial memory in vista and environmental space, Conference on Human Mobility, Cognition and GISc, University of Copenhagen: Department of Geosciences and Natural Resource Management, Copenhagen, Denmark, 33-34.
CiteID: StrickrodtM2015
213. Mohler B (May-3-2015) Invited Lecture: Virtual Reality and Spatial/Body Perception, TEDx Jacobs University, Bremen, Germany.
CiteID: Mohler2015_3
212. Volkova E and Mohler BJ (May-27-2014) On-line Annotation System and New Corpora for Fine Grained Sentiment Analysis of Text, 5th International Workshop on Emotion, Social Signals, Sentiment & Linked Open Data (ES³LOD 2014), Satellite of LREC 2014 ELRA, 74-81.
CiteID: VolkovaM2014
211. Meilinger T (December-16-2010) Invited Lecture: How do we memorize everyday environments and use this memory for pointing and selecting routes?, Albert-Ludwigs-Universität Freiburg: Center for Cognitive Science, Freiburg i. Br., Germany.
CiteID: Meilinger2010_2
210. Alaimo SMC, Pollini L, Bresciani J-P and Bülthoff HH (October-28-2010) Abstract Talk: Augmented Human-Machine Interface: Providing a Novel Haptic Cueing to the Tele-Operator, 3rd Workshop for Young Researchers on Human-Friendly Robotics (HFR 2010), Tübingen, Germany.
CiteID: 6842
209. Dodds TJ (October-1-2010): Communication in Virtual Environments, 2010 Joint Virtual Reality Conference of EuroVR - EGVE - VEC (JVRC 2010), Stuttgart, Germany.
CiteID: 6998
208. Alexandrova IV, Volkova EP, Kloos U, Bülthoff HH and Mohler BJ (October-2010) Virtual Storyteller in Immersive Virtual Environments Using Fairy Tales Annotated for Emotion States In: Virtual Environments 2010, 2010 Joint Virtual Reality Conference of EuroVR - EGVE - VEC (JVRC 2010), Eurographics Association, Goslar, Germany, 65-68.
CiteID: 6682
207. Neth CT, Souman JL, Engel D, Kloos U, Bülthoff HH and Mohler BJ (October-2010): Velocity-dependent curvature gain and avatar use for Redirected Walking, 2010 Joint Virtual Reality Conference of EuroVR - EGVE - VEC (JVRC 2010), Stuttgart, Germany.
CiteID: 6755
206. Volkova E (October-2010): Virtual Storytelling of Fairy Tales: Towards Simulation of Emotional Perception of Text, 11th Conference of Junior Neuroscientists of Tübingen (NeNa 2010), Heiligkreuztal, Germany.
CiteID: 7088
205. Volkova EP: PETaLS: Perception of Emotions in Text - a Linguistic Simulation, Eberhard Karls Universität Tübingen, Germany, (October-2010). Diplom thesis
CiteID: 6821
204. Mohler BJ (September-30-2010) Invited Lecture: Self-Avatars in Immersive Virtual Environments as a Tool to Investigate Embodied Perception, Body Representation in Physical and Virtual Reality with Application to Rehabilitation, Monte Veritá, Switzerland.
CiteID: 6785
203. Witt JK, Kemmerer D, Linkenauger SA and Culham J (September-2010) A Functional Role for Motor Simulation in Identifying Tools Psychological Science 21(9) 1215-1219.
CiteID: WittKLC2010

Last updated: Monday, 01.09.2014