Project leaders

Betty Mohler, PhD
Tel: 07071 601-217
Fax: 07071 601-616
betty.mohler[at]tuebingen.mpg.de
 
Martin Dobricki, Dr. Phil.
Tel: 07071 601-215
Fax: 07071 601-616
martin.dobricki[at]tuebingen.mpg.de
 

Latest publications

Meilinger T, Franz G and Bülthoff HH (January-2012) From Isovists via Mental Representations to Behaviour: First Steps Toward Closing the Causal Chain. Environment and Planning B: Planning and Design 39(1) 48-62.

 

Perception and Action in Virtual Environments

Betty Mohler sees her self-animated avatar in her head-mounted display.
The goal of the research group "Perception and Action in Virtual Environments" is to study human perception, cognition, and behaviour in natural settings. To this end we use realistic virtual worlds (virtual reality, VR) that can be experienced with many senses. This allows us both to present sensory stimuli in a controlled environment and to manipulate them in ways that would be impossible in the real world.

Specifically, our state-of-the-art VR technology makes it possible to alter the visible body, the content of the virtual world, and sensory stimuli (visual, vestibular, kinaesthetic, tactile, and auditory) while a person is perceiving or acting. Our research group pursues a range of research questions, all of which involve measuring human performance in complex everyday situations, e.g. walking, driving, communicating, or orienting in space. We investigate how an animated avatar representing the user affects spatial perception, communication, and the sense of having a body, or a particular body. We are interested in how other avatars affect performance, emotion perception, learning and training, and the visual and bodily control of locomotion. We also study how people find their way in everyday environments such as buildings or cities, and how they represent these environments in memory. In short, our research group works toward a better understanding of human behaviour, perception, and cognition in complex everyday processes, using and advancing state-of-the-art VR technology.

Main research areas

Visual body influences perception:
Seeing a virtual avatar in the virtual environment influences egocentric distance estimates, and even more so when the avatar is self-animated (Mohler, Presence, 2010). Eye height influences egocentric space and dimension estimates in virtual environments (Leyrer, APGV 2011). Seeing a virtual character (self or other) impacts subsequent performance of common tasks in virtual environments (McManus, supervised by Mohler, APGV 2011). The size of visual body parts (hands/arm length) influences size and distance estimates in virtual worlds (Linkenauger, ECVP and VSS 2011). Taken together, these results argue that the body plays a central role in the perception of the surrounding environment.
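One textbook way to see how eye height can enter egocentric distance judgments is the angle-of-declination model on flat ground: a target seen at declination θ below the eye-level horizon, from eye height h, lies at distance d = h / tan(θ). This is a standard geometric sketch, not necessarily the model tested in the studies above; the numbers below are illustrative.

```python
import math

def ground_distance(eye_height_m: float, declination_rad: float) -> float:
    """Distance to a point on flat ground seen at a given angle of
    declination below the eye-level horizon (d = h / tan(theta))."""
    return eye_height_m / math.tan(declination_rad)

# If the (virtual) eye height is raised while the retinal declination of
# the target stays fixed, the same target is geometrically farther away.
d_low = ground_distance(1.6, math.radians(20))   # typical standing eye height
d_high = ground_distance(1.9, math.radians(20))  # manipulated virtual eye height
```

Under this model, manipulating virtual eye height while leaving the retinal image otherwise unchanged shifts the distances the geometry implies, which is one reason eye height is a natural VR manipulation.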
 
The role of visual body information in human interaction and communication:
Current state-of-the-art motion-capture tracking enables scientists to animate avatars with multiple participants' body motions in real time. We have used this technology to conduct experiments investigating the role of body language in successful communication and interaction. We have found that body language is important for successful communication in a word-communication task, and that both the speaker's and the listener's body movements (as seen through animated avatars) impact communication (Dodds, CASA, 2010). We have further shown that people move more when wearing xSens Moven suits and using large-screen projection technology than when wearing Vicon rigid-body tracking objects and viewing the virtual world in a low-field-of-view head-mounted display (Dodds, PLoS One 2011). We have also investigated the role of visual information about the interaction partner in task performance in a table-tennis paradigm, showing that the social context (competitive or cooperative) mediates the use of visual information about the interaction partner (Streuber, EBR 2011). We have also used motion-capture technology to investigate the use of VR for medical training (Alexandrova, CASA 2011) and the emotional expression of body language (Volkova, IMRF 2011).
 
Self-motion perception while walking and reaching:
We have conducted studies investigating the sensory contributions to encoding walking velocity (visual, vestibular, proprioceptive, efference copy) and have found a new measure for self-motion perception: the active pointing trajectory (Campos, PLoS One, 2009). We have further demonstrated that imagined walking differs from physical walking: participants point in a way indicating that they do not simulate all of their sensory information for walking when imagining it. Additionally, we have investigated humans' ability to detect when they are walking on a curved path, and the influence of walking speed on curvature sensitivity. Walking speed does influence curvature sensitivity: at slower velocities, people are less sensitive to walking on a curve. We exploited this perceptual knowledge to design a dynamic gain controller for redirected walking, which enables participants to walk unaided through a virtual city (Neth, IEEE-VR 2011). Finally, we have investigated motor learning for reaching under different viewpoints and different degrees of visual realism of the arm and environment, and make suggestions for the use of VR in rehabilitation and motor-learning experiments (Shomaker, Tesch, Bülthoff & Bresciani, EBR 2011).
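Redirected walking of this kind typically injects a small curvature gain, i.e. an imperceptible rotation of the virtual scene per metre walked, so the user unknowingly walks a real-world circle while the virtual path stays straight. The finding that slower walkers are less sensitive to curvature suggests scaling that gain with walking speed. The sketch below is purely illustrative: the threshold speeds, gain values, and linear interpolation are assumptions for demonstration, not the controller from Neth et al.

```python
def curvature_gain(speed_m_s: float,
                   slow_speed: float = 0.75, fast_speed: float = 1.5,
                   gain_slow: float = 0.13, gain_fast: float = 0.045) -> float:
    """Rotation (rad) injected per metre walked.

    Slower walkers tolerate more unnoticed curvature, so the injected
    gain is larger at low speeds and is linearly reduced toward a
    conservative value at high speeds (all numbers hypothetical).
    """
    if speed_m_s <= slow_speed:
        return gain_slow
    if speed_m_s >= fast_speed:
        return gain_fast
    t = (speed_m_s - slow_speed) / (fast_speed - slow_speed)
    return gain_slow + t * (gain_fast - gain_slow)
```

Each frame, such a controller would rotate the virtual camera heading by `curvature_gain(speed) * distance_walked_this_frame`, steering the user away from the tracking-space boundary while the virtual trajectory remains straight.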
 
Spatial perception and cognition:
Visiting Prof. Roy Ruddle investigated the role of body-based information in spatial navigation. He found that walking improves humans' cognitive maps of large virtual worlds (Ruddle, ToCHI 2011), and he investigated the role of body-based information and landmarks in route knowledge (Ruddle, Memory & Cognition 2011). We have also found that pointing to locations within one's city of residence relies on a single north-oriented reference frame, likely learned from maps (Frankenstein, Psychological Science, in press). Without maps available, navigators primarily memorize a novel space as local interconnected reference frames corresponding to a corridor or street (Meilinger 2010; Hensen, supervised by Meilinger, CogSci 2011). Consistent with these results, entorhinal grid cells in humans quickly remap their grid orientation after a change of the surrounding environment (Pape, supervised by Meilinger, SfN 2011). Additionally, we have found that egocentric distances are also underestimated in large-screen displays and are influenced by the distance to the screen (Alexandrova, APGV 2010).

Selected Publications

35. Mohler B, Thompson WB, Creem-Regehr SH, Pick HL, Scholes J, Rieser JJ and Willemsen P (August-2004) Visual Motion Influences Locomotion in a Treadmill Virtual Environment, 1st Symposium on Applied Perception in Graphics and Visualization (APGV 2004), ACM Press, New York, NY, USA, 19-22.
34. Creem-Regehr SH, Mohler B and Thompson WB (August-2004): Perceived Slant is Greater from Far versus Near Distances, Fourth Annual Meeting of the Vision Sciences Society (VSS 2004), Sarasota, FL, USA, Journal of Vision, 4(8) 374.
33. Mohler B, Thompson WB, Creem-Regehr SH, Willemsen P, Rieser JJ and Scholes J (August-2004): Perceptual-Motor Recalibration on a Virtual Reality Treadmill, Fourth Annual Meeting of the Vision Sciences Society (VSS 2004), Sarasota, FL, USA, Journal of Vision, 4(8) 794.
32. Bresciani J-P, Ernst MO, Drewing K, Bouyer G, Maury V and Kheddar A (June-2004) Auditory modulation of tactile taps perception, 4th International Conference EuroHaptics 2004, Institute of Automatic Control Engineering, München, Germany, 198-202.
31. Mohler B (May-2004): Computer graphics research at the University of Utah: What you should know about graduate school, Host: Dr. Roger Webster.
30. Mohler B (May-2004): Interactions between visual information for self-motion and locomotion, Host: Dr. Jack Loomis.
29. Mohler B (May-2004): Interactions between visual information for self-motion and locomotion, Host: Dr. Dennis Proffitt, Psychology Department.
28. Totzke I, Krüger H-P, Hofmann M, Meilinger T, Rauch N and Schmidt G (April-2004) Kompetenzerwerb für Informationssysteme: Einfluss des Lernprozesses auf die Interaktion mit Fahrerinformationssystemen, Verband der Automobilindustrie, Berlin, Germany, 169. Series: FAT-Schriftenreihe; 184.
27. Meilinger T and Knauff M (April-2004): Nach dem Weg fragen oder Karte studieren, was ist besser? Ein Feldexperiment, 46. Tagung Experimentell Arbeitender Psychologen (TeaP 2004), Giessen, Germany, Experimentelle Psychologie, 46 169.
26. Bresciani J-P, Ernst MO, Drewing K, Bouyer G, Maury V and Kheddar A (February-2004): Feeling What You Hear: An Auditory-Evoked Tactile Illusion, 7th Tübingen Perception Conference (TWK 2004), Tübingen, Germany.
25. Sarlegna F, Blouin J, Vercher J-L, Bresciani JP, Bourdin C and Gauthier GM (2004) Online control of the direction of rapid reaching movements. Experimental Brain Research 157 468-471.
24. Blouin J, Bresciani J-P and Gauthier GM (2004) Shifts in the retinal image of a visual scene during saccades contribute to the perception of reached gaze direction in humans. Neuroscience Letters 357 29-32.
Last updated: Monday, 01.09.2014