Project Leaders

Betty Mohler, PhD
Tel: 07071 601-217
Fax: 07071 601-616
betty.mohler[at]tuebingen.mpg.de
 
Martin Dobricki, Dr. Phil.
Tel: 07071 601-215
Fax: 07071 601-616
martin.dobricki[at]tuebingen.mpg.de
 

Latest Publications

Meilinger T, Franz G and Bülthoff HH (January 2012) From Isovists via Mental Representations to Behaviour: First Steps Toward Closing the Causal Chain. Environment and Planning B: Planning and Design 39(1) 48-62.


 

Perception and Action in Virtual Environments

Betty Mohler sees her self-animated avatar in her head-mounted display.
The goal of the research group "Perception and Action in Virtual Environments" is to investigate human perception, cognition, and behavior in natural settings. To this end, we use realistic virtual worlds (virtual reality, VR) that can be experienced with multiple senses. This allows us both to present sensory stimuli in a controlled environment and to manipulate them in ways that would not be possible in the real world.

Specifically, our state-of-the-art VR technology allows us to alter the visible body, the content of the virtual world, and sensory stimuli (visual, vestibular, kinesthetic, tactile, and auditory) during perception or action. Our research group pursues a range of research questions, all of which concern measuring human performance in complex, everyday situations, e.g. walking, driving, communicating, or orienting in space. We investigate how an animated avatar representing the user affects spatial perception, communication, and the sense of having a body, or a particular body. We are interested in how other avatars affect performance, emotion perception, learning and training, and the visual and bodily control of locomotion. We also study how people orient themselves in everyday environments such as buildings or cities, and how they represent these environments in memory. In summary, our research group works toward a better understanding of human behavior, perception, and cognition in complex everyday tasks. To this end, we use and advance state-of-the-art VR technologies.

Main research areas

Visual body influences perception:
Seeing a virtual avatar in the virtual environment influences egocentric distance estimates; if the avatar is self-animated, the influence is even stronger (Mohler, Presence, 2010). Eye-height influences egocentric space and dimension estimates in virtual environments (Leyrer, APGV 2011). Seeing a virtual character (self or other) impacts subsequent performance of common tasks in virtual environments (McManus, supervised by Mohler, APGV 2011). The size of visual body parts (hand size/arm length) influences size and distance estimates in virtual worlds (Linkenauger, ECVP and VSS 2011). Taken together, these results argue that the body plays a central role in the perception of our surrounding environment.
 
The role of visual body information in human interaction and communication:
Current state-of-the-art motion-capture tracking enables scientists to animate avatars with multiple participants' body motions in real time. We have used this technology to conduct experiments investigating the role of body language in successful communication and interaction. We have found that body language is important for successful communication in a word-communication task and that both the speaker's and the listener's body movements (as seen through animated avatars) impact communication (Dodds, CASA, 2010). We have further shown that people move more when wearing xSens Moven suits and using large-screen projection technology than when wearing Vicon rigid-body tracking objects and viewing the virtual world in a low field-of-view head-mounted display (Dodds, PLoS One 2011). We have also investigated how visual information about the interaction partner affects task performance in a table-tennis paradigm, showing that the social context (competitive or cooperative) mediates the use of this information (Streuber, EBR 2011). In addition, we have used motion-capture technology to investigate the use of VR for medical training (Alexandrova, CASA 2011) and the emotional expression of body language (Volkova, IMRF 2011).
 
Self-motion perception while walking and reaching:
We have conducted studies investigating the sensory contributions (visual, vestibular, proprioceptive, efference copy) to encoding walking velocity and have found a new measure for self-motion perception: the active pointing trajectory (Campos, PLoS One, 2009). We have further demonstrated that imagined walking differs from physical walking: participants' pointing indicates that they do not simulate all of the sensory consequences of walking when merely imagining it. Additionally, we have investigated humans' ability to detect when they are walking on a curved path and the influence of walking speed on curvature sensitivity. We have found that walking speed does influence curvature sensitivity: at slower walking speeds, people are less sensitive to path curvature. We exploited this perceptual knowledge to design a dynamic gain controller for redirected walking (sketched below), which enables participants to walk unaided through a virtual city (Neth, IEEE-VR 2011). Finally, we have investigated motor learning for reaching under different viewpoints and different degrees of visual realism of the arm and environment, and make suggestions for the use of VR in rehabilitation and motor-learning experiments (Shomaker, Tesch, Buelthoff & Bresciani, EBR 2011).
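To make the redirection principle concrete, the following Python sketch shows a minimal speed-dependent curvature-gain controller. It only illustrates the general idea: the linear threshold model, its constants, and all function names here are assumptions made for this sketch, not the model or values published in Neth et al. (IEEE-VR 2011).

import math

def unnoticed_curvature(speed):
    """Assumed maximum path curvature (1/m) that goes unnoticed at a given
    walking speed (m/s); slower walkers tolerate more curvature."""
    v_slow, v_fast = 0.75, 1.5    # assumed reference speeds (m/s)
    k_slow, k_fast = 0.13, 0.05   # assumed curvature thresholds (1/m)
    t = min(max((speed - v_slow) / (v_fast - v_slow), 0.0), 1.0)
    return k_slow + t * (k_fast - k_slow)   # linear interpolation

def redirect_step(pos, heading, target, speed, dt):
    """Rotation (rad) to inject into the virtual scene this frame, bending
    the user's physical path toward `target` (e.g. the tracking-space
    centre) while staying below the curvature detection threshold."""
    desired = math.atan2(target[1] - pos[1], target[0] - pos[0])
    # Signed heading error, wrapped to [-pi, pi].
    error = math.atan2(math.sin(desired - heading), math.cos(desired - heading))
    # Curvature = turn rate / speed, so the unnoticed turn rate is k * v.
    max_turn = unnoticed_curvature(speed) * speed * dt
    return max(-max_turn, min(max_turn, error))

Applied every frame, the returned rotation imperceptibly steers the walker along a physical curve while the virtual path stays straight; because the assumed threshold grows at lower speeds, slower walkers can be redirected more aggressively.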
 
Spatial perception and cognition:
Visiting Prof. Roy Ruddle investigated the role of body-based information in spatial navigation. He found that walking improves humans' cognitive maps of large virtual worlds (Ruddle, ToCHI 2011), and he investigated the role of body-based information and landmarks in route knowledge (Ruddle, Memory & Cognition 2011). We have also found that pointing to locations within one's city of residence relies on a single north-oriented reference frame, likely learned from maps (Frankenstein, PsychScience, in press). Without maps available, navigators primarily memorize a novel space as local interconnected reference frames corresponding to a corridor or street (Meilinger, 2010; Hensen, supervised by Meilinger, CogSci 2011). Consistent with these results, entorhinal grid cells in humans quickly remap their grid orientation after a change of the surrounding environment (Pape, supervised by Meilinger, SfN 2011). Additionally, we have found that egocentric distances are also underestimated in large-screen displays, and that these estimates are influenced by the distance to the screen (Alexandrova, APGV 2010).

Selected Publications

59. Meilinger T, Knauff M and Bülthoff HH (July 2006) Working memory in wayfinding: a dual task experiment in a virtual city. 28th Annual Conference of the Cognitive Science Society (CogSci 2006), 5th International Conference of the Cognitive Science Society (ICCS 2006), Curran, Red Hook, NY, USA, 585-590.
58. Meilinger T (June 21, 2006) Invited Lecture: Memory for environmental spaces. Albert-Ludwigs-Universität Freiburg: Center for Cognitive Science, Freiburg i. Br., Germany.
57. Ernst MO, Bresciani J-P and Dammeier F (June 20, 2006) Abstract Talk: Vision and touch are automatically integrated for the perception of sequences of events. 7th International Multisensory Research Forum (IMRF 2006), Dublin, Ireland (191).
56. Meilinger T (June 7, 2006) Invited Lecture: Memory for environmental spaces. Universität Tübingen: Kognitive Neurowissenschaft, Tübingen, Germany.
55. Mohler B, Thompson WB and Creem-Regehr SH (June 2006) Absolute egocentric distance judgments are improved after motor and cognitive adaptation within HMD. 6th Annual Meeting of the Vision Sciences Society (VSS 2006), Sarasota, FL, USA, Journal of Vision 6(6) 725.
54. Meilinger T (June 2006) Orientation with maps: memory systems, memory content and strategies. 29th Annual German Conference on Artificial Intelligence (KI 2006), Bremen, Germany.
53. Bresciani J-P, Dammeier F and Ernst MO (April 2006) Vision and touch are automatically integrated for the perception of sequences of events. Journal of Vision 6(5) 554-564.
52. Meilinger T (March 4, 2006) Invited Lecture: Psychophysik der Mensch-Maschine-Schnittstelle. Aesculap AG, Tuttlingen, Germany.
51. Meilinger T, Widinger A, Knauff M and Bülthoff HH (March 2006) Verbal, Visual and Spatial Memory in Wayfinding. 9th Tübingen Perception Conference (TWK 2006), Tübingen, Germany.
50. Büchner SJ, Hölscher C and Meilinger T (March 2006) Abstract Talk: "Wie komm' ich da jetzt hin?": Der Einfluss von Navigationshilfen und Strategiewahl auf das Bewegungsverhalten in einem komplexen Gebäude. 48. Tagung Experimentell Arbeitender Psychologen (TeaP 2006), Mainz, Germany, 140.
49. Totzke I, Meilinger T and Krüger H-P (September 2005) Adaptivität und Adaptierbarkeit von menügesteuerten Informationssystemen - Kein Ansatz zur Lösung des Problems der Erlernbarkeit?! In: Zustandserkennung und Systemgestaltung: 6. Berliner Werkstatt Mensch-Maschine-Systeme, VDI-Verlag, Düsseldorf, 35-40.
48. Thompson WB, Mohler B and Creem-Regehr SH (September 2005) Does perceptual-motor recalibration of locomotion depend on perceived self motion or the magnitude of optical flow? Fifth Annual Meeting of the Vision Sciences Society (VSS 2005), Sarasota, FL, USA, Journal of Vision 5(8) 386.

Last updated: Monday, 01.09.2014