Trevor Dodds

Alumni of the Department Human Perception, Cognition and Action
Alumni of the Group Perception and Action in Virtual Environments
Alumni of the Research Group Body and Space Perception

Main Focus

My research focuses on collaborative interaction in virtual environments. Multiple users can occupy a shared three-dimensional environment, either collocated or across a network (e.g. the Internet). Such an environment enables people to work with shared data, to see each other's actions and to communicate. Virtual environments have applications in areas such as design, training and urban planning. They also enable people to interact in ways that are not possible in the 'real world'.

My previous work developed techniques known as mobile group dynamics, which improved teamwork in collaborative virtual environments [2]. Further, I developed a concept called virtual time, which allowed multiple users to interact asynchronously, i.e. without being simultaneously present in the environment.
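
As a purely illustrative sketch of the idea (not the actual implementation), asynchronous interaction can be pictured as a timestamped event log: one user's actions are recorded against a virtual clock and later replayed within whatever time window another user is viewing. The class and member names below (VirtualTimeline, Record, Replay) are hypothetical.

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch of the virtual-time idea: actions recorded by one
    // user can be replayed later, so users who are never online together can
    // still observe each other's work unfolding.
    public class VirtualTimeline
    {
        private readonly List<(double Time, string Action)> events =
            new List<(double, string)>();

        // Record an action at the acting user's current virtual time.
        public void Record(double virtualTime, string action)
        {
            events.Add((virtualTime, action));
        }

        // Replay every action that falls inside the observer's time window.
        public IEnumerable<string> Replay(double from, double to)
        {
            foreach (var (time, action) in events)
            {
                if (time >= from && time < to)
                {
                    yield return action;
                }
            }
        }
    }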

My most recent projects focus on how people communicate in virtual environments using sophisticated motion tracking technology. Users' motions are mapped onto self-avatars in real time. This can be used (a) to investigate nonverbal communication (gestures, body language, eye gaze) by systematically manipulating the avatar; and (b) to compare face-to-face and avatar-mediated communication. Researching the former can help answer fundamental questions about interpersonal communication. The latter can help us understand what is important about face-to-face conversation, whether it is possible to reproduce this using virtual reality technology, and whether the effectiveness of technology-mediated collaboration can be increased using other (non-naturalistic) methods.

Introduction

Virtual environments (VEs) are of interest (a) as objects of study in themselves; (b) as applications for the real world, e.g. medical training [1], urban planning [2] and collaboration; and (c) as tools with which one can research ‘real world’ experiences. Collaboration in virtual environments requires both interaction and communication with the other parties involved. Here, we focus on the communication aspect. When we talk to each other face-to-face, body gestures naturally accompany our speech [3]. Using state-of-the-art motion capture tracking, we can map a user's body motion onto a virtual character in real time, creating ‘self-animated’ avatars.
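
As a minimal sketch of how such a mapping can look in practice, assuming tracked joint data is already streamed into Unity (e.g. from a Vicon or MVN setup), the following C# fragment copies tracked joint rotations onto an avatar rig every frame. The class and field names are illustrative, and calibration and retargeting steps are omitted.

    using UnityEngine;

    // Illustrative Unity (C#) sketch: each frame, copy the rotations of
    // tracked body joints onto the matching bones of an avatar rig, yielding
    // a 'self-animated' avatar.
    public class SelfAnimatedAvatar : MonoBehaviour
    {
        public Transform[] trackedJoints; // source joints, fed by the tracking system
        public Transform[] avatarBones;   // matching avatar bones, in the same order

        void LateUpdate()
        {
            int n = Mathf.Min(trackedJoints.Length, avatarBones.Length);
            for (int i = 0; i < n; i++)
            {
                avatarBones[i].rotation = trackedJoints[i].rotation;
            }
            // Let the avatar's root follow the tracked pelvis (assumed index 0).
            if (n > 0)
            {
                transform.position = trackedJoints[0].position;
            }
        }
    }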

Goals

The goal of the present project is to investigate the effect of body gestures on communication in virtual reality. Further, we design experiments to gain insights into the role of body language in face-to-face communication in the real world.

Methods

Three studies analysed performance in a communication task. The first two studies used head-mounted display virtual reality; the third used a large-projection virtual environment. Participants worked in pairs. One participant in each pair, the ‘describer’, had to describe the meanings of words to their partner, the ‘guesser’, who had to infer the word being described. Our main comparison was between static and self-animated avatars.

Initial results

In the first study, we found that participants performed better in the communication task with bidirectional nonverbal communication in a third-person perspective, i.e. when participants were aware of their own avatar. Describers also gave up on more words when their partner’s avatar was static, i.e. when no nonverbal feedback was available. In the second study, we further investigated the importance of nonverbal feedback, and found that participants performed worse when the guesser’s avatar was animated by a plausible but unintelligent pre-recorded animation [4]. In both studies, participants moved more when performing the same task face-to-face in the real world than they did in the virtual environment. In the third study, we replicated our previous results in a first-person perspective in a large-projection virtual environment. In this new scenario, participants moved as much as we had previously recorded in the real world.

Initial conclusion

Taken together, the results show that nonverbal communication benefits a word-description task in virtual environments; that awareness of our own bodies is important for this benefit to take place; and that real nonverbal feedback cannot be substituted by an unintelligent animation. The latter finding is relevant to the generation of avatars that attempt to provide nonverbal feedback automatically. In addition, providing users with a scenario that lets them move as much as they do when talking face-to-face is important for making meaningful comparisons with interpersonal communication in the real world.

References

  1. Alexandrova IV, Rall M, Breidt M, Tullius G, Kloos C, Bülthoff HH and Mohler BJ (in press) Enhancing Medical Communication Training Using Motion Capture, Perspective Taking and Virtual Reality. 19th Medicine Meets Virtual Reality Conference (MMVR 2012).

  2. Dodds TJ and Ruddle RA (2009) Using mobile group dynamics and virtual time to improve teamwork in large-scale collaborative virtual environments. Computers & Graphics 33(2): 130-138.

  3. McNeill D (2007) Gesture & Thought. The University of Chicago Press, Chicago and London.

  4. Dodds TJ, Mohler BJ and Bülthoff HH (2011) Talk to the virtual hands: Self-animated avatars improve communication in head-mounted display virtual environments. PLoS ONE 6(10): e25759. doi:10.1371/journal.pone.0025759.

  5. Dodds TJ, Mohler BJ, de la Rosa S, Streuber S and Bülthoff HH (2011) Embodied Interaction in Immersive Virtual Environments with Real Time Self-animated Avatars. Workshop Embodied Interaction: Theory and Practice in HCI (CHI 2011), ACM Press, New York, NY, USA, 132-135.

  6. Dodds TJ, Mohler BJ and Bülthoff HH (2010) A Communication Task in HMD Virtual Environments: Speaker and Listener Movement Improves Communication. 23rd Annual Conference on Computer Animation and Social Agents (CASA 2010).

Curriculum Vitae

Employment / experience

2012-2013 Project management board member.
2012 Korea University, Seoul, Korea. Teaching: Developing perception experiments for the iPad (C#).
2012 Program Committee member.
2010 Max Planck Institute for Biological Cybernetics. Teaching: Introduction to Computer Graphics and Virtual Reality.

2009-Present

Postdoctoral Research Scientist at the Max Planck Institute for Biological Cybernetics, Human Perception, Cognition and Action department. Familiar with Vicon Tracker, MVN motion capture suits, head-mounted displays (nVis SX60, SX111), Unity 3D and MiddleVR, plus custom software (C/C++/C#/Perl/Python 3/Matlab/Gawk/Bash/R; git/svn; Linux/OS X/iOS/Windows).

2009

University of Leeds, UK. Research Assistant: Software development for Heterogeneous Collaborative Visualization (Powerwall-to-desktop; C++).

2006-2008

University of Leeds, UK. Teaching: Algorithms and Complexity tutorials.

2006-2007

University of Leeds, UK. Schools Project: Developed software and teaching materials for schools (Java; development for mobile devices).

Education

2005-2009

Ph.D., Computer Science: Collaborative Interaction in Virtual Environments. School of Computing, University of Leeds, UK.

2002-2005

B.Sc. (Hons., 1st class) Computer Science, University of Leeds. Undergraduate dissertation: Collaborative Interaction in Virtual Environments. Received a Doctoral Training Grant scholarship from the School of Computing to continue this topic as a Ph.D.
