Motion Perception & Simulation


The subjective experience of locomotion, i.e., the displacement of a human observer through the environment, is what we call self-motion. To fully comprehend this pervasive experience, we take a two-fold approach: on the one hand, we carry out fundamental research to investigate human perception of self-motion; on the other, we develop state-of-the-art motion simulation technologies and algorithms. Ultimately, these two research directions build upon each other [20]: the more detailed our knowledge of the mechanisms behind perception, the higher the fidelity that can be achieved in simulations; and the more realistic the simulations, the more advanced the research that can be conducted on motion perception.


Research Goals

Our fundamental research investigates both the low-level processes of uni- and multi-sensory visual/inertial motion perception, and the high-level abstract representations of self-motion, including the conscious experience of, and cognitive response to, it. Low-level research allows us to describe the relation between actual and perceived motion characteristics [4,5,12,13,15,16], whereas high-level research helps us understand the causes of motion sickness [18], predict the subjective assessment of motion simulation fidelity [1,2,20,21], and improve the performance of operators of remotely controlled vehicles [10,11]. Our applied research on simulation technologies aims to develop ecologically valid virtual environments that achieve a high-fidelity simulation of self-motion. First, we work on the creation of visual environments that are used in our experiments or as development tools. Second, we explore ways to make optimal use of a motion simulator's capabilities to provide the simulator user with a realistic motion experience while respecting the physical limits of the simulator [8,9,19].


Methods and Facilities

In our low-level research on motion perception, we present participants with elementary motion stimuli using various (adaptive) psychophysical paradigms. Research on perceptual thresholds typically involves n-Alternative Forced Choice (nAFC) tasks, in which participants select one of n sequentially presented motion stimuli on the basis of a criterion of interest [4-7,12-17]. In addition, we use functional Near-InfraRed Spectroscopy (fNIRS) to obtain direct readings of cortical hemodynamic activity, which may provide an additional dimension of experimental evidence [7]. In our high-level research, we seek to determine qualities of conscious experience, such as perceived simulation fidelity and workload, for complex scenarios with high ecological validity. As stimuli, we present, for example, virtual driving/flying scenarios, and we have 'played back' visual-inertial recordings of actual car driving and helicopter flight. For data collection in these experiments we have adopted questionnaires, adapted magnitude estimation methods, and developed new ones (e.g., 'continuous rating') [1-3,20-23].
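
By way of illustration, the sketch below shows a generic 2-down/1-up staircase of the kind commonly used to drive such forced-choice threshold experiments. The `present_trial` callback and the simulated observer inside it are hypothetical stand-ins for the actual stimulus presentation, not our experimental code.

```python
import random

def present_trial(delta_deg):
    """Hypothetical stand-in for one 2AFC trial: present two motion
    stimuli differing by delta_deg and return True if the (simulated)
    participant identifies the larger one correctly."""
    p_correct = 0.5 + 0.5 * min(delta_deg / 4.0, 1.0)  # toy observer
    return random.random() < p_correct

def staircase_2down1up(start=8.0, step=0.5, min_delta=0.25, max_reversals=10):
    """2-down/1-up staircase: converges on the ~70.7%-correct level."""
    delta, streak, last_dir, reversals = start, 0, 0, []
    while len(reversals) < max_reversals:
        if present_trial(delta):
            streak += 1
            if streak < 2:
                continue                 # one correct: stay at this level
            streak, direction = 0, -1    # two correct in a row -> harder
        else:
            streak, direction = 0, +1    # one error -> easier
        if last_dir != 0 and direction != last_dir:
            reversals.append(delta)      # record direction reversals
        last_dir = direction
        delta = max(min_delta, delta + direction * step)
    return sum(reversals[-6:]) / len(reversals[-6:])  # average of last 6

print(f"estimated threshold: {staircase_2down1up():.2f} deg")
```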

For the majority of our studies, we rely on the MPI CyberMotion Simulator, a dynamic motion platform for immersive virtual environments and vehicle simulation. We also operate the MPI CableRobot Simulator, the world's first cable robot for passengers. Furthermore, we use the CyberPod, a mid-size hexapod motion simulator. These platforms provide a range of possibilities in terms of inertial stimulation, and are often used in combination with a variety of visualization tools to provide immersive virtual environments. For work on visual motion we also make use of the PanoLab, the BackPro and purpose-built VR setups.


Selected Results

We have identified functions that relate the perceived characteristics of motion stimuli to their actual characteristics, such as velocity, acceleration, and heading [4,5,6,12,13], and we have determined absolute and differential thresholds for yaw, roll, and heading [7,14-16].
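
To illustrate how such thresholds are typically extracted (the cited studies may use different fitting pipelines), the sketch below fits a cumulative-Gaussian psychometric function to hypothetical 2AFC data: mu gives the point of subjective equality and sigma the differential threshold (JND). The data values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Cumulative-Gaussian psychometric function.
def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical 2AFC data: comparison yaw velocity (deg/s) vs. the
# proportion of "comparison faster" responses.
x = np.array([6.0, 8.0, 9.0, 10.0, 11.0, 12.0, 14.0])
p = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.95])

(mu, sigma), _ = curve_fit(psychometric, x, p, p0=[10.0, 2.0])
print(f"PSE = {mu:.1f} deg/s, JND = {sigma:.1f} deg/s")
```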

We have investigated how perceptions of any particular characteristic are affected by the presence of other stimuli [15-17] and how multiple sensory systems interact [5,6] (Figure 1).
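
A standard reference model for such visual/inertial interactions is maximum-likelihood cue combination, in which each cue is weighted by its reliability (inverse variance). The minimal sketch below states the model in its general form; it is a common benchmark rather than a description of our specific findings.

```python
# Maximum-likelihood cue combination: the fused estimate weights each
# cue by its inverse variance, and is never less reliable than the
# best single cue.
def fuse(mu_vis, var_vis, mu_ine, var_ine):
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_ine)
    mu_fused = w_vis * mu_vis + (1.0 - w_vis) * mu_ine
    var_fused = 1.0 / (1.0 / var_vis + 1.0 / var_ine)
    return mu_fused, var_fused

# Example: a reliable visual heading estimate dominates a noisy inertial one.
print(fuse(mu_vis=2.0, var_vis=1.0, mu_ine=6.0, var_ine=4.0))  # (2.8, 0.8)
```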

We have determined how visual motion contributes to the illusory perception of physical motion (vection) and how it might explain motion sickness [18].

We have measured the perceived quality (visual/inertial motion incongruence) of motion cueing solutions and created several models that effectively predict it [2,20,21,23].
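
As a loose illustration of the kind of signal such models operate on (a hypothetical metric, not the published models), the sketch below computes a weighted time course of visual/inertial mismatch that could be regressed against continuous incongruence ratings. The gains `k_f` and `k_w` are free parameters introduced here for illustration.

```python
import numpy as np

def incongruence(f_veh, f_sim, w_veh, w_sim, k_f=1.0, k_w=1.0):
    """Hypothetical weighted mismatch between vehicle and simulator
    specific forces (f, m/s^2) and rotational rates (w, rad/s).
    Inputs are arrays with one row per time sample; the output is a
    mismatch time course, one value per sample."""
    err_f = np.linalg.norm(np.asarray(f_veh) - np.asarray(f_sim), axis=1)
    err_w = np.linalg.norm(np.asarray(w_veh) - np.asarray(w_sim), axis=1)
    return k_f * err_f + k_w * err_w
```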

We have shown how teleoperation performance benefits from the addition of a motion feedback channel that informs the operator about the vehicle state and task-relevant motion [10,11].

We have developed software tools for various research projects on motion simulation. In particular, we have created an Off-line Motion Simulation Framework (OMSF), which calculates offline control inputs (optimal motion trajectories) for any motion simulator, given the simulator's dynamics and constraints and a predefined reference trajectory.
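
The toy problem below conveys the underlying idea in one degree of freedom, assuming a simple double-integrator platform, an invented reference profile, and a 0.5 m position limit; it is a sketch of the offline optimization concept, not OMSF code.

```python
import numpy as np
from scipy.optimize import minimize

# Choose simulator accelerations a[k] that track a reference profile
# while the doubly-integrated platform position stays within +/- 0.5 m.
dt, n = 0.1, 100
t = np.arange(n) * dt
a_ref = 2.0 * np.sin(0.5 * np.pi * t)    # hypothetical reference (m/s^2)

def position(a):                          # double integration of acceleration
    return np.cumsum(np.cumsum(a) * dt) * dt

cost = lambda a: np.sum((a - a_ref) ** 2)                       # tracking error
cons = {"type": "ineq", "fun": lambda a: 0.5 - np.abs(position(a))}  # rail limit
sol = minimize(cost, np.zeros(n), method="SLSQP", constraints=[cons])
print("max |position|:", np.abs(position(sol.x)).max())
```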

We have also created a software library that serves as a template for model-predictive control (tmpc) applications [8,9]. Both software tools are currently in use on all our simulators [1,3,9,22].
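
The following minimal receding-horizon loop illustrates the model-predictive control principle on a double integrator; the matrices, horizon, and closed-form least-squares solve are assumptions for illustration and do not reflect the tmpc API.

```python
import numpy as np

# MPC principle: at each step, solve a finite-horizon tracking problem
# and apply only the first input of the optimal sequence.
dt, horizon = 0.1, 20
A = np.array([[1.0, dt], [0.0, 1.0]])     # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])

def mpc_step(x, x_ref):
    # Stack predictions x_{k+1..k+H} = Phi x + Gamma u, then solve the
    # unconstrained tracking problem min ||Phi x + Gamma u - x_ref||.
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(horizon)])
    Gamma = np.zeros((2 * horizon, horizon))
    for i in range(horizon):
        for j in range(i + 1):
            Gamma[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B
    u = np.linalg.lstsq(Gamma, np.tile(x_ref, horizon) - Phi @ x, rcond=None)[0]
    return u[0]                            # first input only

x = np.array([0.0, 0.0])
for _ in range(50):                        # drive platform to x = 0.5 m
    x = A @ x + (B * mpc_step(x, np.array([0.5, 0.0]))).ravel()
print("final state:", x)
```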

We have developed QVis, a tool to visualize motion trajectories and motion cueing solutions for different simulators (Figure 2). This tool has been used in several collaborations with industrial partners. QVis has also been adapted into QBrain for the visualization of cortical hemodynamics from fNIRS data (Figure 2).

We have designed, implemented, and tested a 'motion teleoperation' setup in which an unmanned aerial vehicle is controlled from within a motion simulator; the simulator provides motion feedback on the vehicle state and task-relevant motion in addition to the usual visual feedback [10,11].

We have also designed a 'theoretical driving simulator' [19], which can be used to determine the minimal requirements a simulator must meet to replicate any desired trajectory.
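
In the same spirit, a back-of-the-envelope calculation (our simplification to the 1:1 replication case, not the model of [19]) integrates a desired acceleration profile to obtain the velocity and displacement envelopes a simulator would need:

```python
import numpy as np

def envelope(a, dt):
    """Peak |acceleration|, |velocity|, and |displacement| required to
    replicate the acceleration profile a (sampled at dt) one-to-one."""
    v = np.cumsum(a) * dt
    x = np.cumsum(v) * dt
    return np.abs(a).max(), np.abs(v).max(), np.abs(x).max()

dt = 0.01
t = np.arange(0, 5, dt)
a = 3.0 * np.sin(2 * np.pi * 0.2 * t)   # hypothetical accelerate/brake profile
print("required |a|, |v|, |x|:", envelope(a, dt))
```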

We have recently built what we call an 'Alternative Reality' system, with the goal of manipulating our participants' actual surroundings with the highest possible degree of ecological validity.


Collaborations

  • Cybernetics Approach to Perception and Action
  • Cognition & Control in Human-Machine Systems
  • Jean-Pierre Bresciani - Université de Fribourg (Fribourg, Switzerland)
  • Moritz Diehl - Universität Freiburg (Freiburg, Germany)
  • Heiko Hecht - Johannes Gutenberg-Universität Mainz (Mainz, Germany)
  • Max Mulder - Technische Universiteit Delft (Delft, the Netherlands)
  • Fabio Solari - Università Degli Studi Di Genova (Genoa, Italy)
  • Andreas Zell - Universität Tübingen (Tübingen, Germany)
  • Hasan Ayaz - Drexel University (Philadelphia, PA, USA)