Research Facilities

Virtual Reality Facilities

Foto: Gehirn&Geist/Manfred Zentsch
In the Department of Human Perception, Cognition and Action we study human perception with the help of virtual reality (VR). This enables us to conduct our experiments in controlled yet natural surroundings. For this purpose we use special hardware and experimental setups built to our specifications, as well as the corresponding software, such as program libraries and databases.
Taking into account the most recent developments in the field of VR and the opportunities they open up, the Cyberneum was built in 2004-2005. The research focuses on the interaction of the different senses, the influence of the spatial environment on behavior, and the interplay of perception and action. Two separate halls, the Tracking Lab and the Robo Lab, each 15 x 12 meters in size, form the main area of the research building. In the Tracking Lab, experiments on the perception of space and on navigation performance are carried out: participants can move around freely in virtual worlds, which are presented through head-mounted displays (HMDs). The Robo Lab houses the world's first motion simulator based on a standard industrial robot, modified for perception experiments. With far more maneuvering room than conventional simulators, the motion simulator allows a more detailed investigation of the influence of our sense of equilibrium on the perception of motion.

Why do we conduct our research in virtual reality?
Research in virtual reality (VR) makes it easier for us to maintain controllable and reproducible test surroundings. Real surroundings look quite different depending on the weather or the time of day; in VR, all these conditions can be kept constant for an experiment, so every participant sees precisely the same space or scene. At the same time, these conditions can be modified deliberately whenever that is important for an experiment. VR also makes it possible to carry out experiments that could not be done in the real world at all, or only with great effort and at a single location.
 
Here you can find some videos of our research in VR:
http://www.youtube.com/user/MPIVideosProject

CableRobot Simulator

The novel, in-house developed CableRobot Simulator generates unique motion trajectories using, as its name suggests, a cable suspension system. This prototype will be used in perception and cognition research.
The robot is suspended on eight cables, each driven by a powerful motor; in total these motors deliver a remarkable 473 PS. The tensioned simulator cabin is constructed from carbon-fiber rods made especially for this purpose. The motors can be controlled arbitrarily, either to launch the cabin into a wild roller-coaster ride through the entire 5 x 8 x 5 m workspace or to perform movements so small that the passenger cannot even perceive them. Its large workspace and dynamic capabilities make the simulator suitable for a wide spectrum of VR applications, including driving and flight simulation as well as the investigation of basic perceptual processes in humans.

CyberMotion Simulator

MPI CyberMotion Simulator with cabin
open cabin
Sustained accelerations are usually simulated using motion cueing algorithms that involve washout filters. With these algorithms, a fraction of the gravity vector generates the sensation of acceleration through an unperceived backward tilt of the cabin. A different solution for simulating sustained accelerations relies on centrifugal force: if a subject is rotated continuously, the semicircular canal system adapts and the centrifugal force generates the perception of an ongoing acceleration. Both solutions require the subject to be seated in a closed cabin, because the visual system would otherwise destroy the illusion.
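As an illustration of the tilt-coordination idea described above, the short sketch below computes the cabin tilt angle whose gravity component mimics a desired sustained acceleration, and how long the tilt takes when applied below an assumed vestibular detection threshold. The threshold and example values are illustrative, not parameters of the simulator.

import math

G = 9.81  # gravitational acceleration, m/s^2

def tilt_angle_for_acceleration(a_x, g=G):
    """Cabin pitch angle (rad) whose gravity component mimics a sustained
    longitudinal specific force a_x (m/s^2); only meaningful for |a_x| <= g."""
    return math.asin(max(-1.0, min(1.0, a_x / g)))

def tilt_duration(a_x, tilt_rate_limit_deg_s=3.0):
    """Time (s) to reach the required tilt without exceeding an assumed
    sub-threshold tilt rate (the rate limit is an illustrative value)."""
    theta_deg = math.degrees(tilt_angle_for_acceleration(a_x))
    return abs(theta_deg) / tilt_rate_limit_deg_s

if __name__ == "__main__":
    a = 2.0  # desired sustained acceleration, m/s^2 (example value)
    theta = tilt_angle_for_acceleration(a)
    print(f"tilt {math.degrees(theta):.1f} deg, reached in {tilt_duration(a):.1f} s")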

Together with a local company (BEC, Reutlingen), the MPI developed a closed cabin for the MPI CyberMotion Simulator. The cabin is equipped with a stereo projection system and mounting options for force-feedback haptic devices such as the Sensodrive steering wheel and the Wittenstein controls used for helicopter and flight simulation. To achieve continuous rotation around the base axis, the robot arm was modified by the manufacturer (KUKA, Germany) with a different transmission and further mechanical changes. Motor power and electrical control signals are transmitted to the robot by slip rings: an outer slip ring for the power lines and an inner slip ring for high-frequency signals.

In its standard configuration, the cabin is attached to the robot flange from behind. The subject then faces outwards, and constant deceleration can be simulated by rotating the robot around its base axis. The six axes of the robot do not allow the subject to be placed facing inwards towards the robot base, which would be required to simulate constant acceleration. To make this possible, the cabin was equipped with an actuated seventh axis: a C-shaped flange, along which the cabin axis can slide, allows the robot to be steered into a position in which the cabin is attached to the robot flange from below. In this configuration, turning the last robot axis places the subject towards the center of rotation, which in turn makes it possible to simulate constant acceleration.

The robot uses its standard six-axis controller, while the seventh axis has a separate controller developed in-house. To achieve synchronized operation of the full seven-axis system, a combined control system was developed. This control system also monitors and supervises all safety devices and offers both manual and automated control of the MPI CyberMotion Simulator.
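For the inward-facing configuration described above, the required rotation rate follows from the centripetal relation a = omega^2 * r. The sketch below computes that rate for a hypothetical cabin radius; the numbers are examples, not the simulator's actual geometry.

import math

def rotation_rate_for_acceleration(a, radius):
    """Angular rate (rad/s) at which a subject seated at `radius` (m) from the
    rotation axis experiences a centripetal acceleration of `a` (m/s^2):
    a = omega^2 * r  =>  omega = sqrt(a / r)."""
    return math.sqrt(a / radius)

if __name__ == "__main__":
    # Hypothetical values: 1 g sustained acceleration at a 2 m cabin radius.
    omega = rotation_rate_for_acceleration(9.81, 2.0)
    print(f"{omega:.2f} rad/s  ({math.degrees(omega):.1f} deg/s, "
          f"{omega / (2 * math.pi) * 60:.1f} rpm)")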
 

Stewart Motion Platform (MotionLab)

Reconstruction of the MotionLab
Foto: GEHIRN&GEIST/Manfred Zentsch
In the MotionLab we study the interplay between the visual, auditory, vestibular and neuromuscular systems. The centerpiece of the MotionLab is a hexapod Stewart platform (Cuesim) with six degrees of freedom. Mounted on the platform is a cabin with two interchangeable screens, one flat and one curved, both with a field of view of 86°×63°. The projector has a resolution of 1400×1050 pixels at a refresh rate of 60 Hz. Beneath the seat and foot plate are subwoofers that can be used to simulate high-frequency vibrations in driving and flight simulations and to mask the vibrations caused by the platform's electric motors. The MotionLab was designed as a distributed system driven by multiple computers. Software control is based on an in-house development (xDevL) that can be used with both C++ and Virtools programs. The cabin is equipped with 5 OptiTrack cameras (NaturalPoint, Inc.) that allow the operator's body or limb motion to be tracked and captured.
 
The characteristics of the Stewart platform have been objectively measured with a standardized approach. It was found that the dynamic response of the platform is determined by the platform filters implemented by the manufacturer, the system time delay, and the noise characteristics of the actuators. These characteristics have been modeled and simulated on the SIMONA Research Simulator at Delft University of Technology. Experiments on the influence of these characteristics on human control behavior have shown that the limitations of the platform filters cause humans to rely predominantly on visual cues in a closed-loop target-following disturbance-rejection control task.
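The platform-filter and time-delay limitations mentioned above can be made concrete with a small frequency-response calculation. The sketch below evaluates a generic second-order high-pass washout filter cascaded with a pure time delay; the filter parameters and the delay are placeholder values, not the measured characteristics of this platform.

import cmath, math

def washout_response(omega, omega_n=1.0, zeta=0.7, delay=0.03):
    """Complex frequency response at angular frequency `omega` (rad/s) of a
    second-order high-pass washout filter cascaded with a pure time delay.
    Parameter values are illustrative placeholders."""
    s = 1j * omega
    high_pass = s**2 / (s**2 + 2 * zeta * omega_n * s + omega_n**2)
    return high_pass * cmath.exp(-s * delay)

if __name__ == "__main__":
    for f in (0.1, 0.5, 1.0, 2.0):  # frequency in Hz
        h = washout_response(2 * math.pi * f)
        print(f"{f:4.1f} Hz  gain {abs(h):5.2f}  "
              f"phase {math.degrees(cmath.phase(h)):7.1f} deg")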
 
Furthermore, the MotionLab was used to study the effects of motion disturbances on the use of touch interfaces. When using a touch interface in a moving vehicle, the vehicle motion may induce involuntary limb motions that decrease manual control precision. The interference of motion disturbances with the use of touch interfaces was quantified experimentally. Participants performed a simple reach-and-touch task to interact with an iPad touch interface (Apple Inc.) while being subjected to lateral and vertical motion disturbances. During the experiment, the motion disturbance, touch location, and trajectory of the hand were recorded. The goal of the study was to gain insight into the relationship between motion disturbances and touch errors.
 
The MotionLab is currently being upgraded with a state-of-the-art hexapod (Bosch Rexroth BV). Amongst other features, this hexapod has superior actuator technology, resulting in higher platform stiffness and increased motion smoothness.

Fixed-base flight simulator (HeliLab)

Multi-Panel Display
Schematic Overview of the System
The HeliLab is a fixed-base flight simulator that affords a large field of view (105°x100°). It is equipped to measure both explicit and implicit behavioral responses: control stick inputs on the one hand, and eye-tracking and physiological measures on the other. Thus, we are able to study the relationship between a pilot's actions and cognitive workload during flight maneuvers.
 
The core system is an open-source flight simulator (FlightGear, www.flightgear.org) that accepts control inputs, which are processed by a dedicated aircraft model to compute the position and orientation of the modelled aircraft in the world. These values are then used to render the corresponding view of the world scene as seen from the cockpit, which is distributed by a computing cluster across 10 wide-screen monitors.
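As a rough illustration of how control inputs can be fed into FlightGear programmatically, the sketch below sets control properties over FlightGear's built-in telnet property server (enabled, for example, with --telnet=5401). The property paths are standard FlightGear property-tree locations; the actual HeliLab interface between the controls and the simulator may differ.

import socket

# Minimal sketch: inject control inputs via FlightGear's telnet property
# server.  Host, port and property paths are assumptions for illustration.
HOST, PORT = "localhost", 5401

def send_command(sock, line):
    sock.sendall((line + "\r\n").encode("ascii"))

def set_control(sock, prop, value):
    send_command(sock, f"set {prop} {value}")

if __name__ == "__main__":
    with socket.create_connection((HOST, PORT)) as sock:
        set_control(sock, "/controls/flight/aileron", 0.1)
        set_control(sock, "/controls/flight/elevator", -0.05)
        set_control(sock, "/controls/engines/engine/throttle", 0.8)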

Our system is equipped to record implicit behavioral responses. A remote eye-tracking system (2 stereo heads, 60 Hz; Facelab, Seeing Machines, USA) monitors the pilot's line of sight in the world scene as well as gaze on the head-down instrument panel. Physiological measurements of the pilot are recorded in tandem using a 16-channel active electrode system (g.Tec Medical Engineering GmbH, Austria). This system can be used to monitor the pilot's galvanic skin response, heart-rate variability and electroencephalographic signals.
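Heart-rate variability can be summarized, for example, by the RMSSD of successive R-R intervals. The sketch below shows this calculation on synthetic data; it illustrates the measure itself rather than the specific analysis pipeline used in the lab.

import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of R-R intervals (ms), a
    common time-domain heart-rate-variability measure."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

if __name__ == "__main__":
    rr = [812, 790, 805, 798, 820, 815, 801]  # synthetic example data, ms
    print(f"RMSSD = {rmssd(rr):.1f} ms")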

There are two control systems for the flight simulator, both featuring generic helicopter controls such as a cyclic stick, a collective, and pedals. One system is unactuated and behaves like a conventional joystick, while the other consists of motorized controls (Wittenstein AG, Germany). This actuated system can be configured to reproduce a wide range of control-system dynamics and can provide haptic feedback cues to the pilot. These cues can be used to support the pilot's situational awareness.
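One common way such motorized controls render programmable dynamics is to command a force from a mass-spring-damper model of the stick. The sketch below is a minimal illustration of that idea; the parameter values are placeholders and not the Wittenstein configuration.

# Illustrative control-loading model: the commanded force pulls the stick
# back to trim (spring) and resists motion (damper).  Values are placeholders.
def control_loading_force(x, v, k=2.0, c=0.5, x_trim=0.0):
    """Force (N) for stick deflection x (rad) and deflection rate v (rad/s)."""
    return -k * (x - x_trim) - c * v

def simulate(steps=5, dt=0.01, m=0.2):
    """Tiny forward-Euler simulation of the stick released from a deflection."""
    x, v = 0.3, 0.0
    for _ in range(steps):
        f = control_loading_force(x, v)
        v += (f / m) * dt
        x += v * dt
        print(f"x = {x:.3f} rad, v = {v:.3f} rad/s, f = {f:.3f} N")

if __name__ == "__main__":
    simulate()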

The image here shows a schematic overview of the system (left). The information received by the user (red) and the user's control and physiological responses (blue) form part of this closed-loop system. A photo of the simulator in use (right) shows a participant performing a closed-loop control task (a landing approach) while gaze is measured in real time (inset).

Control Loading Lab

In the Control Loading Lab, we perform experimental evaluations to understand human behaviour in manual control tasks and to investigate novel approaches to human-machine interfaces. For this purpose, we use a fixed-base simulator with a control-loaded sidestick, cyclic, collective and pedals from Wittenstein GmbH, Germany. These devices can simulate control dynamics with high accuracy over a large frequency range and can be used to provide haptic feedback cues to the participant. The input devices are combined with a VIEWPixx display from VPixx Technologies, Canada, which can present stimuli at 120 Hz with accurate timing characteristics. This makes the lab an optimal environment for human-in-the-loop experiments.

Quarter-sphere Large Screen Projection (PanoLab)

Since 1997, we have employed a large-screen, half-cylindrical virtual reality projection system to study human perception. Studies in a variety of areas have been carried out, including spatial cognition and the perceptual control of action. In 2005, we made a number of fundamental improvements to the virtual reality system. Perhaps the most noticeable change is the alteration of the screen size and geometry: the screen was extended horizontally (from 180 to 230 degrees) and a floor screen and projector were added. The projection screen curves smoothly from the wall projection to the floor projection, resulting in an overall screen geometry that can be described as a quarter-sphere. Vertically, the screen subtends 125 degrees (25 degrees of visual angle upwards and 100 degrees downwards from the normal observation position).

In 2011 the image generation and projection setup was significantly updated. The existing four JVC SX21 D-ILA projectors (1400x1050) and curved mirrors were replaced with six EYEVIS LED DLP projectors (1920x1200), thereby simplifying the projection setup and increasing the overall resolution. To compensate for the visual distortions caused by the curved projection screen, and to achieve soft-edge blending for seamless overlap areas, we have developed a flexible warping solution using the warp and blend features of the NVIDIA Quadro chipsets. This solution gives us the flexibility of a hardware-based warping solution combined with the accuracy of software-based warping. The calibration data needed for the image warping and blending stages are generated by a camera-based projector auto-calibration system (DOMEPROJECTION.COM). Image generation is handled by a high-end render cluster consisting of six client image-generation PCs and one master PC. To avoid tearing artifacts in the multi-projector setup, the rendering computers use frame-synchronized graphics cards to synchronize the projected images.
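The principle behind the warping and blending stage can be sketched in a few lines: each output pixel samples the source image at a position taken from a calibration-derived warp map and is attenuated by a blend mask in projector overlap regions. The toy example below uses nearest-neighbor sampling on tiny arrays; the real system performs this per vertex or directly in the graphics hardware.

# Toy warp-and-blend: out[y][x] = src[warp_map[y][x]] * blend_mask[y][x]
def warp_and_blend(src, warp_map, blend_mask):
    h, w = len(warp_map), len(warp_map[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = warp_map[y][x]  # where to sample the source image
            sx = min(max(int(round(sx)), 0), len(src[0]) - 1)
            sy = min(max(int(round(sy)), 0), len(src) - 1)
            out[y][x] = src[sy][sx] * blend_mask[y][x]
    return out

if __name__ == "__main__":
    src = [[1.0, 2.0], [3.0, 4.0]]               # 2x2 "image"
    warp = [[(0, 0), (1, 0)], [(0, 1), (1, 1)]]  # identity warp
    blend = [[1.0, 0.5], [1.0, 0.5]]             # attenuate the overlap edge
    print(warp_and_blend(src, warp, blend))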

In addition to improving the visual aspects of the system, we increased the quality, number, and type of input devices. Participants in the experiments can, for example, interact with the virtual environment via actuated Wittenstein helicopter controls, joysticks, a space mouse, steering wheels, a go-kart, or a virtual bicycle (VRBike). Furthermore, a Razer Hydra 6-DOF controller can be used for wand navigation and small-volume tracking. Some of the input devices offer force feedback: with the VRBike, for example, one can actively pedal and steer through the virtual environment, and the virtual inertia and incline are reflected in the pedals' resistance.

Back-Projection Large Screen Display (BackproLab)

The quarter-sphere projection setup is complemented by a back-projection setup, which has the advantage that participants do not cast shadows on the screen. This setup consists of a single SXGA+ projector (Christie Mirage S+3K DLP) and a large, flat screen (2.2 m wide by 2 m high). The projector has a high contrast ratio of 1500:1 and can be used for mono or active stereo projection. This space previously used four Vicon V-series cameras for motion tracking; these were replaced in 2014 with a SMARTTRACK system from Advanced Realtime Tracking (ART), which can track up to four rigid-body objects and directly outputs the calculated positions and orientations via a UDP network stream.
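As an example of consuming such a pose stream, the sketch below listens on a UDP port and parses rigid-body ("6d") lines. The line format follows the general structure of ART's DTrack ASCII output (id and quality, position in millimeters, rotation matrix), but the exact field layout and the port number are assumptions that should be checked against the DTrack documentation.

import re, socket

def parse_6d_line(line):
    """Parse rigid-body blocks of the assumed form [id qu][x y z a b c][r0..r8]."""
    bodies = []
    for block in re.findall(r"\[([^\]]*)\]\[([^\]]*)\]\[([^\]]*)\]", line):
        meta, pose, rot = (list(map(float, part.split())) for part in block)
        bodies.append({
            "id": int(meta[0]),
            "position_mm": pose[0:3],
            "rotation": rot,  # 3x3 rotation matrix, row-major (assumed layout)
        })
    return bodies

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 5000))  # port is a placeholder
    data, _ = sock.recvfrom(65535)
    for line in data.decode("ascii", "ignore").splitlines():
        if line.startswith("6d "):
            print(parse_6d_line(line))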


For stereo projection, NVIDIA 3DVision Pro active shutter glasses are used. These glasses use RF technology for synchronization and can therefore be used in conjunction with the infrared-based optical tracking system. The glasses have been fitted with markers for the optical tracking system and can thus be used for head tracking.

Large Tracking Hall (TrackingLab)

TrackingLab Foto: GEHIRN&GEIST/Manfred Zentsch
The free-space walking and tracking laboratory in the Cyberneum is a large (12.7 m x 11.9 m x 6.9 m) empty space equipped with 26 high-speed motion-capture cameras from Vicon. In December 2014, the existing setup of 16 Vicon MX13 cameras (1.2 megapixels) was expanded with 10 Vicon T160 cameras (16 megapixels) to double the tracking volume for aerial robotics research and to improve head-tracking quality for immersive VR research. This tracking system allows us to capture the motions of one or more persons in real time by processing the images of configurations of multiple infrared-reflective markers. The calculated position and orientation information can be transmitted wirelessly to a high-end mobile graphics system carried in a backpack, which updates the simulated virtual environment according to the person's position and generates a correct egocentric visualization and/or auditory simulation.

Participants can navigate freely within the entire area of the tracking hall, either wearing the backpack themselves or having the experimenter wear it and follow them around. To suppress interference between the real and simulated environments as far as possible, the laboratory is completely dark (black, with the ability to block out all light), and acoustic panels on the walls largely reduce reverberation. The tracking setup also allows tracking of multiple objects such as flying quadcopters, as well as full-body motion capture (e.g. for the analysis of sports performance, such as gymnastics, or for the animation of virtual characters).
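At the core of the egocentric visualization is a transform that expresses the world-fixed scene in the tracked head frame. The sketch below shows this for position and yaw only; a real pipeline uses the full six-degree-of-freedom pose and the graphics API's view matrix.

import math

def world_to_head(point, head_pos, head_yaw_rad):
    """Express a world-space point in the head frame: translate by the tracked
    head position, then apply the inverse of the head yaw (rotation about +y)."""
    dx, dy, dz = (p - h for p, h in zip(point, head_pos))
    c, s = math.cos(head_yaw_rad), math.sin(head_yaw_rad)
    return (c * dx - s * dz, dy, s * dx + c * dz)

if __name__ == "__main__":
    landmark = (2.0, 1.5, 5.0)  # a fixed object in the hall, meters (example)
    print(world_to_head(landmark, head_pos=(1.0, 1.7, 0.0),
                        head_yaw_rad=math.radians(30)))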

Face and Motion Capture Facilities

For recordings of faces and of human activities in general we have two facilities. The VideoLab was designed for recording human activities from several viewpoints; its cameras record precisely synchronized color videos. It has been used to create a database of facial expressions and action units and is employed in research on the recognition of facial expressions as well as in investigations of visual-auditory interactions in communication.
In the ScanLab, several commercial 3D scanning systems are used to capture the shape and color information of faces.
The data are used to produce stimuli for psychophysical experiments and to build models that support machine learning and computer vision algorithms.

VideoLab

The VideoLab has been in use since May 2002. It was designed for recording human activities from several viewpoints. It currently consists of 5 Basler CCD cameras that can record precisely synchronized color videos. In addition, the system is equipped for synchronized audio recording. The whole system was built from off-the-shelf components and was designed for maximum flexibility and extensibility. The VideoLab has been used to create several databases of facial expressions and action units, and was instrumental in several projects on the recognition of facial expressions and gestures across viewpoints, multi-modal action learning and recognition, as well as in investigations of visual-auditory interactions in communication.
One of the major challenges in creating the VideoLab was the choice of hardware components allowing for on-line, synchronized recording of uncompressed video streams. Each of the 5 Basler A302bc digital video cameras produces up to 26 MB per second (782x582 pixels at 60 frames per second), data that is continuously written to dedicated hard disks via CameraLink interfaces. Currently, the computers have a striped RAID-0 configuration to maximize writing speed. The computers are connected and controlled via a standard Ethernet LAN; synchronization at the microsecond level is achieved using external hardware triggering.
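The quoted per-camera data rate can be checked with a back-of-the-envelope calculation, assuming raw frames with one byte per pixel (a raw Bayer stream; this assumption is ours, not a documented detail):

width, height, fps, bytes_per_pixel = 782, 582, 60, 1  # assumed raw Bayer frames
bytes_per_second = width * height * fps * bytes_per_pixel
print(f"{bytes_per_second / 2**20:.1f} MiB/s per camera")       # about 26 MiB/s
print(f"{5 * bytes_per_second / 2**20:.0f} MiB/s for all five cameras")
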
For precise control of multi-camera recordings, we developed our own distributed recording software. In addition to the frame grabber drivers, which provide the basic recording functionality on a customized Linux operating system, we programmed a collection of distributed, multi-threaded real-time C programs that handle control of the hardware as well as buffering and writing of the video and audio data to the hard disks. All software components communicate with each other via a standard Ethernet LAN. On top of this low-level control software, we have implemented a graphical user interface that provides access to the full functionality of the VideoLab, using Matlab and the open-source PsychToolBox-3 software as a framework.
This image shows an example recording from the VideoLab (a facial expression recorded from 6 synchronized viewpoints).

ScanLab

Left: ABW scanner setup. Right: Sample data of the ABW structured light scanner

Several commercial 3D scanning systems are used for capturing shape and color information of faces. The data is used for producing stimuli for psychophysical experiments and for building models to support machine learning and computer vision algorithms:

Cyberware Head Scanner
This scanner (Cyberware, Inc., USA) uses laser triangulation for recording 3D data and a line sensor for capturing color information, both producing 512 x 512 data points (typical depth resolution 0.1 mm) and covering 360° of the head in a cylindrical projection within 20 s. It was used extensively to build the MPI Head Database (http://faces.kyb.tuebingen.mpg.de/).
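The cylindrical projection means that each sample is a radius at a given height and azimuth; converting such a range map to Cartesian points is straightforward, as the sketch below illustrates with a toy grid in place of the 512 x 512 scan. Grid size and axis conventions are illustrative.

import math

def cylindrical_to_cartesian(range_map, height_step_mm=1.0):
    """range_map[row][col] is the radius (mm) at height row*height_step_mm and
    azimuth col/num_cols * 360 degrees around the head's vertical axis."""
    points = []
    rows, cols = len(range_map), len(range_map[0])
    for row in range(rows):
        y = row * height_step_mm
        for col in range(cols):
            theta = 2 * math.pi * col / cols
            r = range_map[row][col]
            points.append((r * math.cos(theta), y, r * math.sin(theta)))
    return points

if __name__ == "__main__":
    toy = [[100.0] * 8, [95.0] * 8]  # 2 x 8 toy grid instead of 512 x 512
    print(len(cylindrical_to_cartesian(toy)), "points")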

ABW Structured Light Scanner
This is a customized version of an industrial scanning system (ABW GmbH, Germany), modified for use as a dedicated face scanner. It consists of two LCD line projectors, three video cameras and three DSLR cameras. Using structured light and a calibrated camera/projector setup, 3D data can be calculated from the video images by triangulation. Covering a face from ear to ear, the system produces up to 900,000 3D points and 18 megapixels of color information. One recording takes about 2 s, making this scanner much more suitable for recording facial expressions. It was used extensively for building a facial expression model and for collecting static FACS data.
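The triangulation step can be illustrated in two dimensions: the camera pixel defines one ray, the projected stripe another, and the surface point is their intersection. The geometry and angles in the sketch below are toy values, not the calibration of this scanner.

import math

def triangulate(baseline, cam_angle, proj_angle):
    """Camera at (0, 0), projector at (baseline, 0); angles are measured from
    the baseline towards the scene (radians).  Returns the intersection point
    (x, z) of the camera ray and the projector ray."""
    # camera ray:    (t*cos(cam_angle),             t*sin(cam_angle))
    # projector ray: (baseline - s*cos(proj_angle), s*sin(proj_angle))
    t = baseline * math.sin(proj_angle) / math.sin(cam_angle + proj_angle)
    return (t * math.cos(cam_angle), t * math.sin(cam_angle))

if __name__ == "__main__":
    x, z = triangulate(baseline=0.4,
                       cam_angle=math.radians(80),
                       proj_angle=math.radians(70))
    print(f"surface point at x = {x:.3f} m, depth z = {z:.3f} m")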

ABW Dynamic Scanner
Using the same principle of structured light, this system uses a high-speed stripe pattern projector, two high-speed video cameras and a color camera synchronized to strobe illumination. It can currently perform 40 3D measurements/s (with color information), producing detailed face scans over time. Ten seconds of recording time produce 2 GB of raw data.

3dMD Speckle Pattern Scanner
This turn-key system (3dMD Ltd, UK) is mainly designed for medical purposes. Using four video cameras in combination with infrared speckle pattern flashes and two color cameras in sync with photography flashes, it can capture a face from ear to ear in 2 ms, making it highly suitable for recording infants and children. This system is used in collaboration with the University Clinic for Dentistry and Oral Medicine for studying infant growth.

Passive 4D Stereo Scanner
With three synchronized HD machine vision cameras (two grayscale, one color), this system (Dimensional Imaging, UK) is capable of reconstructing high-quality dense stereo data of moving faces at a rate of 25 frames/s. Since it is a passive system, high-quality studio lighting can be used to illuminate the subject which results in better color data as well as more subject comfort, compared to the ABW Dynamic Scanner. While care must be taken with focus, exposure and calibration of the cameras, the system imposes fewer limitations on rigid head motion.
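Dense stereo reconstruction ultimately rests on the standard disparity-to-depth relation Z = f * B / d (focal length in pixels times baseline, divided by disparity). The values in the sketch below are illustrative, not this system's calibration.

def disparity_to_depth(disparity_px, focal_px=2000.0, baseline_m=0.2):
    """Depth (m) from disparity (pixels), focal length (pixels) and baseline (m)."""
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    for d in (400.0, 500.0, 600.0):
        print(f"disparity {d:.0f} px  ->  depth {disparity_to_depth(d):.3f} m")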

 
Last updated: Wednesday, 01.06.2016