Laboratory Facilities

Virtual Reality Laboratory Facilities

Photo: Gehirn&Geist/Manfred Zentsch
In the Department of Perception, Cognition and Action, we study human perception using methods from virtual reality (VR). This allows us to run experiments in a controlled yet natural environment. For this purpose, we have dedicated hardware and experimental setups, developed and built to our specifications, along with the accompanying software such as program libraries and databases.
To keep pace with recent developments in VR technology and to make the best possible use of the resulting opportunities, the Cyberneum was built in 2004 and 2005. Its central research questions concern the interplay of the different senses, the role of the spatial environment in behavior, and the interaction of perception and action. Two separate halls, the TrackingLab and the RoboLab, each measuring 15 × 12 m, form the main part of the research building. In the TrackingLab, experiments on spatial perception and navigation performance are conducted in which participants move freely and naturally through virtual worlds. The virtual environment is presented via so-called head-mounted displays (HMDs). The RoboLab houses the world's first motion simulator based on a standard industrial robot, modified specifically for perception experiments. With a far larger range of motion than conventional simulators, it allows a detailed investigation of the influence of our vestibular system on the perception of motion.

Why do we conduct research in virtual reality?
Research in VR makes it easier for us to create controllable and reproducible experimental environments. Real environments look quite different depending on the weather or the time of day. In VR, all of these properties can be held constant for an experiment, so every participant sees exactly the same room or scene. If it matters for the experiment, these properties can also be varied deliberately and in a controlled way. Sometimes we even run experiments that would be impossible in reality, or whose conditions could be realized at a single location only with great effort.
 
Here you can find some videos about our research in VR:
http://www.youtube.com/user/MPIVideosProject

The CableRobot Simulator

CableRobot Simulator
The novel CableRobot Simulator developed here at the institute, a motion simulator suspended on cables, enables flexible and highly realistic motion sequences. The prototype will be used in future perception and cognition research.
The robot hangs from eight steel cables in a hall, each cable driven by a powerful motor; together the motors deliver an impressive 473 horsepower. There is currently no comparable setup anywhere in the world: the cable-braced gondola is built from carbon-fiber rods manufactured specifically for this purpose. The gondola can be steered at will: it can launch into a wild roller-coaster ride exploiting the full 5 × 8 × 5 m workspace, or perform motions that the passenger does not notice at all.
Thanks to its design, the cable robot covers a broad range of applications: from highly dynamic motions in racing or helicopter-flight simulations to movements at the threshold of human perception, as used in scientific experiments. The cable suspension lets it move to arbitrary points in the workspace and thus reproduce motion sequences realistically, such as those of a car ride.

CyberMotion Simulator

MPI CyberMotion Simulator with cabin
open cabin
Sustained accelerations are usually simulated with motion cueing algorithms that involve washout filters. With these algorithms, a fraction of the gravity vector generates the sensation of acceleration through an unperceived backward tilt of the cabin. A different way to simulate sustained accelerations uses centrifugal force: if a subject is rotated continuously, the canal system adapts and the centrifugal force generates the perception of an ongoing acceleration. Both solutions require the subject to be seated in a closed cabin, because visual cues would otherwise destroy the illusion.
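To make both mechanisms concrete, here is a minimal numerical sketch: the backward tilt needed to fake a given sustained acceleration with a gravity component, and the rotation rate a centrifuge would need for the same acceleration. All numbers (acceleration, vestibular rate threshold, cabin radius) are illustrative assumptions, not the simulator's actual parameters.

```cpp
// Minimal sketch of the two approaches described above. All parameter
// values are illustrative assumptions, not the simulator's settings.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const double g = 9.81;                   // gravitational acceleration [m/s^2]
    const double a = 2.0;                    // desired sustained acceleration [m/s^2]

    // Tilt coordination: tilt the cabin so a gravity component of
    // g * sin(theta) acts along the subject's body axis.
    double theta = std::asin(a / g);         // required tilt angle [rad]
    // Build up the tilt below an assumed vestibular rate threshold of
    // ~3 deg/s so the rotation itself goes unnoticed.
    double rateLimit = 3.0 * PI / 180.0;     // [rad/s]
    std::printf("tilt %.1f deg, reached unnoticed in %.1f s\n",
                theta * 180.0 / PI, theta / rateLimit);

    // Centrifuge: once the canal response has adapted, constant rotation
    // at rate omega on radius r yields sustained acceleration a = omega^2 * r.
    double r = 2.0;                          // assumed cabin radius [m]
    double omega = std::sqrt(a / r);         // required rotation rate [rad/s]
    std::printf("rotation %.2f rad/s (%.0f deg/s)\n", omega, omega * 180.0 / PI);
    return 0;
}
```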
 
Together with a local company (BEC, Reutlingen), the MPI developed a closed cabin for the MPI CyberMotion Simulator. The cabin is equipped with a stereo projection and mounting points for force-feedback haptic devices such as the Sensodrive steering wheel and the Wittenstein controls used for helicopter and flight simulation.

To achieve continuous rotation around the base axis, the robot arm was modified by the manufacturer (KUKA Roboter GmbH, Germany) with a different transmission and further mechanical changes. Motor power and electrical control signals are transmitted to the robot through slip rings: an outer slip ring for power lines and an inner slip ring for high-frequency signals.

In its standard configuration, the cabin is attached to the robot flange from behind. The subject then faces outwards, and constant deceleration can be simulated by rotating the robot around its base axis. The six axes of the robot alone do not allow the subject to be placed facing inwards, towards the robot base, as required to simulate constant acceleration. To make this possible, the cabin was equipped with an actuated seventh axis: a C-shaped flange, along which the cabin axis can slide, allows the robot to be steered into a position in which the cabin is attached to the flange from below. In this configuration, turning the last robot axis places the subject towards the center of rotation, which in turn makes it possible to simulate constant acceleration.

The robot retains its standard six-axis controller, while the seventh axis is driven by a separate controller developed in-house. To achieve synchronized operation of the full seven-axis system, a combined control system was developed. It also monitors and supervises all safety devices and offers both manual and automated control of the MPI CyberMotion Simulator.

MotionLab

Reconstruction of the MotionLab
Photo: GEHIRN&GEIST/Manfred Zentsch

In the MotionLab we study the integration of cues from the visual, auditory, vestibular and somatosensory systems. The centerpiece of the MotionLab is a hexapod Stewart platform (Cuesim) with six degrees of freedom. Mounted on the platform is a cabin with two interchangeable screens, one flat and one curved, both with a field of view of 86°×63°. The projector has a resolution of 1400×1050 at a refresh rate of 60 Hz. Beneath the seat and foot plate are subwoofers that can be used to simulate high-frequency vibrations in driving and flight simulators and to mask the vibrations caused by the platform's electric motors. The MotionLab was designed as a distributed system driven by multiple computers. Software control is based on an in-house development (xDevL) that can be used with both C++ and Virtools™ programs.

The characteristics of the Stewart platform have been objectively measured with a standardized approach. It was found that the dynamic response of the platform is determined by the platform filters implemented by the manufacturer, the system time delay, and the noise characteristics of the actuators. These characteristics have been modeled and simulated on the SIMONA Research Simulator at Delft University of Technology. Experiments on the influence of these characteristics on human control behavior have shown that the limitations of the platform filters cause humans to rely predominantly on visual cues in a closed-loop target-following disturbance-rejection control task.
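As an illustration of the washout idea discussed above, the sketch below implements a generic first-order high-pass washout filter: acceleration onsets pass through to the platform, while sustained components are washed out so the actuators return to neutral within their limited stroke. This is a textbook form with assumed time constants, not the manufacturer's filter examined in the study.

```cpp
// Generic first-order high-pass washout filter (illustrative sketch).
#include <cstdio>

int main() {
    const double dt = 1.0 / 100.0;   // simulation step [s], assumed
    const double tau = 2.0;          // washout time constant [s], assumed
    const double alpha = tau / (tau + dt);

    double y = 0.0, uPrev = 0.0;
    for (int k = 0; k < 500; ++k) {
        double t = k * dt;
        double u = (t >= 0.5) ? 2.0 : 0.0;   // step input: sustained 2 m/s^2
        y = alpha * (y + u - uPrev);         // discrete high-pass update
        uPrev = u;
        if (k % 100 == 0)
            std::printf("t=%.1f s  platform accel=%.3f m/s^2\n", t, y);
    }
    return 0;
}
```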

Multi-Panel Display

Multi-Panel Display
Schematic Overview of the System

Heli-Lab is a fixed-base flight simulator that affords a large field of view (105°×100°). It is equipped to measure explicit and implicit behavioral responses: control stick inputs on the one hand, and eye-tracking and physiological measures on the other. Thus, we are able to study the relationship between a pilot's actions and their cognitive workload during flight maneuvers.

The core system is an open-source flight simulator (FlightGear, www.flightgear.org) that accepts control inputs and processes them with a designated aircraft model to compute the world position and orientation of the modelled aircraft. These values are then used to render the corresponding view of the world scene as seen from the cockpit, via a computing cluster driving 10 wide-screen monitors.
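As a sketch of how such control inputs can be fed in: FlightGear accepts external input over its generic UDP protocol. The snippet below sends one comma-separated control packet; the protocol name (mpi-ctrls), port, and field layout are hypothetical and would have to match a corresponding protocol XML file.

```cpp
// Sketch: feeding control inputs to FlightGear over its generic UDP
// protocol. Assumes FlightGear was started with something like
//   fgfs --generic=socket,in,60,localhost,5500,udp,mpi-ctrls
// where mpi-ctrls.xml (hypothetical name) maps three comma-separated
// fields to aileron, elevator and throttle properties.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5500);                      // port chosen in the fgfs call
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    double aileron = 0.1, elevator = -0.05, throttle = 0.7;  // example stick state
    char packet[64];
    int n = std::snprintf(packet, sizeof(packet), "%.3f,%.3f,%.3f\n",
                          aileron, elevator, throttle);
    sendto(sock, packet, n, 0, reinterpret_cast<sockaddr*>(&dst), sizeof(dst));
    close(sock);
    return 0;
}
```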

Our system is equipped to record implicit behavioral responses. A remote eye-tracking system (2 stereo heads, 60 Hz; faceLAB, Seeing Machines, USA) monitors the pilot's line of sight in the world scene as well as gaze on the head-down instrument panel. Physiological measurements of the pilot are recorded in tandem using a 16-channel active electrode system (g.tec medical engineering GmbH, Austria), which can monitor the pilot's galvanic skin response, heart-rate variability and electroencephalographic signals.
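For example, one widely used heart-rate-variability measure, RMSSD, can be computed from the inter-beat (RR) intervals extracted from an ECG channel. The sketch below uses made-up interval values; it illustrates the measure itself, not the lab's actual analysis pipeline.

```cpp
// Sketch: RMSSD (root mean square of successive differences), a common
// heart-rate-variability measure, computed from RR intervals.
#include <cmath>
#include <cstdio>
#include <vector>

double rmssd(const std::vector<double>& rrMs) {
    double sum = 0.0;
    for (size_t i = 1; i < rrMs.size(); ++i) {
        double d = rrMs[i] - rrMs[i - 1];   // successive difference [ms]
        sum += d * d;
    }
    return std::sqrt(sum / (rrMs.size() - 1));
}

int main() {
    // Illustrative RR intervals [ms], not measured data.
    std::vector<double> rr = {812, 790, 831, 805, 798, 822};
    std::printf("RMSSD = %.1f ms\n", rmssd(rr));
    return 0;
}
```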


There are two control systems for the flight simulator, both featuring generic helicopter controls: a cyclic stick, a collective lever, and pedals. One system is unactuated and behaves like a common joystick, while the other consists of motorized controls (Wittenstein AG, Germany). The actuated system can be configured to resemble a wide range of control system dynamics and can provide haptic feedback cues to the pilot. These cues can be used to support the pilot's situational awareness.

The image here shows a schematic overview of the system (left). The information received by the user (red) and their control and physiological responses (blue) form part of this closed-loop system. A photo of the simulator in use (right) shows a participant performing a closed-loop control task (a landing approach) while gaze is measured in real time (inset).

Control Loading Lab

In the Control Loading Lab, we perform experiments to understand human behaviour in manual control tasks and to investigate novel approaches for human-machine interfaces. For this purpose, we use a fixed-base simulator with a control-loaded sidestick, cyclic, collective and pedals from Wittenstein GmbH, Germany. These devices can render control dynamics with high accuracy over a large frequency range and can provide haptic feedback cues to the participant. The input devices are combined with a VIEWPixx display from VPixx Technologies, Canada, which can present stimuli at 120 Hz with accurate timing characteristics. This lab therefore provides an optimal environment for human-in-the-loop experiments.
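As an illustration of what "rendering control dynamics" means here: a control-loaded device simulates a programmable mechanical model, classically a mass-spring-damper, at a high update rate. The sketch below simulates such stick dynamics with assumed inertia, damping, stiffness, and a 1 kHz loop; the actual device parameters differ.

```cpp
// Sketch: second-order (mass-spring-damper) control dynamics of the kind
// a control-loaded stick can render, integrated with semi-implicit Euler.
// All parameter values are assumptions for illustration.
#include <cstdio>

int main() {
    const double J = 0.02;          // stick inertia [kg m^2], assumed
    const double d = 0.3;           // damping [N m s/rad], assumed
    const double k = 2.0;           // centering stiffness [N m/rad], assumed
    const double dt = 1.0 / 1000.0; // 1 kHz haptic update rate, assumed

    double theta = 0.0, omega = 0.0; // deflection [rad] and rate [rad/s]
    for (int i = 0; i < 2000; ++i) {
        double t = i * dt;
        double tauPilot = (t < 0.5) ? 0.4 : 0.0;     // pilot torque pulse [N m]
        double tauTotal = tauPilot - d * omega - k * theta;
        omega += dt * tauTotal / J;                  // semi-implicit Euler step
        theta += dt * omega;
        if (i % 500 == 0)
            std::printf("t=%.2f s  deflection=%.3f rad\n", t, theta);
    }
    return 0;
}
```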

Semi-spherical Display

We have employed a large-screen, half-cylindrical virtual reality projection system to study human perception since 1997. Studies in a variety of areas have been carried out, including spatial cognition and the perceptual control of action. In 2005, we made a number of fundamental improvements to the virtual reality system. Perhaps the most noticeable change is an alteration of the screen size and geometry: the screen was extended horizontally (from 180 to 230 degrees), and a floor screen and projector were added. Notably, the projection screen curves smoothly from the wall projection to the floor projection, resulting in an overall screen geometry that can be described as a quarter-sphere. Vertically, the screen subtends 125 degrees (25 degrees of visual angle upwards and 100 degrees downwards from the normal observation position).

In 2011 the image generation and projection setup was significantly updated. The existing four JVC SX21 D-ILA projectors (1400×1050) and curved mirrors were replaced with six EYEVIS LED DLP projectors (1920×1200), thereby simplifying the projection setup and increasing the overall resolution. To compensate for the visual distortions caused by the curved projection screen, and to achieve soft-edge blending for seamless overlap areas, we developed a flexible warping solution using the warp and blend features of the NVIDIA Quadro chipsets. This gives us the flexibility of a hardware-based warping solution with the accuracy of a software-based one. The calibration data required for the image warping and blending stages is generated by a camera-based projector auto-calibration system (DOMEPROJECTION.COM). Image generation is handled by a high-end render cluster consisting of six client image-generation PCs and one master PC. To avoid tearing artifacts in the multi-projector setup, the rendering computers use frame-synchronized graphics cards to synchronize the projected images.
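As a small illustration of the soft-edge blending step: in an overlap region, each projector's contribution is attenuated with a ramp that is applied in linear light and then gamma-encoded, so that the two overlapping images sum to a uniform result on screen. The sketch below uses an assumed display gamma and overlap width.

```cpp
// Sketch: gamma-corrected soft-edge blend ramp across a projector overlap
// region, the basic idea behind seamless multi-projector blending.
#include <cmath>
#include <cstdio>

int main() {
    const double gamma = 2.2;   // assumed display gamma
    const int overlapPx = 8;    // illustrative overlap width in pixels

    for (int x = 0; x <= overlapPx; ++x) {
        double ramp = static_cast<double>(x) / overlapPx;  // 0..1 across overlap
        // Attenuate in linear light, then re-encode for the gamma-encoded
        // frame buffer; the opposing projector uses the mirrored ramp.
        double weight = std::pow(ramp, 1.0 / gamma);
        std::printf("x=%d  blend weight=%.3f\n", x, weight);
    }
    return 0;
}
```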

In addition to improving the visual aspects of the system, we increased the quality, number, and variety of input devices. Participants in the experiments can, for example, interact with the virtual environment via joysticks, a space mouse, steering wheels, a go-kart, or a virtual bicycle (VRBike). Most of the input devices offer force feedback. With the VRBike, for example, one can actively pedal and steer through the virtual environment, and the virtual inertia and incline are reflected in the pedals' resistance.

Stereo Back Projection Display

The quarter-sphere projection setup is complemented by a back-projection setup, which has the advantage that participants do not cast shadows on the screen. This setup consists of a single SXGA+ projector (Christie Mirage S+3K DLP) and a large, flat screen (2.2 m wide by 2 m high). The projector has a high contrast ratio of 1500:1 and can be used for mono or active stereo projection. The space also has four VICON® V-series cameras for motion tracking, which have been used in recent studies investigating the influence of motion-parallax and stereo cues on depth and size perception.
For the stereo projection setup, NVIDIA 3D Vision Pro active shutter glasses are used. These glasses use RF technology for synchronization and can therefore be used in conjunction with the infrared-based VICON® tracking system. The glasses have been fitted with markers for the optical tracking system and can thus be used for head tracking.
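With head tracking on a flat screen, motion parallax is typically produced by recomputing an asymmetric ("off-axis") viewing frustum from the tracked eye position each frame. The sketch below shows this computation for a screen of the stated size; the eye position and near-plane distance are example values, and this is a generic formulation rather than the lab's actual rendering code.

```cpp
// Sketch: off-axis frustum for a head-tracked flat screen. The screen
// lies in the z = 0 plane of a frame centered on the screen, x right,
// y up, z towards the viewer.
#include <cstdio>

int main() {
    const double halfW = 1.1, halfH = 1.0;   // 2.2 m x 2.0 m screen

    // Tracked eye position from the motion capture system [m], example value.
    double ex = 0.2, ey = -0.1, ez = 1.5;

    // Map the screen edges onto the near plane of a perspective frustum.
    double zNear  = 0.1;
    double left   = (-halfW - ex) * zNear / ez;
    double right  = ( halfW - ex) * zNear / ez;
    double bottom = (-halfH - ey) * zNear / ez;
    double top    = ( halfH - ey) * zNear / ez;

    // These values would feed e.g. glFrustum(left, right, bottom, top, ...)
    // after translating the camera to the eye position.
    std::printf("frustum: l=%.4f r=%.4f b=%.4f t=%.4f\n", left, right, bottom, top);
    return 0;
}
```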

Large Tracking Hall

TrackingLab. Photo: GEHIRN&GEIST/Manfred Zentsch
The free-space walking laboratory in the Cyberneum is a large (12.7 m × 11.9 m × 6.9 m) empty space equipped with 16 high-speed motion capture cameras (Vicon® MX 13). This tracking system allows us to capture the motions of one or more persons by processing images of configurations of multiple infrared-reflective markers in real time.

The position signals are transmitted wirelessly to a high-end mobile graphics system, which updates the simulated environment according to the person's position and can generate a correct egocentric visualization and/or auditory simulation. As display devices we currently have three types of head-mounted displays (HMDs), which offer different levels of visual resolution, weight, and field of view.

First, we have four light-weight eMagin Z800 3DVisor HMDs, which offer a geometric field of view of approximately 32 × 24 degrees at a resolution of 2 × 800 × 600 pixels. They each weigh 0.25 kg and are particularly useful for multi-user experiments where it is desirable to control the visual input for multiple participants. Second, we have the nVisor HMD, which has a field of view of approximately 47 × 38 degrees at a resolution of 2 × 1280 × 1024 and weighs approximately 1 kg. This HMD has greater resolution but weighs more; therefore the experimenter usually wears the backpack which holds the laptop driving the HMD. Finally, we have a Kaiser SR80 ProView HMD, which has a field of view of 53 × 63 degrees at a resolution of 1280 × 1024 and weighs 0.79 kg. This HMD combines high visual resolution and a wide field of view with low weight; however, it has no battery power and so is not currently used for mobile experiments. Instead, we use the Kaiser HMD for experiments where the participant stays in one location (e.g. seated or walking on a treadmill).

For the former HMDs (eMagin and nVisor), all further technical components (laptop, video signal splitter controller, power supply) are mounted on a backpack. Subjects can therefore navigate freely within the entire area of the lab, either wearing the backpack themselves or having the experimenter wear it and follow them around the walking room. To suppress any interference between the real and simulated environments as far as possible, the laboratory is completely dark (painted black, with the ability to block out all light), and acoustic panels on the walls largely reduce reverberation.
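The rigid-body poses behind such head tracking are classically recovered from marker configurations with the Kabsch/SVD method: find the rotation and translation that best map a known marker layout onto the measured positions. The sketch below (assuming the Eigen library; marker coordinates are made up) illustrates the computation, not Vicon's proprietary implementation.

```cpp
// Sketch: rigid-body pose (rotation R, translation t) from tracked
// markers via the Kabsch/SVD method. Assumes the Eigen library.
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Reference marker layout on the HMD (body frame) and the positions
    // measured by the cameras (world frame); values are illustrative.
    Eigen::Matrix3Xd body(3, 4), world(3, 4);
    body  << 0.0, 0.1, 0.0, 0.1,
             0.0, 0.0, 0.1, 0.1,
             0.0, 0.0, 0.0, 0.05;
    world << 1.00, 1.10, 1.00, 1.10,
             2.00, 2.00, 2.10, 2.10,
             0.50, 0.50, 0.50, 0.55;

    // Center both point sets, build the cross-covariance, and take its SVD.
    Eigen::Vector3d cb = body.rowwise().mean(), cw = world.rowwise().mean();
    Eigen::Matrix3d H = (body.colwise() - cb) * (world.colwise() - cw).transpose();
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0) {               // guard against reflections
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
    }
    Eigen::Vector3d t = cw - R * cb;         // world = R * body + t
    std::cout << "R =\n" << R << "\nt = " << t.transpose() << "\n";
    return 0;
}
```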

Face and Motion Capture Facilities

For recordings of faces and human activities in general, we have two facilities. The VideoLab was designed for recording human activities from several viewpoints: six cameras can record precisely synchronized color videos. It has also been used to create a database of facial expressions and action units and is employed in work on recognition of facial expressions as well as in investigations of visual-auditory interactions in communication.
In the ScanLab, several commercial 3D scanning systems are used to capture the shape and color information of faces.
The data is used to produce stimuli for psychophysical experiments and to build models that support machine learning and computer vision algorithms.

The VideoLab

The VideoLab, in use since May 2002, was developed for recording human movements from multiple viewpoints. It currently consists of 6 Basler CCD cameras that can record precisely synchronized color videos. In addition, the system is equipped for multi-angle audio recording that is synchronized to the video data. The whole system was built from off-the-shelf components and was designed for maximum flexibility and extensibility. The VideoLab has been used to create a database of facial expressions and action units (http://vdb.kyb.tuebingen.mpg.de/) and is being employed in several projects on recognition of facial expressions and gestures across viewpoints, multi-modal action learning and recognition, as well as in investigations of visual-auditory interactions in communication.

One of the major challenges in creating the VideoLab was choosing hardware components that allow on-line, synchronized recording of uncompressed video streams. Each of the 6 Basler A302bc digital video cameras produces up to 26 MB per second (782 × 582 pixels at 60 frames per second) – data that is continuously written to disk. Currently, the computers have a striped RAID-0 configuration with a total disk capacity of 200 GB each to maximize write speed. The computers are connected and controlled via a standard Ethernet LAN. As no commercial software was available for controlling multi-camera recording setups, we had to develop our own distributed recording software. In addition to the frame-grabber drivers, which provide basic recording functionality on a customized Linux operating system, we programmed a collection of distributed, multi-threaded real-time C programs that handle control of the hardware as well as buffering and write-out of the video and audio data to hard disks. All software components communicate with each other via the standard Ethernet LAN. On top of this control software, we have implemented a graphical user interface through which the whole functionality of the VideoLab can be accessed.
This image shows an example recording from the VideoLab: a facial expression recorded from 6 synchronized viewpoints.
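The bandwidth figures above can be checked with a little arithmetic; the sketch below assumes one byte per pixel (raw sensor data) and one camera per recording computer.

```cpp
// Sketch: bandwidth arithmetic for the recording setup described above,
// assuming 1 byte/pixel raw data and one camera per recording computer.
#include <cstdio>

int main() {
    const double w = 782, h = 582, fps = 60;   // per-camera video format
    const int cameras = 6;

    double perCamMBs = w * h * 1.0 * fps / (1024 * 1024);  // ~26 MB/s, as stated
    double totalMBs = perCamMBs * cameras;
    double minutesOn200GB = 200.0 * 1024.0 / (perCamMBs * 60.0);

    std::printf("per camera: %.1f MB/s, all cameras: %.0f MB/s\n",
                perCamMBs, totalMBs);
    std::printf("a 200 GB RAID holds ~%.0f minutes per camera\n", minutesOn200GB);
    return 0;
}
```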

ScanLab

Left: ABW scanner setup. Right: Sample data of the ABW structured light scanner

Several commercial 3D scanning systems are used for capturing shape and color information of faces. The data is used for producing stimuli for psychophysical experiments and for building models to support machine learning and computer vision algorithms:

Cyberware Head Scanner
This scanner (Cyberware, Inc., USA) uses laser triangulation for recording 3D data and a line sensor for capturing color information, both producing 512 × 512 data points (typical depth resolution 0.1 mm) and covering 360° of the head in a cylindrical projection within 20 s. It was used extensively to build the MPI Head Database (http://faces.kyb.tuebingen.mpg.de/).
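A cylindrical scan of this kind stores one radius per (angle, height) sample; converting it to Cartesian points is a simple sweep. The sketch below uses dummy range data and an assumed vertical sample spacing, matching only the 512 × 512 grid size stated above.

```cpp
// Sketch: converting a cylindrical range map (one radius per angle/height
// sample) into Cartesian 3D points. Range values and vertical spacing
// are dummy assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int W = 512, H = 512;              // samples around / along the head
    const double PI = 3.14159265358979323846;
    const double yStep = 0.0006;             // assumed vertical spacing [m]
    std::vector<double> radius(W * H, 0.09); // dummy range data [m]

    double x = 0, y = 0, z = 0;
    for (int j = 0; j < H; ++j) {
        for (int i = 0; i < W; ++i) {
            double angle = 2.0 * PI * i / W; // full 360 degree sweep
            double r = radius[j * W + i];
            x = r * std::sin(angle);         // point in the cylinder's
            z = r * std::cos(angle);         // axis-centered frame
            y = j * yStep;
        }
    }
    std::printf("last reconstructed point: (%.3f, %.3f, %.3f) m\n", x, y, z);
    return 0;
}
```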

ABW Structured Light Scanner
This is a customized version of an industrial scanning system (ABW GmbH, Germany), modified for use as a dedicated face scanner. It consists of two LCD line projectors, three video cameras and three DSLR cameras. Using structured light and a calibrated camera/projector setup, 3D data can be calculated from the video images by triangulation. Covering a face from ear to ear, the system produces up to 900,000 3D points and 18 megapixels of color information. One recording takes about 2 s, making this scanner much more suitable for recording facial expressions. It was used extensively for building a facial expression model and for collecting static FACS data.

ABW Dynamic Scanner
Using the same principle of structured light, this system uses a high-speed stripe pattern projector, two high-speed video cameras and a color camera synchronized to strobe illumination. It can currently perform 40 3D measurements/s (with color information), producing detailed face scans over time. Ten seconds of recording time produce 2 GB of raw data.

3dMD Speckle Pattern Scanner
This turn-key system (3dMD Ltd, UK) is mainly designed for medical purposes. Using four video cameras in combination with infrared speckle pattern flashes and two color cameras in sync with photography flashes, it can capture a face from ear to ear in 2 ms, making it highly suitable for recording infants and children. This system is used in collaboration with the University Clinic for Dentistry and Oral Medicine for studying infant growth.

Passive 4D Stereo Scanner
With three synchronized HD machine vision cameras (two grayscale, one color), this system (Dimensional Imaging, UK) is capable of reconstructing high-quality dense stereo data of moving faces at a rate of 25 frames/s. Since it is a passive system, high-quality studio lighting can be used to illuminate the subject which results in better color data as well as more subject comfort, compared to the ABW Dynamic Scanner. While care must be taken with focus, exposure and calibration of the cameras, the system imposes fewer limitations on rigid head motion.
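Dense stereo systems like this one recover depth from the disparity between the two grayscale views via Z = f·B/d. The sketch below evaluates this relation for illustrative focal length, baseline, and disparity values, not the system's actual calibration.

```cpp
// Sketch: depth from stereo disparity, Z = f * B / d. All values are
// illustrative assumptions, not the scanner's calibration.
#include <cstdio>

int main() {
    const double f = 3000.0;                              // focal length [px], assumed
    const double B = 0.30;                                // camera baseline [m], assumed
    const double disparities[] = {600.0, 900.0, 1200.0};  // example disparities [px]

    for (double d : disparities) {
        std::printf("disparity %4.0f px -> depth %.2f m\n", d, f * B / d);
    }
    return 0;
}
```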

Last updated: Wednesday, June 1, 2016