Virtual Reality: Facilities

We study human perception with the help of virtual reality (VR), which enables us to conduct experiments in controlled yet natural surroundings. For this we use special hardware and experimental setups, built to our specifications, together with the corresponding software such as program libraries and databases.

Taking into account the most recent developments in the field of VR and the opportunities they offer, the Cyberneum was built in 2004-2005. The research focuses on the interaction of the different senses, the influence of the spatial environment on behavior, and the interplay of perception and action. Two separate halls, the Tracking Lab and the Robo Lab, each measuring 15 × 12 meters, form the main area of the research building. In the Tracking Lab, experiments on the perception of space and navigation performance are carried out: participants can move around freely in virtual worlds, with the virtual surroundings presented through head-mounted displays (HMDs). The Robo Lab houses the world's first motion simulator based on a standard industrial robot, modified for perception experiments. With far more maneuvering room than conventional simulators, this motion simulator allows a more detailed investigation of the influence of our sense of balance on the perception of motion.


Why do we conduct our research in virtual reality?

Research in virtual reality (VR) makes it easier for us to maintain controlled and reproducible test environments. Real surroundings look quite different depending on the weather or the time of day; in VR, all of these conditions can be kept constant for an experiment, so every participant sees precisely the same space or scene. These conditions can also be modified deliberately when the experiment requires it. VR even allows experiments that would be impossible in the real world, or possible only with great effort at a single location.


CableRobot Simulator

The CableRobot simulator takes a novel approach to the design of motion simulation platforms in that it uses cables and winches for actuation instead of the rigid links known from hexapod simulators. This approach reduces the actuated mass, scales up the workspace significantly, and provides great flexibility to switch between the configurations in which the robot can be operated. The simulator is used for studies in human perception research, flight simulation, and virtual reality applications.

Motion simulators are usually based on parallel kinematics, such as the hexapod platforms used in the CyberPod Motion Lab, or, more recently, on serial-kinematics robots such as the one used for the CyberMotion Simulator.

The CableRobot simulator uses a parallel kinematics architecture in which the usually rigid links are replaced by winch-driven cables. While several research institutes work in the field of cable-driven parallel robots, none so far has developed a system for the safe transport of passengers with a focus on high-performance motion simulation.

The use of cable robots for motion simulation makes it possible to build simulators with an extraordinary power-to-weight ratio of approximately 2.6 kW/kg relative to the large operational space provided. The simulator achieves a maximum acceleration of 5 m/s² in a workspace of 4 × 5 × 5 m³ and maximum roll, pitch, and yaw angles of ±40°, ±40°, and ±5°, respectively. Winches with a total power of 384 kW allow for cable forces of 12,000 N and a maximum acceleration of 14 m/s² at a payload of 200 kg. The bandwidth of the system varies between 10 and 14 Hz depending on the cable length and the cabin position inside the workspace.
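As a rough plausibility check of these figures, the sketch below estimates the net force needed to push the cabin plus payload straight up at the quoted 14 m/s². The single-axis vertical force balance and the use of the bare 80 kg cabin mass (without instrumentation) are simplifying assumptions; the real cables act along oblique directions and share the load.

```python
G = 9.81                 # gravitational acceleration [m/s^2]
mass = 200.0 + 80.0      # payload + bare cabin structure [kg] (from the text)
a_max = 14.0             # quoted maximum acceleration [m/s^2]

# Net force needed to accelerate the cabin vertically at a_max against gravity:
f_needed = mass * (a_max + G)
print(f"required net force: {f_needed:.0f} N")    # ~6700 N

# This is comfortably below even a single cable's 12,000 N limit, before the
# load is shared between the eight cables:
print(f_needed < 12_000)                          # True
```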

The simulator cabin design is optimized with regard to stability, cabin volume, and weight. An icosahedron truss structure minimizes the weight while maximizing the cabin volume, since it makes optimal use of the components with respect to the tension flow through the structure and provides an optimal ratio of nodes to edges for enclosing a sphere. Carbon-fiber rods for the edges and aerospace alloy for the nodes keep the weight of the whole cabin below 80 kg without instrumentation. For the cable topology, a cross-over configuration as shown in the figures was chosen to maximize the pitch and roll capability of the cabin.
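The node/edge property mentioned above can be checked numerically: a regular icosahedron has 12 vertices and 30 edges, with all vertices lying on a common sphere. The sketch below constructs the vertices from the golden ratio and recovers both counts; it is a geometric illustration only, not taken from the cabin's CAD data.

```python
import itertools
import numpy as np

# The 12 vertices of a regular icosahedron are the cyclic permutations
# of (0, +/-1, +/-phi), with phi the golden ratio.
phi = (1 + 5 ** 0.5) / 2
base = [(0, s1, s2 * phi) for s1 in (1, -1) for s2 in (1, -1)]
verts = np.array([v[r:] + v[:r] for v in base for r in range(3)])

# Edges connect vertex pairs at the minimal inter-vertex distance (2.0 here).
dists = {
    (i, j): np.linalg.norm(verts[i] - verts[j])
    for i, j in itertools.combinations(range(len(verts)), 2)
}
edge_len = min(dists.values())
edges = [p for p, d in dists.items() if np.isclose(d, edge_len)]

print(f"nodes: {len(verts)}, edges: {len(edges)}")   # nodes: 12, edges: 30
# All vertices lie on a common sphere -- the property exploited for the cabin:
print(np.allclose(np.linalg.norm(verts, axis=1), np.linalg.norm(verts[0])))
```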


A 360-Degree Video Tour of the MPI Cable Robot Simulator (CRS)

This is a 360-degree video version of a narrated VR-3D tour created for the University of Tübingen's "Origins" exhibition, taking visitors to the latest motion simulator, the Cable Robot Simulator in the Cyberneum of the MPI for Biological Cybernetics. The original VR experience was developed for immersive viewing with HTC Vive VR glasses in the museum. This video was created by converting the VR content into 360-degree video data and conveys only part of the overall impression.


CyberPod

With the CyberPod we can study the interplay between the human visual, auditory, vestibular, and neuromuscular systems. The centerpiece of the CyberPod is a hexapod Stewart platform (Bosch eMotion 1500) with six degrees of freedom. Mounted on the motion base is a 2.5 × 2.2 m aluminum platform, designed and produced in-house with a focus on high dynamic response, which requires high structural stiffness. For visualization, the platform features a removable projection screen located 1.1 m from the participant, resulting in a field of view of approximately 95° × 53°. The ProPixx projector has a maximum resolution of 1920 × 1080 pixels (at 120 Hz) or a refresh rate of up to 500 Hz in RGB and 1440 Hz in grayscale (at reduced resolutions); it provides deterministic timing and is particularly suited for psychophysics studies. Both passive and active stereo projection are possible.

Instead of the projection screen, the platform can also be used with our head-mounted display systems in conjunction with an optical tracking system (OptiTrack V120:Trio) that measures head pose and optical marker locations at 120 Hz. Auditory stimulation is provided by either a surround-sound system or noise-cancelling headphones. The characteristics of the motion system have been objectively measured with a standardized approach: the system time delay is below 20 ms, far below the 150 ms required for flight simulators, and the system bandwidth, which depends on the exact mechanical configuration and the equipment mounted on the platform, exceeds 30 Hz for the most basic configuration, consisting of only a seat on the platform.
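The quoted ~95° × 53° field of view follows directly from the projection geometry. The sketch below back-computes it from the 1.1 m viewing distance; the screen dimensions used are assumptions chosen to reproduce the quoted angles, not published specifications.

```python
import math

d = 1.1                          # viewing distance [m] (from the text)
screen_w, screen_h = 2.4, 1.1    # assumed screen size [m], not a published spec

# Field of view of a flat screen seen from distance d:
fov_h = 2 * math.degrees(math.atan(screen_w / 2 / d))
fov_v = 2 * math.degrees(math.atan(screen_h / 2 / d))
print(f"FOV: {fov_h:.0f} deg x {fov_v:.0f} deg")   # ~95 x 53
```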

Examples of fundamental research performed on the CyberPod so far include a study on the accumulation of sensory information over time in the perception of rotation, and an fNIRS neuroimaging study on decision-making about self-motion. The CyberPod is also used extensively for the development of motion cueing algorithms based on Model Predictive Control (MPC) methods. In one project, the simulator provides motion feedback cues to the driver of a racing car; the goal is to investigate which cues are important and how an MPC-based cueing algorithm can present them. Another project investigates whether humans use MPC-based motion feedback in a target-following and disturbance-rejection task.
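To illustrate the core idea behind MPC-based motion cueing, the toy sketch below solves a one-axis cueing problem: track a reference acceleration from the simulated vehicle while keeping the platform inside its stroke limits and washing it back toward neutral. The horizon, weights, and limits are invented for illustration; the actual algorithms developed here use far richer vehicle and platform models.

```python
import cvxpy as cp
import numpy as np

dt, N = 0.05, 40                      # step size [s], horizon length (assumed)
a_ref = 2.0 * np.ones(N)              # vehicle acceleration to reproduce [m/s^2]

p = cp.Variable(N + 1)                # platform position [m]
v = cp.Variable(N + 1)                # platform velocity [m/s]
a = cp.Variable(N)                    # commanded acceleration [m/s^2]

# Penalize cue error plus platform excursion (the "washout" term):
cost = cp.sum_squares(a - a_ref) + 0.1 * cp.sum_squares(p[1:])
cons = [p[0] == 0, v[0] == 0]
for k in range(N):
    cons += [p[k + 1] == p[k] + dt * v[k],   # Euler-discretized
             v[k + 1] == v[k] + dt * a[k],   # double integrator
             cp.abs(p[k + 1]) <= 0.5]        # +/-0.5 m stroke limit (assumed)

cp.Problem(cp.Minimize(cost), cons).solve()
print("first commanded acceleration:", round(a.value[0], 2), "m/s^2")
```

In receding-horizon operation, only the first commanded acceleration would be applied before the problem is re-solved at the next time step with updated measurements.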


CyberMotion Simulator

The CyberMotion Simulator (CMS) is a unique facility that can accommodate many different perception, cognition, and action experiments. Studies conducted on the CMS include fundamental work on human self-motion perception, motion cueing, spatial orientation, pilot modeling, tele-operation, and EEG experiments.

The CMS consists of an industrial robot arm with six independent axes, extended with an L-shaped cabin axis. This seventh axis allows the orientation of the cabin with respect to the robot arm to be varied by moving the cabin's attachment point from behind the seat to under the seat, or to any intermediate position. Recently, the CMS was further extended with a 10-meter linear axis. The resulting eight degrees of freedom (DOF) provide an exceptionally large workspace. Several extreme motions and positions can be achieved, such as large lateral/longitudinal motions, sustained centrifugal motion, infinite head-centered rotation, and upside-down positions. Such motions and positions cannot be achieved in conventional simulator architectures, which makes the CMS a unique test bed for studies of motion perception and spatial orientation.
To ensure synchronous motion of the robot axes and the external axes (cabin and linear axis), custom software was developed. This software combines and distributes the signals coming from and going to the respective axes while making sure all timing requirements are met. Furthermore, it monitors the safety status of the system, ensuring very high safety standards.
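Purely to illustrate the kind of coordination involved, here is a highly simplified, hypothetical sketch of such a distribution loop. All names, the cycle time, and the send/ack interface are invented for illustration and do not reflect the actual implementation.

```python
import time

CYCLE = 0.004  # assumed control cycle [s]

def control_loop(robot_axes, cabin_axis, linear_axis, get_setpoints):
    """Distribute setpoints to all axes at a fixed rate; stop on any timeout."""
    axes = [*robot_axes, cabin_axis, linear_axis]   # 6 + 1 + 1 = 8 DOF
    deadline = time.monotonic()
    while True:
        deadline += CYCLE
        setpoints = get_setpoints()                 # one pose for all 8 axes
        for axis, sp in zip(axes, setpoints):
            axis.send(sp)                           # distribute synchronously
        for axis in axes:
            if not axis.ack(timeout=CYCLE):         # axis missed its deadline
                raise RuntimeError(f"axis {axis} timed out -- safe stop")
        time.sleep(max(0.0, deadline - time.monotonic()))
```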
The cabin (including the cabin axis) was custom built, and allows for mounting a range of different input/control devices:

  • Button boxes (custom built)
  • Pointing devices (custom built)
  • Car steering wheel and pedals (Sensodrive GmbH)
  • Helicopter collective (Wittenstein aerospace & simulation GmbH)
  • Helicopter cyclic (Wittenstein aerospace & simulation GmbH)
  • Side-stick (Wittenstein aerospace & simulation GmbH)


The cabin is equipped with a stereo projection system (eyevis GmbH) with a field of view (FOV) of 140° × 70° and a resolution of 1920 × 1200 per projector. High-fidelity 3D visualization can be achieved with projection filters and glasses (Infitec).



Large Tracking Hall (Tracking Lab)

The free-space walking and tracking laboratory in the Cyberneum is a large (12.7 m × 11.9 m × 6.9 m) empty space equipped with 26 high-speed motion-capture cameras from Vicon. In December 2014 the existing setup of 16 Vicon MX13 cameras (1.2 megapixels) was expanded with 10 Vicon T160 cameras (16 megapixels) to double the tracking volume for aerial robotics research and to improve head-tracking quality for immersive VR research. This tracking system allows us to capture the motions of one or more persons in real time by processing images of configurations of multiple infrared-reflective markers. The calculated position and orientation information can be transmitted wirelessly to a high-end mobile graphics system, carried in a backpack, that updates the simulated virtual environment according to the person's position and generates a correct egocentric visualization and/or auditory simulation.

Participants can navigate freely within the entire area of the tracking hall, either wearing the backpack themselves or having the experimenter wear it and follow them around. To suppress interference between the real and simulated environments as far as possible, the laboratory is completely dark (black, with the ability to block out all light), and acoustic panels on the walls largely reduce reverberation. The tracking setup also allows for tracking multiple objects such as flying quadcopters, as well as for full-body motion capture (e.g. for the analysis of sports performance such as gymnastics, or for the animation of virtual characters).
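The core step in egocentric rendering is turning the tracked head pose into a camera (view) transform. The sketch below shows this transformation in a few lines; the function and argument names are illustrative, not taken from our software.

```python
import numpy as np

def view_matrix(head_pos, head_rot):
    """head_pos: (3,) position in the hall; head_rot: (3,3) rotation matrix.

    Returns the 4x4 view matrix that maps world coordinates into the
    participant's head frame (the inverse of the head pose).
    """
    view = np.eye(4)
    view[:3, :3] = head_rot.T             # inverse rotation (R is orthonormal)
    view[:3, 3] = -head_rot.T @ head_pos  # inverse translation
    return view

# e.g. a participant standing 2 m into the hall at eye height 1.7 m,
# looking straight ahead:
print(view_matrix(np.array([2.0, 0.0, 1.7]), np.eye(3)))
```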


Multi Robot Tracking Lab (MultiAgent Lab)

A tracking hall measuring 7.55 m × 6.18 m × 3.5 m and equipped with tracking capabilities (6 VICON Bonita cameras, VICON Tracker software) serves as the main testing space for our multi-robot research. The room also hosts several PCs and the whole communication infrastructure needed to implement and test decentralized multi-robot algorithms, with the possibility of obtaining ground-truth data from the external VICON system. In addition, the room hosts many tools and sensors for working with UAVs, such as a platform with a six-axis force/torque sensor and tools for IMU calibration.


Quarter-sphere Large Screen Projection (Pano Lab)

We have employed a large-screen, half-cylindrical virtual reality projection system to study human perception since 1997. Studies in a variety of areas have been carried out, including spatial cognition and the perceptual control of action. In 2005, we made a number of fundamental improvements to the system. The most noticeable change is the altered screen size and geometry: the screen was extended horizontally (from 180 to 230 degrees) and a floor screen and projector were added. Importantly, the projection screen curves smoothly from the wall projection into the floor projection, resulting in an overall screen geometry that can be described as a quarter-sphere. Vertically, the screen subtends 125 degrees (25 degrees of visual angle upwards and 100 degrees downwards from the normal observation position).

In 2011 the image generation and projection setup was significantly updated. The existing four JVC SX21 D-ILA projectors (1400 × 1050) and curved mirrors were replaced with six EYEVIS LED DLP projectors (1920 × 1200), simplifying the projection setup and increasing the overall resolution. To compensate for the visual distortions caused by the curved projection screen, and to achieve soft-edge blending for seamless overlap areas, we developed a flexible warping solution using the warp-and-blend features of NVIDIA Quadro chipsets. This gives us the flexibility of a hardware-based warping solution combined with the accuracy of software-based warping. The calibration data needed for the image warping and blending stages are generated by a camera-based projector auto-calibration system (DOMEPROJECTION.COM). Image generation is handled by a high-end render cluster consisting of six client image-generation PCs and one master PC. To avoid tearing artifacts in the multi-projector setup, the rendering computers use frame-synchronized graphics cards to synchronize the projected images.
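As a small illustration of the soft-edge blending idea, the sketch below computes a per-column attenuation ramp for one projector's overlap zone, so that the summed brightness of two overlapping projectors stays roughly constant. The overlap width and gamma value are illustrative assumptions; the production system derives dense per-pixel maps from the camera-based calibration instead.

```python
import numpy as np

def blend_ramp(width_px, overlap_px, side, gamma=2.2):
    """Per-column attenuation for one projector; side is 'left' or 'right'."""
    ramp = np.ones(width_px)
    t = np.linspace(0.0, 1.0, overlap_px)       # 0 -> 1 across the overlap
    fade = t if side == "left" else 1.0 - t     # linear ramp in light space
    # Pre-compensate the projector's gamma so the *light output* (not the
    # pixel value) falls off linearly across the overlap:
    fade = fade ** (1.0 / gamma)
    if side == "left":
        ramp[:overlap_px] = fade
    else:
        ramp[-overlap_px:] = fade
    return ramp

left_edge = blend_ramp(1920, 200, side="left")  # fades in over 200 px (assumed)
```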

In addition to improving the visual aspects of the system, we increased the quality, number, and type of input devices. Participants in experiments can, for example, interact with the virtual environment via actuated Wittenstein helicopter controls, joysticks, a space mouse, steering wheels, a Go-Kart, or a virtual bicycle (VRBike). Furthermore, a Razer Hydra 6-DOF joystick can be used for wand navigation and small-volume tracking. Some of the input devices offer force feedback: with the VRBike, for example, one can actively pedal and steer through the virtual environment, and the virtual inertia and incline are reflected in the pedals' resistance.
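The physics behind that resistance is straightforward. As a rough sketch, the rider-side force combines the gravity component along the virtual incline with the force needed to accelerate the virtual inertia; the rider mass, slope, and acceleration values are invented for illustration.

```python
import math

def pedal_force(mass_kg, slope_deg, accel_ms2, g=9.81):
    """Rider-side force [N] for a given virtual incline and acceleration."""
    f_slope = mass_kg * g * math.sin(math.radians(slope_deg))  # gravity on incline
    f_inertia = mass_kg * accel_ms2                            # virtual inertia
    return f_slope + f_inertia

print(pedal_force(mass_kg=85, slope_deg=5, accel_ms2=0.5))     # ~115 N
```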


Back-Projection Large Screen Display (Backpro Lab)

This setup consists of a single SXGA+ projector (Christie Mirage S+3K DLP) and a large, flat screen (2.2 m × 2 m). The projector is mounted behind the screen to create a shadow-free projection environment. It has a high contrast ratio of 1500:1 and can be used for mono or active stereo projection. An Advanced Realtime Tracking (ART) optical tracking system can track up to four rigid-body objects at 60 Hz and directly outputs the calculated positions and orientations via a UDP network stream.
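Reading that stream requires nothing more than a UDP socket. The sketch below shows a minimal listener; the port number is an assumption (it is configured in the tracking software), and real parsing must follow ART's documented ASCII frame format, which is only hinted at here.

```python
import socket

PORT = 5000  # assumed; the actual port is configured in the tracking software

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))

while True:
    datagram, _addr = sock.recvfrom(4096)
    # The tracker sends ASCII frames; typically one line per tracked body.
    for line in datagram.decode("ascii", errors="replace").splitlines():
        print(line)
```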

NVIDIA 3DVision Pro active shutter-glasses are used for stereo-projection experiments. These glasses use RF technology for synchronization and therefore can also be used in conjunction with the infrared-based optical tracking system. The glasses have been modified with markers for the optical tracking system and can be used for head tracking.


Fixed-base flight simulator (Heli Lab)

Heli Lab is a fixed-base flight simulator that affords a large field of view (105° × 100°). It is equipped to measure explicit and implicit behavioral responses: control-stick inputs on the one hand, and eye-tracking and physiological measures on the other. Thus, we are able to study the relationship between a pilot's actions and cognitive workload during flight maneuvers.

The core system is an open-source flight simulator (FlightGear, www.flightgear.org) that accepts control inputs, which are processed by the selected aircraft model to compute the appropriate world position and orientation of the modelled aircraft. These values are then used to render the corresponding view of the world scene as seen from the cockpit, via a computing cluster driving 10 wide-screen monitors.
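FlightGear exposes its internal property tree over a built-in telnet server, which is one simple way for external software to read the computed aircraft state. The sketch below assumes FlightGear was started with --telnet=5401; the two property paths are standard FlightGear property-tree entries, but the exact response formatting may vary between versions.

```python
import socket

def fg_get(prop, host="localhost", port=5401):
    """Query one property from FlightGear's telnet property server."""
    with socket.create_connection((host, port)) as s:
        s.sendall(f"get {prop}\r\n".encode())
        return s.recv(1024).decode().strip()

print(fg_get("/position/altitude-ft"))
print(fg_get("/orientation/heading-deg"))
```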

Our system is equipped to record implicit behavioral responses. A remote eye-tracking system (two stereo heads, 60 Hz; Facelab, Seeing Machines, USA) monitors the pilot's line of sight in the world scene as well as gaze on the head-down instrument panel. Physiological measurements of the pilot are recorded in tandem using a 32-channel active-electrode system (g.Tec Medical Engineering GmbH, Austria). This system can be used to monitor the pilot's galvanic skin response, heart-rate variability, and electroencephalographic (EEG) signals.
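As an example of the kind of measure derived from such recordings, the sketch below computes RMSSD, a standard time-domain heart-rate-variability index, from a series of inter-beat (R-R) intervals. The interval values are made up for illustration.

```python
import numpy as np

# R-R intervals [ms] as would be extracted from the ECG channel (made-up data):
rr_ms = np.array([812, 798, 825, 840, 810, 795, 830])

# RMSSD: root mean square of successive differences between R-R intervals.
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2.0))
heart_rate = 60_000 / rr_ms.mean()   # mean heart rate [bpm]

print(f"RMSSD: {rmssd:.1f} ms, mean heart rate: {heart_rate:.0f} bpm")
```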

The simulator is equipped with an unactuated control system that features generic helicopter controls with a cyclic stick, a collective stick, and pedals. Furthermore, we use a Thrustmaster Warthog sidestick (Thrustmaster, USA) with precise sensors to perform our closed-loop control tasks.


Control Loading Lab

In the Control Loading Lab, we perform experimental evaluations to understand human behavior in manual control tasks and to investigate novel approaches to human-machine interfaces. For this purpose, we use a fixed-base simulator with a control-loaded sidestick, cyclic, collective, and pedals from Wittenstein GmbH, Germany. These devices can simulate control dynamics with high accuracy over a large frequency range and can be used to provide haptic feedback cues to the participant. The input devices are combined with a VIEWPixx display from VPixx Technologies, Canada, which can present stimuli at 120 Hz with accurate timing characteristics. This lab therefore provides an optimal environment for human-in-the-loop experiments.
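A typical example of the control dynamics such a device can render is a second-order mass-spring-damper felt at the stick. The sketch below simulates one with explicit Euler integration; all parameter values, including the 1 kHz update rate, are illustrative assumptions rather than the devices' actual settings.

```python
# Second-order stick dynamics: m*theta'' + b*theta' + k*theta = torque_pilot
m, b, k = 0.3, 2.0, 15.0   # inertia [kg*m^2], damping, stiffness (assumed)
dt = 0.001                 # 1 kHz haptic update rate (assumed)

theta, omega = 0.0, 0.0    # stick deflection [rad] and rate [rad/s]
for _ in range(1000):      # simulate 1 s
    torque_pilot = 1.0     # constant pilot input torque [N*m]
    alpha = (torque_pilot - b * omega - k * theta) / m
    omega += alpha * dt
    theta += omega * dt

# The deflection settles toward torque/k, as expected for a spring:
print(f"deflection after 1 s: {theta:.3f} rad (steady state: {1.0 / k:.3f})")
```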
