Virtual Reality Equipment

Head Mounted Displays

For the display device we currently have five types of head-mounted displays (HMDs), which offer different levels of visual resolution, weight, and field of view.
First, we have four lightweight eMagin Z800 3DVisor HMDs, which offer a geometric field of view of approximately 32x24 degrees at a resolution of 800x600 pixels per eye. They each weigh 0.25 kg and are particularly useful for multi-user experiments where it is desirable to control the visual input for multiple participants.

Second, we have the nVisor SX60 HMD, which has a field of view of approximately 44x35 degrees with a resolution of 1280x1024 pixels per eye and weighs approximately 1.7 kg. This HMD has greater resolution but weighs more, so the experimenter usually wears the backpack which holds the graphics laptop for the HMD. Similarly, the nVisor SX111 HMD provides an even larger field of view of approximately 102x64 degrees with a resolution of 1280x1024 pixels per eye. However, this HMD weighs about 2 kg, so we only use it for very short experiments (less than 30 minutes), ideally ones that do not involve much movement on the part of the participant.

Fourth, we have a Kaiser SR80 ProView HMD, which has a field of view of 63x53 degrees with a resolution of 1280x1024 and weighs 0.79 kg. This HMD combines high visual resolution and a wide field of view with low weight; however, it has no battery power and so is not currently used for mobile experiments. Instead, we use the Kaiser HMD for experiments where the participant stays in the same location (e.g., seated or walking on a treadmill). For the former HMDs (eMagin and nVisor), all further technical components (i.e., laptop, video signal splitter controller, power supply) are mounted on a backpack. When using an HMD within a limited area, for example on the omnidirectional treadmill, the HMD cables can instead be suspended from the ceiling, allowing connection to a computer with better graphics capabilities in the control room.

Finally, we have the xSight 6123, which presents an image of 1920x1200 pixels in front of each eye across a field of view of approximately 118 degrees horizontally and 45 degrees vertically, while its weight does not exceed 400 g [Sensics Inc., 2010]. We have used this HMD with the MPI CyberMotion Simulator.
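As a rough comparison of visual detail, the angular resolution of an HMD can be estimated as horizontal pixels divided by horizontal field of view: approximately 25 pixels/degree for the eMagin Z800 (800/32), 29 for the nVisor SX60 (1280/44), 13 for the nVisor SX111 (1280/102), 20 for the Kaiser SR80 (1280/63), and 16 for the xSight 6123 (1920/118). A wider field of view thus generally comes at the cost of a coarser image.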

Real-time Rendering Technologies for Virtual Reality Experiments

In the past, many of our Virtual Reality (VR) experiments were developed using real-time rendering technologies such as the OpenGL C API or the open-source OGRE C++ rendering engine. Our in-house developed C++ libraries (veLib, XdevL) helped with VR hardware integration, such as head tracking and stereoscopic output for head-mounted displays. This approach generally required solid C/C++ programming experience to write a VR experiment and supported only simple visualization possibilities.
To facilitate the development of new, graphically rich VR experiments across our different VR hardware setups, we now also support several commercial game engine technologies. They hide the technical complexities of creating a real-time VR experiment and provide state-of-the-art visual effects, such as real-time lighting and shadows, as well as high scalability, so that even crowds of animated avatars can be rendered in real time. Furthermore, they make it easier to transition an experiment between different VR hardware setups, which is often required for follow-up studies.
We currently support and use the following game engines for our VR experiments:

Virtools

For us, the main benefit of 3DVIA Virtools is that scientists with limited or no programming experience can create an experiment using the included visual editor. Virtools contains built-in support for several VR devices, such as the Vicon tracking system, as well as cluster rendering, which is required for our Panolab setup. We enhanced the Virtools functionality with the following in-house developed components to support our experiments:
  • Xsens MVN inertial motion capture suit: We can receive the motion data and map it onto an avatar in real time. This component uses our in-house XdevL library for UDP networking.
  • Generic UDP receiver: Used to interface Virtools with our simulation environments, such as the F1 simulator on the CyberMotion Simulator (a minimal sketch is given after this list). A demonstration video (F1-Simulator) is available.
  • Real-time warping: Used to compensate for lens distortions in our head-mounted displays.
  • Advanced sound engine: Uses FMOD Ex for advanced sound effects.
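The sketch below illustrates the core of such a generic UDP receiver, using plain POSIX sockets. The port number and the packet layout (six floats for position and orientation) are illustrative assumptions; the actual component is built on our in-house XdevL library.

    // Minimal sketch of a generic UDP receiver using plain POSIX sockets.
    // Port number and packet layout (six floats: position x,y,z and
    // orientation roll,pitch,yaw) are illustrative assumptions only.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in addr{};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5000);        // illustrative port
        bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

        float state[6];                            // one simulator packet
        while (true) {
            ssize_t n = recvfrom(sock, state, sizeof(state), 0, nullptr, nullptr);
            if (n == static_cast<ssize_t>(sizeof(state)))   // ignore malformed packets
                std::printf("pos %.2f %.2f %.2f  rot %.2f %.2f %.2f\n",
                            state[0], state[1], state[2],
                            state[3], state[4], state[5]);
        }
        close(sock);                               // not reached in this sketch
    }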
We support Virtools in all of our VR setups.

Unity

Unity experiments are developed with one of three integrated scripting languages (JavaScript, C#, and Boo/Python). Some basic programming experience is recommended; our users usually become quite productive after a short learning period. Another main advantage of Unity is the possibility of building experiments for mobile devices running Apple iOS or Android. Finished experiments can also be shared with collaborators without the need for any special software license.
In contrast to Virtools, Unity does not have integrated support for VR devices. For this reason we currently use a commercial middleware solution (MiddleVR, i’m in VR), which provides most of the necessary VR components. This includes VRPN integration for the Vicon tracking system, asymmetric frustum rendering, multiple stereo viewports, and cluster rendering.
 
We also enhanced the Unity functionality with the following in-house developed components to support our experiments:
  • Xsens MVN inertial motion capture suit: We can receive the motion data and map it onto an avatar in real time. We also support direct playback of the .mvnx file format. It is furthermore possible to recalibrate the MVN avatar data to a defined start position and orientation for consistent presentation.
  • Generic UDP receiver: Used to interface Unity with our simulation environments, such as the CyberMotion Simulator.
  • Adaptive staircase: Implementation of an adaptive staircase method (Levitt, 1971); a minimal sketch is given after this list.
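The following minimal, language-agnostic sketch (written here in C++ rather than a Unity scripting language) illustrates the classic 2-down/1-up staircase from Levitt (1971); class and parameter names are illustrative, not our actual implementation.

    // Minimal sketch of a 2-down/1-up adaptive staircase (Levitt, 1971).
    // Two consecutive correct answers make the task harder, one error makes
    // it easier, so the level converges on ~70.7% correct responses.
    class Staircase {
    public:
        Staircase(double startLevel, double stepSize)
            : level(startLevel), step(stepSize) {}

        double currentLevel() const { return level; }
        int    reversalCount() const { return reversals; }

        // Call once per trial with the participant's response.
        void update(bool correct) {
            if (correct && ++nCorrect == 2) {
                nCorrect = 0;
                move(-1);              // harder (e.g. lower stimulus intensity)
            } else if (!correct) {
                nCorrect = 0;
                move(+1);              // easier
            }
        }

    private:
        void move(int dir) {
            if (lastDir != 0 && dir != lastDir)
                ++reversals;           // direction change = reversal
            lastDir = dir;
            level  += dir * step;
        }

        double level, step;
        int nCorrect = 0, lastDir = 0, reversals = 0;
    };

An experiment would typically run the staircase until a fixed number of reversals has occurred and estimate the threshold as the mean of the last few reversal levels.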
We support the Unity game engine in all of our VR setups.

Unreal Engine

We are also investigating the use of the Unreal Engine, which is part of the free Unreal Development Kit (UDK). This engine provides very high visual fidelity and scalability but is also much harder to learn than Virtools or Unity. We enhanced the UDK functionality with the following in-house developed components:
  • VRPN receiver: Used to receive head position/orientation from the Vicon tracking system.
  • Generic UDP receiver: Used to interface the Unreal Engine with our simulation environments.
  • Custom stereo-rendering solution: Support for the SX60 and SX111 HMDs (a minimal sketch is given after this list).
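The sketch below illustrates the core of such a stereo-rendering solution: each eye is rendered from a camera displaced by half the interpupillary distance (IPD), using an off-axis (asymmetric) frustum. The matrix layout follows OpenGL conventions; the convergence model and all names are illustrative assumptions, and a real solution would additionally correct for each HMD's optics.

    // Minimal sketch of HMD stereo rendering with off-axis frusta.
    // Column-major 4x4 matrices, OpenGL conventions; illustrative only.
    #include <cmath>

    struct Mat4 { float m[16]; };  // zero-initialized column-major matrix

    // Same mathematics as glFrustum.
    Mat4 frustum(float l, float r, float b, float t, float n, float f) {
        Mat4 p{};
        p.m[0]  = 2*n/(r-l);        p.m[5]  = 2*n/(t-b);
        p.m[8]  = (r+l)/(r-l);      p.m[9]  = (t+b)/(t-b);
        p.m[10] = -(f+n)/(f-n);     p.m[11] = -1.0f;
        p.m[14] = -2*f*n/(f-n);
        return p;
    }

    // eye = -1 for the left eye, +1 for the right eye.
    Mat4 eyeProjection(int eye, float fovY, float aspect, float ipd,
                       float convergenceDist, float n, float f) {
        float t = n * std::tan(fovY * 0.5f);
        float r = t * aspect;
        // Shift each frustum so the two view volumes converge at
        // convergenceDist; the eye camera itself is translated by
        // eye * ipd/2 along the head's right axis (view matrix not shown).
        float shift = eye * (ipd * 0.5f) * n / convergenceDist;
        return frustum(-r - shift, r - shift, -t, t, n, f);
    }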
Previous work includes the use of the Unreal Engine for helicopter flight control visualization; a demonstration video (HeliDemo_UDK) is available.

We currently support the Unreal Engine in only a limited number of setups since, like Unity, it does not directly support VR hardware.

Body Motion Capture and Animation Technology

For full-body motion capture we use two different setups. One setup consists of a lightweight Lycra suit with attached reflective markers, which are tracked and processed with the Vicon® IQ software. After the capturing process, the data can be post-processed as needed. The second setup is used for real-time motion capture and animation: we have two Xsens MVN suits, each consisting of 17 MTx inertial measurement units. Custom-built plugins enable the use of these suits for real-time animation (e.g., of virtual avatars). For post-processing and animation of body parts or the full body we use Autodesk 3ds Max, Autodesk Maya, and Autodesk MotionBuilder. The avatars generally used for experiments involving motion capture and animation are part of the Rocketbox Studios GmbH Complete Characters library.

Haptic Devices

Omega.3 & Omega.6
In order to provide operators with force feedback during the execution of a task, we use two haptic force feedback devices from Force Dimension (Switzerland), called Omega.3 and Omega.6. The Omega.3 consists of three motors and three position sensors. Depending on the Cartesian position of the end effector, a programmable Cartesian force can be applied to the user’s hand, thereby enabling force feedback. The Omega.6 differs from the former device in having three additional measured (but not actuated) rotational degrees of freedom of the end effector.

 
Phantom Haptic Feedback Devices
In order to simulate objects that can also be touched, we use haptic force feedback devices called PHANToM. The PHANToM consists of three motors and three position sensors. Depending on the position of the end effector, a force can be applied to the user’s finger, thereby simulating the resistance of, e.g., a wall or an object that can be picked up (a minimal sketch of this force rendering is given below).
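The following sketch shows this kind of haptic rendering using the common spring-damper wall model, executed in a high-rate (typically around 1 kHz) servo loop. Stiffness and damping values are illustrative, not those of our devices, and the device API calls for reading positions and writing forces are omitted.

    // Minimal sketch of rendering a virtual wall with a spring-damper model.
    struct Vec3 { double x, y, z; };

    // The wall fills the half-space y < wallY.
    Vec3 wallForce(const Vec3& pos, const Vec3& vel, double wallY) {
        const double k = 800.0;     // spring stiffness [N/m], illustrative
        const double b = 2.0;       // damping [N*s/m], illustrative
        Vec3 f{0.0, 0.0, 0.0};
        double penetration = wallY - pos.y;
        if (penetration > 0.0)                  // fingertip inside the wall
            f.y = k * penetration - b * vel.y;  // push out, damp the motion
        return f;
    }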

Table-Top Virtual Workbench
For the visual-haptic simulations we use a table-top virtual workbench consisting of a computer monitor (CRT) mounted upside down above a mirror. When observers look into the mirror, stereoscopically rendered objects appear to float above the table. We use two PHANToMs placed below the mirror to provide haptic feedback to the hand of an operator. This setup enables us to investigate many aspects of haptic and visual information processing because the visual and haptic scenes can be controlled separately.

Integrated Kinesthetic and Tactile Feedback Devices
This modular integrated haptic interface is based on a high-force, hyper-redundant (10 DoF) kinesthetic display called ViSHaRD 10, which was developed within the TOUCH-HapSys European project by the LSR Department (Prof. Martin Buss) at the Technical University of Munich. It provides a large cylindrical workspace of ø 1.7 m × 0.6 m and a maximum payload of 7 kg, which is sufficient to attach additional tactile displays. At the moment, three tactile displays can be connected to ViSHaRD 10:

Tactile Slip Force Display
This display is based on a rotating ball with a diameter of 60.2 mm. The ball is supported by an arrangement of ball bearings and rotated by two servo motors. With these two servo motors, arranged orthogonally at the ball’s equator, it is possible to generate the sensation of slip force in any lateral direction on the finger or any other body part (a minimal sketch of this decomposition is given below). This device was developed here at the MPI.
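A purely kinematic sketch of this decomposition, assuming the two motors drive the ball surface along orthogonal axes with surface speed v = omega * r; function and variable names are illustrative.

    // Decompose a desired slip velocity into the two orthogonal servo motors.
    #include <cmath>

    struct MotorSpeeds { double omegaX, omegaY; };  // [rad/s]

    MotorSpeeds slipCommand(double speed /*m/s*/, double directionRad) {
        const double r = 0.0301;   // ball radius [m] (60.2 mm diameter)
        return { speed * std::cos(directionRad) / r,
                 speed * std::sin(directionRad) / r };
    }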

Tactile Shear Force Display
This display is able to provide individual force stimuli tangential to the surface of the human skin in the area of the index fingertip. It consists of four pins that can move laterally to the skin in any direction with an amplitude of 2 mm. This device was developed within the TOUCH-HapSys project by the LSR Department (Prof. Martin Buss) at the Technical University of Munich.

VirTouch Mouse
This commercially available, computer-mouse-based display contains three Braille generator modules, one each for the index, middle, and ring fingers of the operator’s hand. Each Braille generator module consists of a dot matrix array in a 4 × 8 configuration. Each of the 96 pins can move independently in the normal direction toward the skin of the operator’s fingertip. The range of movement is 1 mm per pin, divided into 16 incremental steps.

Galvanic Vestibular Stimulation (GVS)

GVS is used to stimulate the human vestibular system by injecting small currents behind a person’s ears. Produced by Good Vibrations (Toronto, Canada), it consists of a small box designed to be fastened to a person’s body, with four leads that attach behind the ears. The newly acquired GVS system will be used in conjunction with the MPI Stewart Platform and the MPI CyberMotion Simulator to investigate self-motion perception, with the potential of virtually expanding the usable workspace of these devices. The GVS system will also be used with the tracking hall and the omnidirectional treadmill to enhance redirected walking techniques and to induce out-of-body experiences in virtual environments.

 

Biosignal Recording & Brain Computer Interface

Biosignal (EEG, EOG, ECG, EMG) acquisition allows the investigation of brain, heart, and muscle activity, eye movements, respiration, galvanic skin response, and many other physiological and physical parameters. Produced by g.tec medical engineering (Schiedlberg, Austria), the system consists of a 16-channel biosignal amplifier (up to 256 channels supported) as well as a portable 8-channel amplifier which enables data acquisition during free movement. The newly acquired g.tec system has been used in conjunction with the MPI Stewart Platform and will in future be used with the MPI CyberMotion Simulator to investigate biosignal responses to self-motion. High-speed online processing of the g.tec system under MATLAB Simulink enables brain-computer interfacing. At present the system is capable of controlling cursor movement on a display screen in real time after training the computer on subject-specific activation patterns. Plans to extend this by interfacing the g.tec system with the control computer of the MPI CyberMotion Simulator will potentially enable the user to control self-motion by monitoring differential activation of the sensorimotor cortex using a motor imagery paradigm (a minimal sketch of such a band-power feature is given below).
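As an illustration only (the actual online processing runs under MATLAB Simulink), the sketch below computes a mu-band (8-12 Hz) power feature of the kind commonly compared between left and right sensorimotor channels (e.g., C3 vs C4) in motor imagery paradigms; the filter choice and all names are assumptions, not our pipeline.

    // Band power via one band-pass biquad (RBJ cookbook) plus mean square.
    #include <cmath>
    #include <vector>

    double bandPower(const std::vector<double>& x, double fs,
                     double f0 = 10.0 /*Hz*/, double q = 2.5) {
        const double pi = 3.14159265358979;
        double w0 = 2.0 * pi * f0 / fs, alpha = std::sin(w0) / (2.0 * q);
        double b0 = alpha, b2 = -alpha;                 // band-pass numerator
        double a0 = 1.0 + alpha, a1 = -2.0 * std::cos(w0), a2 = 1.0 - alpha;
        double x1 = 0, x2 = 0, y1 = 0, y2 = 0, power = 0;
        for (double xn : x) {
            double yn = (b0 * xn + b2 * x2 - a1 * y1 - a2 * y2) / a0;
            x2 = x1; x1 = xn; y2 = y1; y1 = yn;
            power += yn * yn;
        }
        return power / x.size();
    }
    // A lateralization index such as (P_C4 - P_C3) / (P_C4 + P_C3), after
    // subject-specific calibration, could then drive the cursor.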

Treadmills

Treadmills are an important tool to study the complex interaction between action and perception during walking. Within the EU-funded CyberWalk research project, we have set up two different treadmills.

Large Linear Treadmill

The linear treadmill setup consists of three main components: the treadmill itself, a four-camera Vicon optical tracking system, and a visualization system that displays 3D graphics in a head-mounted display. All three components are controlled by separate dedicated computers. The linear treadmill measures 6 x 2.4 m (L x W) and is capable of speeds of up to 40 km/h. It is controlled from a PC via a CAN bus connection. The treadmill can be controlled in either open-loop or closed-loop mode. For closed-loop control, the position of the user on the treadmill is measured with the Vicon system, which tracks infrared-reflecting markers on a helmet worn by the user. Based on the position of the helmet and its change over time, the treadmill speed is adjusted to keep the user on the treadmill (a minimal sketch of this control idea is given below). The Vicon data are also used to update the visual input, matching the head movements made by the user.
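A minimal sketch of this closed-loop idea, assuming a simple proportional controller on the tracked head position; the gain, reference position, and clamping are illustrative, and the real controller additionally filters the tracking data and limits acceleration for comfort.

    // Proportional control keeping the tracked helmet near a reference point.
    double beltSpeedCommand(double headPos /*m along belt*/, double refPos /*m*/) {
        const double kP   = 1.5;          // [1/s], illustrative gain
        const double vMax = 40.0 / 3.6;   // 40 km/h hardware limit, in m/s
        double v = kP * (headPos - refPos);  // walking forward speeds belt up
        if (v < 0.0)  v = 0.0;               // belt runs in one direction only
        if (v > vMax) v = vMax;
        return v;                            // commanded belt speed [m/s]
    }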


Omnidirectional Treadmill
Together with CyberWalk partners from the Technical University of Munich, the University “La Sapienza” of Rome, and the Swiss Federal Institute of Technology in Zürich, we have developed a revolutionary omnidirectional treadmill. It is the first omnidirectional treadmill in the world that allows near-natural walking through arbitrarily large virtual environments and that can be used for basic research. The basic mechanism consists of 25 belts (0.5 m wide) that are mounted on two big chains. The chains constitute one motion direction (up to 2 m/s), while the belts run in the orthogonal direction (up to 3 m/s). Together, they can generate motion in any direction. The chains move 7 tons, driven by four large 10 kW frequency-coupled engines. The treadmill measures 6.5 x 6.5 x 1.45 m (L x W x H), with an active walking surface of 4.0 x 4.0 m. The position of the person on the treadmill is measured with a Vicon tracking system and used to adjust the treadmill velocity as a function of walking speed (a two-dimensional sketch is given below). Together with a head-mounted display or panoramic projection screen (planned), the treadmill system allows users to walk through large virtual environments in a natural way.
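A two-dimensional version of the linear-treadmill sketch above, with the command split into the chain axis (up to 2 m/s) and the belt axis (up to 3 m/s); the gain and the simple per-axis clamping are illustrative assumptions.

    // 2D proportional control with per-axis speed limits.
    struct Vec2 { double x, y; };  // x: chain direction, y: belt direction

    Vec2 surfaceVelocityCommand(Vec2 headPos, Vec2 center) {
        const double kP = 1.0;                        // [1/s], illustrative
        const double maxChain = 2.0, maxBelt = 3.0;   // [m/s] from the spec
        Vec2 v{ kP * (headPos.x - center.x),
                kP * (headPos.y - center.y) };
        if (v.x >  maxChain) v.x =  maxChain;
        if (v.x < -maxChain) v.x = -maxChain;
        if (v.y >  maxBelt)  v.y =  maxBelt;
        if (v.y < -maxBelt)  v.y = -maxBelt;
        return v;
    }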
 
CyberCarpet
The ball-bearing CyberCarpet was the first prototype of an omnidirectional treadmill developed within the CyberWalk research project. It is based on a ball-array design and combines the different components envisioned in the CyberWalk project: markerless position tracking, optimal control, and omnidirectional capabilities. Upscaling to the desired size for real walking (at least 4 x 4 m) turned out to be difficult, however. Currently, this prototype is still used for demonstrations and as a testbed for different controllers and image-based tracking algorithms. The CyberCarpet device consists of a conveyor belt mounted on top of a turntable. The turntable is actuated by a servo motor via a toothed belt. Thus, the platform has two degrees of freedom, one linear and one rotational (a kinematic sketch is given below). Rotations and linear movements are transferred to the user by means of an array of balls. These balls are mounted within the top surface and are passively driven by the conveyor belt. The system was tested with a model car representing a person walking on the platform. GPS data from real walking were used to define movement trajectories. The position of the car was estimated by markerless tracking and benchmarked against triangulation by means of strings.
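A purely kinematic sketch of this two-degree-of-freedom actuation: the turntable aligns the belt axis with the desired motion direction, and the belt speed sets the magnitude. This ignores the platform dynamics and the fact that the turntable cannot reorient instantaneously, both of which the real (optimal) controller must handle; all names are illustrative.

    // Map a desired planar velocity onto turntable angle and belt speed.
    #include <cmath>

    struct CarpetCommand { double turntableAngle /*rad*/; double beltSpeed /*m/s*/; };

    CarpetCommand carpetCommand(double vx, double vy) {  // desired velocity
        return { std::atan2(vy, vx), std::hypot(vx, vy) };
    }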
 
Robotic Wheelchair
The robotic wheelchair (BlueBotics, Lausanne, Switzerland) can be used to transport people weighing up to 150 kg. The wheelchair can rotate about the center of the person’s head and can translate at speeds of up to 5 km/h. In addition, it has a built-in laser scanner that helps to determine the location of the wheelchair within a predefined space. The robotic wheelchair was modified with an ergonomic seat (Recaro, Kirchheim unter Teck, Germany) and can be driven either autonomously, using ANT® navigation, or manually with a standard wheelchair joystick. Experimenters therefore have wireless control over the behavior of the wheelchair, while in all cases participants have access to an emergency stop button near their left hand.
 


Last updated: Wednesday, 01.06.2016