Virtual Reality: Equipment

Virtual Reality equipment enables scientists to deliver sensory stimuli in a controlled virtual world and to manipulate or alter sensory input in ways that would not be possible in the real world.

The following equipment is mobile and can be used in any of the facilities and with any of our software platforms. In special cases where we have permanently integrated a piece of equipment into a specific lab, this is highlighted in the corresponding facility description.


Unmanned Aerial Vehicles (UAVs)

The group conducts multi-robot research using Unmanned Aerial Vehicles (UAVs: quadrotors and octorotors) as experimental testbeds for implementing and evaluating advanced strategies for multi-robot estimation and control, with a particular focus on decentralized solutions. The UAVs are based on a commercial product from the German company MikroKopter and have been modified in both hardware and control software to suit our needs. Absolute position/orientation measurements are obtained from an external tracking system (VICON), although several quantities are also estimated onboard using accelerometers, gyroscopes, onboard cameras, and RGB-D sensors. Some UAVs are retrofitted with Hardkernel Odroid XU or Zotac Jetson TK1 boards, which run a complete Linux operating system and provide some graphics acceleration, giving the robots more computational power and more autonomy.
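As an illustration of the decentralized flavor of these strategies, the following minimal sketch (illustrative names and gains, not the group's actual controller) shows a standard displacement-based formation consensus in which each UAV updates its velocity using only the relative positions of its neighbors:

    import numpy as np

    def formation_step(positions, offsets, neighbors, gain=0.5, dt=0.02):
        """One decentralized consensus step toward a desired formation.

        positions : (N, 3) current UAV positions
        offsets   : (N, 3) desired positions relative to the formation center
        neighbors : dict mapping UAV index -> list of neighbor indices
        Each UAV uses only relative measurements to its neighbors, so no
        central coordinator is required.
        """
        velocities = np.zeros_like(positions)
        for i, nbrs in neighbors.items():
            for j in nbrs:
                # Drive the *relative* position toward the desired offset.
                error = (positions[j] - positions[i]) - (offsets[j] - offsets[i])
                velocities[i] += gain * error
        return positions + dt * velocities

    # Example: three quadrotors converging to a 2 m triangle at 1 m altitude.
    pos = np.random.randn(3, 3)
    off = np.array([[0.0, 0.0, 1.0], [2.0, 0.0, 1.0], [1.0, 1.7, 1.0]])
    nbr = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    for _ in range(200):
        pos = formation_step(pos, off, nbr)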

The Inspire 1 (DJI, Model T600) is a ready-to-fly commercial UAV with a wide-angle camera (12 MP RAW images or 1080p60/2160p30 video) mounted on a Zenmuse X3 3-axis gimbal that provides excellent stabilization. Thanks to its rich set of sensors (GPS, IMU, sonar, vision-based), it can be easily controlled after minimal training and can even fly on autopilot. In combination with a dual-controller setup, it is an ideal vehicle for easy aerial photography. Initial tests using photogrammetry for dense 3D reconstruction of the MPI campus for VR and robotics purposes have been conducted.


Haptic Devices

To provide operators with force feedback during task execution, we use two haptic force-feedback devices from Force Dimension, the Omega.3 and the Omega.6. The Omega.3 consists of three motors and three position sensors. Depending on the Cartesian position of the end effector, a programmable Cartesian force can be applied to the user's hand, enabling force feedback. The Omega.6 differs from the Omega.3 in that it additionally measures (but does not actuate) three rotational degrees of freedom of the end effector. Similar capabilities to the Omega.3 are provided by a more portable device from Sensable, the Phantom Omni. Given its reduced size, the Phantom Omni can be used in portable haptic teleoperation setups.
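The basic render loop of such an impedance-type device can be sketched as follows: read the Cartesian end-effector position, evaluate a programmable force law (here a virtual spring-damper wall), and command the resulting force to the motors. The device handle and methods in the usage comment are placeholders, not actual Force Dimension SDK calls:

    import numpy as np

    K_WALL = 2000.0   # virtual wall stiffness [N/m]
    B_WALL = 5.0      # damping [N*s/m]
    WALL_Z = 0.0      # wall plane at z = 0

    def wall_force(pos, vel):
        """Spring-damper force law for a horizontal virtual wall.

        Below the wall plane the user feels a restoring force proportional
        to penetration depth; above it the device renders free space.
        """
        force = np.zeros(3)
        penetration = WALL_Z - pos[2]
        if penetration > 0.0:
            force[2] = K_WALL * penetration - B_WALL * vel[2]
        return force

    # Hypothetical 1 kHz haptic loop; read_position/set_force stand in
    # for the vendor SDK calls and are not real Force Dimension functions:
    # while device.ok():
    #     pos, vel = device.read_position(), device.read_velocity()
    #     device.set_force(wall_force(pos, vel))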


Head-Mounted Displays

We currently use the following types of head-mounted displays (HMDs). They differ in resolution, weight, and field of view (FOV):

3x Oculus Rift CV1
  • FOV: up to 110° (circular view)
  • Resolution: 1080x1200 per eye, 90Hz
  • Lens IPD adjustment
5x HTC Vive
  • FOV: up to 110° (circular view)
  • Resolution: 1080x1200 per eye, 90Hz
  • Lens IPD adjustment
4x GearVR
  • FOV: up to 100° (circular view)
  • Resolution: 1280x1440 per eye, 60Hz
1x HTC Vive with integrated SMI eyetracking
  • 250Hz binocular eyetracking
1x Oculus Rift DK2 with integrated SMI eyetracking
  • 60Hz binocular eyetracking
6x Oculus Rift DK2
  • FOV: 100° (nominal, circular view)
  • Resolution: 960x1080 per eye
2x HoloLens Mixed Reality Display
  • FOV: 30° (horizontal)
  • Resolution: 1280x720 per eye, 60Hz


Oculus Rift CV1 VR HMD units are used for walking experiments in the TrackingLab. We have developed a mobile experiment solution which uses VR backpack PCs (MSI) to generate the graphics for the headset with a fast desktop GPU (NVIDIA GTX 1070). Head pose is calculated by fusing the onboard IMU of the CV1 with external reference data from the Vicon system (position and reference orientation) to provide smooth, drift-free, low-latency head orientation. The experimenter is able to see the same view as the subject without impacting rendering performance on the backpack PC; this is achieved by using hardware video encoding on the VR backpack and decoding the stream on Steam Link hardware (Valve).
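A minimal sketch of this kind of sensor fusion (a standard complementary filter; the actual pipeline is more elaborate, and all names and gains here are illustrative): the fast but drift-prone gyroscope is integrated at the IMU rate, and the estimate is continuously nudged toward the slower, drift-free Vicon reference:

    import numpy as np

    def quat_mul(q, r):
        """Hamilton product of quaternions in [w, x, y, z] order."""
        w0, x0, y0, z0 = q
        w1, x1, y1, z1 = r
        return np.array([
            w0*w1 - x0*x1 - y0*y1 - z0*z1,
            w0*x1 + x0*w1 + y0*z1 - z0*y1,
            w0*y1 - x0*z1 + y0*w1 + z0*x1,
            w0*z1 + x0*y1 - y0*x1 + z0*w1,
        ])

    def fuse_head_pose(q_est, gyro, q_vicon, dt, alpha=0.02):
        """One complementary-filter step for head orientation.

        q_est   : current orientation estimate (unit quaternion)
        gyro    : angular rate from the HMD IMU [rad/s]
        q_vicon : latest drift-free reference from the tracking system
        alpha   : how strongly to pull toward the external reference
        """
        # Fast path: integrate the gyroscope (low latency, drifts slowly).
        dq = 0.5 * dt * quat_mul(q_est, np.array([0.0, *gyro]))
        q_est = q_est + dq
        # Slow path: nudge toward the Vicon reference to cancel drift.
        if np.dot(q_est, q_vicon) < 0.0:   # pick the nearer hemisphere
            q_vicon = -q_vicon
        q_est = (1.0 - alpha) * q_est + alpha * q_vicon
        return q_est / np.linalg.norm(q_est)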

The HTC Vive VR HMD allows room-scale tracking of areas up to 4.5 m x 4.5 m using its own tracking system, consisting of two laser-emitting base stations, which can turn an unused office or lab space into a small VR lab. It comes with tracked hand controllers, making it a good choice for experiments where the user's body position and hand movements need to be tracked. Additional body parts can be tracked with separate Vive Tracker puck units.

Samsung GearVR HMDs provide a completely tetherless VR solution in which a Samsung smartphone provides both the high-resolution display and the graphics rendering. We use the same sensor fusion approach as in our CV1 setup to use these devices for lightweight, large-room-scale VR in the TrackingLab.

For HMD experiments where eyetracking information is needed, we can use the SMI HTC Vive or SMI DK2 units, which offer integrated eyetracking and a plugin for integration into existing Unity experiments.

Microsoft HoloLens is a mobile, tetherless Mixed Reality HMD which offers inside-out 6DOF head tracking and the ability to overlay computer-generated 3D content using a see-through display. The inside-out tracking allows it to be used in many locations without an external tracking system. Due to its limited FOV, however, it is not suitable for all visual stimuli.


Alternative Reality Headset

The Alternative Reality Headset was designed to present participants with visual roll-tilt stimuli with the highest possible degree of ecological validity. To realize this, we mounted a stereo camera on a head-mounted display (HMD) via a servo motor. The axis of rotation is aligned with the naso-occipital axis, and the image captured by the cameras is fed through to the HMD screens.

The Alternative Reality Headset consists of an HTC Vive VR headset (HTC, New Taipei City, Taiwan) with a resolution of 1080x1200 per eye and a refresh rate of 90 Hz. An OVRVision Pro stereo camera (Wizapply, Osaka, Japan) is attached to the front of the headset via a Dynamixel AX12A servo (Robotis, Lake Forest, California, United States) that allows rotating the stereo camera by up to ±150°. The camera resolution and framerate can be adjusted. Positional information provided by the HTC Vive Lighthouse tracking system can be used to correct the camera angle for head rotations.
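Conceptually, the correction works as follows: the head roll reported by the Lighthouse tracking is subtracted from the commanded visual tilt, so the stimulus angle stays fixed in the world frame regardless of head movements. This is a sketch; the function below stands in for the actual Dynamixel command and is not a Robotis API call:

    SERVO_LIMIT_DEG = 150.0  # AX12A rotation range used here: +/-150 degrees

    def camera_command(stimulus_roll_deg, head_roll_deg):
        """Servo angle that realizes a world-fixed visual roll tilt.

        stimulus_roll_deg : desired roll of the camera image in the world frame
        head_roll_deg     : current head roll from the Lighthouse tracking
        The servo rotates the stereo camera about the naso-occipital axis,
        so compensating for head roll keeps the tilt constant in the world.
        """
        cmd = stimulus_roll_deg - head_roll_deg
        return max(-SERVO_LIMIT_DEG, min(SERVO_LIMIT_DEG, cmd))

    # e.g. a 20 degree world-fixed tilt while the head is rolled 5 degrees
    # to the left commands the servo to +25 degrees:
    assert camera_command(20.0, -5.0) == 25.0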


Treadmills

Locomotion interfaces (treadmills) are an important tool to study the complex interaction between action and perception during walking. This is possible on two different locomotion interfaces.


Large linear treadmill

The linear treadmill setup consists of three main components: the treadmill itself (Bonte Technology, Netherlands), a four-camera Vicon® MX-13 optical tracking system, and a visualization system that displays 3D graphics in a head-mounted display (see Figure). All three components are controlled by separate dedicated computers. The linear treadmill measures 6 x 2.4 m (L x W) and is capable of speeds up to 40 km/h. It is controlled from a PC via a CANbus connection and can run in either open-loop or closed-loop mode. For closed-loop control, the position of the user on the treadmill is measured with the Vicon® system, which tracks infrared-reflecting markers on a helmet worn by the user. Based on the position of the helmet and its change over time, the speed of the treadmill is adjusted to keep the user on the treadmill (see the sketch below). The Vicon® data are also used to update the visual input, matching the head movements made by the user. The large linear treadmill is located adjacent to the multi-agent lab.
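The closed-loop mode can be sketched as a simple position controller (illustrative gains and names; the real controller runs on the dedicated treadmill PC): the belt speed is adjusted so that the tracked helmet position is regulated back toward the middle of the belt:

    KP = 0.8            # proportional gain [1/s]: m of offset -> m/s of speed
    KD = 1.0            # velocity gain: roughly matches the walking speed
    V_MAX = 40.0 / 3.6  # treadmill limit, 40 km/h in m/s

    def belt_speed(v_belt, x_helmet, v_helmet, x_center=0.0):
        """New belt speed given the helmet pose from the Vicon system.

        v_belt   : current belt speed [m/s]
        x_helmet : helmet position along the belt, forward positive [m]
        v_helmet : helmet velocity in the lab frame [m/s]
        If the user drifts toward the front of the belt, or is still
        moving forward in the lab frame, the belt speeds up to recenter
        them; if they fall behind, it slows down.
        """
        v = v_belt + KP * (x_helmet - x_center) + KD * v_helmet
        return max(0.0, min(V_MAX, v))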


Omnidirectional treadmill

The omnidirectional treadmill allows near-natural walking through arbitrarily large virtual environments and can be used for basic research. The basic mechanism consists of 25 belts (0.5 m wide) that are mounted on two big chains. The chains provide one motion direction (up to 2 m/s), while the belts run in the orthogonal direction (up to 3 m/s); together, they can generate motion in any direction. The chains move 7 tons, driven by four large 10 kW frequency-coupled motors. The treadmill measures 6.5 x 6.5 x 1.45 m (L x W x H), with an active walking surface of 4.0 x 4.0 m. The position of the person on the treadmill is measured with a Vicon® tracking system and used to adjust the treadmill velocity as a function of walking speed. Together with a head-mounted display, the treadmill system allows users to walk through infinitely large virtual environments in a natural way. The omnidirectional treadmill is located under a false floor in the large tracking lab.
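Because the two motion axes have different speed limits, the commanded surface velocity can be thought of as the walker's velocity decomposed onto the chain and belt directions and clamped per axis, roughly as in this sketch (limits taken from the description above; the scaling strategy is illustrative):

    CHAIN_MAX = 2.0  # m/s, chain (x) direction
    BELT_MAX = 3.0   # m/s, belt (y) direction

    def surface_velocity(vx, vy):
        """Clamp a desired treadmill surface velocity per axis.

        The chains provide motion along x (up to 2 m/s) and the belts
        along the orthogonal y (up to 3 m/s); any walking direction is a
        combination of the two. Scaling both components by a common
        factor preserves the direction of motion if one axis saturates.
        """
        scale = 1.0
        if abs(vx) > CHAIN_MAX:
            scale = min(scale, CHAIN_MAX / abs(vx))
        if abs(vy) > BELT_MAX:
            scale = min(scale, BELT_MAX / abs(vy))
        return vx * scale, vy * scale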


Body Motion Capture and Animation Technology

For full-body motion capture we use three different setups. The first consists of a lightweight lycra suit with attached reflective markers, which are captured and processed with Vicon Blade software. For post-processing and animation of body parts or the full body, we use Autodesk 3ds Max, Autodesk Maya, and Autodesk MotionBuilder. The second and third setups are used for real-time motion capture and animation. The second setup uses up to two Xsens MVN suits, each consisting of 17 MTx inertial measurement units. The MVN Biomech software allows for additional synchronized 60 fps video recordings with an external camera, and offers integrated data visualization of joint angles and accelerations as well as direct export and streaming into game engines such as Unity. The third setup uses the HTC Vive Lighthouse tracking system to track hand controllers and additional Vive Tracker units, which can be used to animate the body in real time using inverse kinematics software (see the sketch below).
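The inverse-kinematics step in the third setup can be illustrated with the textbook two-bone case, e.g. recovering shoulder and elbow angles from a tracked hand position via the law of cosines. This is a generic sketch, not the specific IK software we use:

    import math

    def two_bone_ik(target_x, target_y, l1=0.30, l2=0.28):
        """Planar two-bone IK, e.g. shoulder-elbow-wrist in the arm plane.

        target_x, target_y : tracked hand position relative to the shoulder
        l1, l2             : upper-arm and forearm lengths [m]
        Returns (shoulder_angle, elbow_angle) in radians; elbow = 0 means
        a fully extended arm.
        """
        d = math.hypot(target_x, target_y)
        d = min(d, l1 + l2 - 1e-6)  # clamp unreachable targets to full extension
        # Law of cosines gives the elbow; the shoulder then aims at the target.
        cos_elbow = (d*d - l1*l1 - l2*l2) / (2.0 * l1 * l2)
        elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
        shoulder = math.atan2(target_y, target_x) - math.atan2(
            l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
        return shoulder, elbow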

Avatar models used for experiments are part of the Rocketbox Studios GmbH Complete Characters library. Through the work of the independent research group Space and Body Perception (Betty Mohler), in collaboration with the Max Planck Institute for Intelligent Systems (Michael Black), we are also able to create personalized self-avatars for our experiments. We currently do so for patient populations in which we investigate distortions in body image.


Research Management Tool

We use an in-house online research management tool called banto for managing (e.g. booking) equipment and recruiting participants. Banto significantly facilitates the recruitment of participants and the booking of equipment for experiment appointments. In addition, the software provides several useful features, such as granting credits for participation, sending appointment reminders, and excluding participants from the recruitment process. Banto is written in PHP and requires a MySQL database (accessed via mysqli) and a web server. It is freely available to the research community (https://banto.co).
