Real-time Rendering Technologies for Virtual Reality Experiments

In the past, many of our Virtual Reality (VR) experiments were developed using real-time rendering technologies such as the OpenGL C API or the open-source OGRE C++ rendering engine. Our in-house developed C++ libraries (veLib, XdevL) helped with VR hardware integration, such as head tracking and stereoscopic output for head-mounted displays. This approach generally required solid C/C++ programming experience to write a VR experiment and offered only limited visualization possibilities.
To facilitate the development of new, graphically rich VR experiments across our different VR hardware setups, we now also support several commercial game engine technologies. These engines hide the technical complexities of creating a real-time VR experiment and provide state-of-the-art visual effects such as real-time lighting and shadows, as well as the scalability to render crowds of animated avatars in real time. Furthermore, they make it easier to transition an experiment between different VR hardware setups, which is often required for follow-up studies.
We currently support and use the following game engines for our VR experiments:


3DVIA Virtools

For us, the main benefit of 3DVIA Virtools is that scientists with limited or no programming experience can create an experiment using the included visual editor. Virtools has built-in support for several VR devices, such as the Vicon tracking system, as well as cluster rendering, which is required for our Panolab setup. We enhanced the Virtools functionality with the following in-house developed components to support our experiments:
  • Xsens MVN inertial motion capture suit: We can receive the motion data and map it onto an avatar in real time. This component uses our in-house XdevL library for UDP networking.
  • Generic UDP receiver: Used to interface Virtools with our simulation environments, such as the F1 simulator on the Cyber Motion Simulator. See the video (F1-Simulator) in the video section for a demonstration.
  • Real-time warping: Used to compensate for lens distortions in our head-mounted displays.
  • Advanced sound engine: Uses FMOD Ex for advanced sound effects.
We support Virtools in all of our VR setups.
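The UDP receiver components above all follow the same pattern: a datagram arrives and its payload is decoded into a pose that drives the scene. As a minimal sketch, assuming a hypothetical packet layout of seven little-endian floats (position plus quaternion) — not the actual XdevL or Virtools wire format — the decoding step could look like this:

```cpp
#include <cstring>
#include <cstdint>
#include <cstddef>
#include <array>

// Hypothetical pose packet: 3 position floats followed by 4 quaternion
// floats, packed little-endian into one UDP datagram payload (28 bytes).
// The real XdevL/Virtools protocol is not documented here.
struct Pose {
    std::array<float, 3> position;    // x, y, z
    std::array<float, 4> orientation; // quaternion x, y, z, w
};

// Decode a received datagram into a Pose. Returns false if the
// payload is too short to contain a full pose.
bool parsePosePacket(const uint8_t* data, size_t length, Pose& out) {
    constexpr size_t kPacketSize = 7 * sizeof(float);
    if (length < kPacketSize) return false;
    std::memcpy(out.position.data(), data, 3 * sizeof(float));
    std::memcpy(out.orientation.data(), data + 3 * sizeof(float),
                4 * sizeof(float));
    return true;
}
```

Keeping the decoder separate from the socket code in this way lets the same parser be reused across engines, with only the thin networking layer differing per setup.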


Unity

Unity experiments are developed in one of its three integrated scripting languages (JavaScript, C# or Boo, a Python-like language). Some basic programming experience is recommended; our users usually become quite productive after a short learning period. Another main advantage of Unity is the possibility to create experiments for mobile devices running Apple iOS or Android. Created experiments can also be shared with collaborators without the need for any special software license.
In contrast to Virtools, Unity has no integrated support for VR devices. For this reason we currently use a commercial middleware solution (MiddleVR by i’m in VR) which provides most of the necessary VR components, including VRPN-based Vicon tracking integration, asymmetric frustum rendering, multiple stereo viewports and cluster rendering.
We also enhanced the Unity functionality with the following in-house developed components to support our experiments:

  • Xsens MVN inertial motion capture suit: We can receive the motion data and map it onto an avatar in real time. We also support direct playback of the .mvnx file format, and the MVN avatar data can be recalibrated to a defined start position and orientation for consistent presentation.
  • Generic UDP receiver: Used to interface Unity with our simulation environments, such as the Cyber Motion Simulator.
  • Adaptive staircase: Implementation of an adaptive staircase method (Levitt, 1971).
We support the Unity game engine in all of our VR setups.
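The adaptive staircase component mentioned above adjusts stimulus intensity from trial to trial based on the participant's responses. As a minimal sketch of one of Levitt's (1971) transformed up-down rules — the 1-up/2-down variant, which converges on the ~70.7%-correct level — with illustrative names rather than the in-house component's actual interface:

```cpp
// Minimal 1-up/2-down transformed staircase (Levitt, 1971). The level
// steps up after every incorrect response and down after two consecutive
// correct responses; runs typically terminate after a fixed number of
// reversals. Names and units here are illustrative assumptions.
class Staircase {
public:
    Staircase(double startLevel, double stepSize)
        : level_(startLevel), step_(stepSize),
          correctRun_(0), reversals_(0), lastDirection_(0) {}

    double level() const { return level_; }
    int reversals() const { return reversals_; }

    // Feed one trial outcome into the staircase.
    void update(bool correct) {
        if (correct) {
            if (++correctRun_ == 2) { correctRun_ = 0; move(-1); }
        } else {
            correctRun_ = 0;
            move(+1);
        }
    }

private:
    void move(int direction) {
        // A change of direction counts as one reversal.
        if (lastDirection_ != 0 && direction != lastDirection_) ++reversals_;
        lastDirection_ = direction;
        level_ += direction * step_;
    }

    double level_, step_;
    int correctRun_, reversals_, lastDirection_;
};
```

The experiment typically averages the levels at the last several reversals to estimate the threshold.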

Unreal Engine

We are also investigating the use of the Unreal Engine, which is part of the free Unreal Development Kit (UDK). This engine provides very high visual fidelity and scalability, but is also much harder to learn than Virtools or Unity. We enhanced the UDK functionality with the following in-house developed components:
  • VRPN receiver: Used to receive head position/orientation from the Vicon tracking system.
  • Generic UDP receiver: Used to interface the UDK with our simulation environments.
  • Custom stereo-rendering solution: Support for SX60 and SX111 HMDs.
Previous work includes the use of the Unreal Engine for helicopter flight control visualization; see the demonstration movie (HeliDemo_UDK) in the video section.

We currently support the Unreal Engine only in a limited number of setups since, like Unity, it does not directly support VR hardware.
Last updated: Friday, 14.10.2016