Looking for Participants

The MPI for Biological Cybernetics is looking for participants for some of its research experiments.
 

Most recent Publications

Spetter MS, Malekshahi R, Birbaumer N, Lührs M, van der Veer AH, Scheffler K, Spuckti S, Preissl H, Veit R and Hallschmid M (May 2017) Volitional regulation of brain responses to food stimuli in overweight and obese subjects: a real-time fMRI feedback study. Appetite 112: 188–195.
Nestmeyer T, Robuffo Giordano P, Bülthoff HH and Franchi A (April 2017) Decentralized simultaneous multi-target exploration using a connected network of multiple robots. Autonomous Robots 41(4): 989–1011.
Song H, Ruan D, Liu W, Stenger VA, Pohmann R, Fernandez Seara MA, Nair T, Jung S, Luo J, Motai Y, Ma J, Hazle JD and Gach HM (April 2017) Respiratory motion prediction and prospective correction for free-breathing arterial spin labeled perfusion MRI of the kidneys. Medical Physics 44(3): 962–973.
Chang P, Nassirpour S and Henning A (March 2017) Modeling real shim fields for very high degree (and order) B0 shimming of the human brain at 9.4 T. Magnetic Resonance in Medicine, Epub ahead of print.
Hong A, Lee DG, Bülthoff HH and Son HI (March 2017) Multimodal feedback for teleoperation of multiple mobile robots in an outdoor environment. Journal on Multimodal User Interfaces 11(1): 67–80.

 

Virtual Reality Facilities

Photo: Gehirn&Geist/Manfred Zentsch
In the Department Human Perception, Cognition and Action, we study human perception with the help of virtual reality (VR). This enables us to conduct experiments in controlled yet natural surroundings. To this end, we use specialized hardware and experimental setups built to our specifications, together with the corresponding software, such as program libraries and databases.
Taking into account the most recent developments in the field of VR and the opportunities they offer, the Cyberneum was built in 2004–2005. The research focuses on the interaction of the different senses, the influence of the spatial environment on behavior, and the interplay of perception and action. Two separate halls, the Tracking Lab and the Robo Lab, each measuring 15 × 12 m, form the main area of the research building. The Tracking Lab hosts experiments on spatial perception and navigation, in which participants can move around freely in virtual worlds; the virtual surroundings are presented through head-mounted displays (HMDs). The Robo Lab houses the world's first motion simulator based on a standard industrial robot arm, modified for perception experiments. With far more maneuvering room than conventional simulators, it allows a more detailed investigation of the influence of our sense of equilibrium on the perception of self-motion.

Why do we conduct our research in virtual reality?
Research in virtual reality (VR) makes it easier to maintain controlled and reproducible test surroundings. Real environments look quite different depending on the weather or the time of day; in VR, all of these conditions can be held constant across an experiment, so every participant sees precisely the same space or scene. At the same time, these conditions can be modified deliberately where the experiment requires it. VR even allows experiments that would be impossible in the real world, or feasible only with great effort at a single location.
 
Here you can find some videos of our research in VR:
http://www.youtube.com/user/MPIVideosProject

CableRobot-Simulator

The novel, in-house developed CableRobot Simulator is capable of generating unique motion trajectories using, as its name suggests, a cable suspension system. The prototype will be used in perception and cognition research.

The robot is suspended on eight cables, each driven by a powerful motor; together, these motors deliver a remarkable 473 PS (about 348 kW). The tensioned simulator cabin is constructed from carbon-fiber rods made especially for this purpose. The motors can be controlled to either launch the cabin on a wild roller-coaster ride through the entire 5 × 8 × 5 m workspace or perform movements so small that the passenger cannot perceive them.
Its large workspace and dynamic capabilities make the simulator suitable for a wide spectrum of VR applications, from driving and flight simulation to the investigation of basic perceptual processes in humans.
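To make the geometry concrete, the sketch below computes the eight cable lengths for a given cabin position, with winch anchors placed at the corners of the 5 × 8 × 5 m workspace. All coordinates and attachment offsets are illustrative assumptions, not the real layout of the simulator.

```python
import numpy as np

# Minimal sketch of cable-robot geometry: each cable length is the distance
# from its winch anchor point to the corresponding attachment point on the
# cabin. Anchor positions below are placeholders at the workspace corners.
anchors = np.array([
    [x, y, z]
    for x in (0.0, 5.0) for y in (0.0, 8.0) for z in (0.0, 5.0)
], dtype=float)

def cable_lengths(cabin_pos, attach_offsets):
    """Cable lengths for a cabin at cabin_pos, with attachment points given
    as offsets from the cabin center (both in the hall frame)."""
    attach = cabin_pos + attach_offsets
    return np.linalg.norm(attach - anchors, axis=1)

offsets = np.tile([[0.0, 0.0, 0.4], [0.0, 0.0, -0.4]], (4, 1))  # assumed
print(cable_lengths(np.array([2.5, 4.0, 2.5]), offsets))
```

In the real machine the winch controllers would additionally have to keep every cable under positive tension, since cables can only pull, never push.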
http://www.cablerobotsimulator.org/
 
 

CyberMotion Simulator

MPI CyberMotion Simulator with closed and open cabin
Sustained accelerations are usually simulated with motion-cueing algorithms that involve washout filters. With these algorithms, a fraction of the gravity vector generates the sensation of acceleration through an unperceived backward tilt of the cabin. A different way of simulating sustained accelerations exploits the centrifugal force: if a subject is rotated continuously, the semicircular canal system adapts and the centrifugal force generates the perception of an ongoing acceleration. Both approaches require the subject to be seated in a closed cabin, because visual cues would otherwise destroy the illusion.
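As a rough illustration of the tilt-coordination idea, the following sketch (not the simulator's actual algorithm) tilts the cabin towards the pitch angle at which the gravity component equals the desired sustained acceleration, while rate-limiting the rotation so the tilt itself stays below an assumed vestibular perception threshold.

```python
import math

G = 9.81                                # gravitational acceleration [m/s^2]
TILT_RATE_MAX = math.radians(3.0)       # assumed sub-threshold tilt rate [rad/s]

def tilt_coordination_step(a_target, theta, dt):
    """One step of tilt coordination: move the cabin pitch theta [rad]
    towards the angle at which gravity alone produces a_target [m/s^2],
    without exceeding the (assumed) perception-threshold tilt rate."""
    theta_target = math.asin(max(-1.0, min(1.0, a_target / G)))
    step = TILT_RATE_MAX * dt
    dtheta = max(-step, min(step, theta_target - theta))
    return theta + dtheta

# A sustained 2 m/s^2 acceleration needs a tilt of only about 11.8 degrees:
print(math.degrees(math.asin(2.0 / G)))
```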

Together with a local company (BEC, Reutlingen), the MPI developed a closed cabin for the MPI CyberMotion Simulator. The cabin is equipped with a stereo projection system and mounting options for force-feedback haptic devices, such as the Sensodrive steering wheel and the Wittenstein controls used for helicopter and flight simulation.

To achieve continuous rotation around the base axis, the robot arm was modified by the manufacturer (KUKA Roboter GmbH, Germany) with a different transmission and further mechanical changes. Motor power and electrical control signals are transmitted to the robot through slip rings: an outer slip ring for the power lines and an inner slip ring for high-frequency signals. In the standard configuration, the cabin is attached to the robot flange from behind, so the subject faces outwards; rotating the robot around its base axis then simulates a constant deceleration. The robot's six axes alone do not allow the subject to face inwards, towards the robot's base, as would be needed to simulate a constant acceleration. The cabin was therefore equipped with an actuated seventh axis: a C-shaped flange, along which the cabin axis can slide, makes it possible to steer the robot into a position in which the cabin is attached to the robot flange from below. In this configuration, turning the last robot axis places the subject towards the center of rotation, which in turn makes it possible to simulate a constant acceleration.

The robot retains its six-axis controller, while the seventh axis has a separate, custom-developed controller. To operate the full seven-axis system synchronously, a combined control system was developed; it also monitors and supervises all safety devices and offers both manual and automated control of the MPI CyberMotion Simulator.
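The rotation rate needed for the centrifugal approach follows directly from the centripetal acceleration a = ω²r. The sketch below simply inverts this relation; the cabin radius used is an illustrative assumption, not the simulator's actual arm length.

```python
import math

def rotation_rate_for_acceleration(a, r):
    """Angular velocity [rad/s] at which the centripetal acceleration
    a = omega^2 * r is felt as a sustained linear acceleration."""
    return math.sqrt(a / r)

# Illustrative numbers (the actual cabin radius may differ):
r = 2.0                 # assumed distance of the subject from the base axis [m]
a = 0.5 * 9.81          # half of gravity as the target acceleration [m/s^2]
omega = rotation_rate_for_acceleration(a, r)
print(f"{omega:.2f} rad/s = {math.degrees(omega):.1f} deg/s")
```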
http://www.cyberneum.de/facilities-research/cms.html
 
 

Stewart Motion Platform (MotionLab)

Reconstruction of the MotionLab
Photo: GEHIRN&GEIST/Manfred Zentsch
In the MotionLab we study the interplay between the visual, auditory, vestibular, and neuromuscular systems. The centerpiece of the MotionLab is a hexapod Stewart platform (Cuesim) with six degrees of freedom. Mounted on the platform is a cabin with two interchangeable screens, one flat and one curved, both covering a field of view of 86° × 63°. The projector has a resolution of 1400 × 1050 at a refresh rate of 60 Hz. Subwoofers beneath the seat and foot plate can be used to simulate high-frequency vibrations in driving and flight simulations and to mask the vibrations caused by the platform's electric motors. The MotionLab was designed as a distributed system driven by multiple computers. Software control is based on an in-house development (xDevL) that can be used with both C++ and Virtools programs. The cabin is equipped with five OptiTrack cameras (NaturalPoint, Inc.) that allow tracking and capturing of the operator's body or limb motion.
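For illustration, the pose of a generic hexapod such as this one maps to actuator leg lengths through simple inverse kinematics: each leg length is the distance between a base joint and the rotated-and-translated platform joint. The sketch below is a textbook formulation with made-up joint locations, not Cuesim's actual geometry.

```python
import numpy as np

def stewart_leg_lengths(base_pts, platform_pts, position, rpy):
    """Inverse kinematics of a generic hexapod: leg lengths for a given
    platform pose. base_pts/platform_pts are 6x3 arrays of joint locations
    in the base and platform frames; position is the platform origin in the
    base frame; rpy = (roll, pitch, yaw) in radians."""
    cr, sr = np.cos(rpy[0]), np.sin(rpy[0])
    cp, sp = np.cos(rpy[1]), np.sin(rpy[1])
    cy, sy = np.cos(rpy[2]), np.sin(rpy[2])
    # Z-Y-X rotation matrix built from roll, pitch, yaw
    R = np.array([
        [cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
        [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
        [-sp,   cp*sr,            cp*cr],
    ])
    # Each leg connects a base joint to the transformed platform joint
    legs = position + platform_pts @ R.T - base_pts
    return np.linalg.norm(legs, axis=1)

# Made-up geometry: six joints on unit circles, platform 0.5 m up, small roll
base = np.array([[np.cos(a), np.sin(a), 0.0]
                 for a in np.linspace(0, 2*np.pi, 6, endpoint=False)])
print(stewart_leg_lengths(base, 0.5 * base,
                          np.array([0.0, 0.0, 0.5]), (0.05, 0.0, 0.0)))
```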
 
The characteristics of the Stewart platform have been measured objectively with a standardized approach. The dynamic response of the platform turned out to be determined by the platform filters implemented by the manufacturer, the system time delay, and the noise characteristics of the actuators. These characteristics were modeled and simulated on the SIMONA Research Simulator at Delft University of Technology. Experiments on their influence on human control behavior showed that the limitations of the platform filters cause humans to rely predominantly on visual cues in a closed-loop target-following and disturbance-rejection control task.
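Such identified characteristics are often summarized, for simulation purposes, as a low-pass filter plus a pure time delay. The sketch below shows this kind of surrogate model; the natural frequency, damping, and delay values are placeholders rather than the measured parameters of the platform.

```python
import numpy as np
from scipy import signal

# Illustrative second-order platform-filter model plus a pure time delay.
# wn, zeta, and delay are placeholder values, not identified parameters.
wn, zeta, delay = 12.0, 0.8, 0.030     # rad/s, dimensionless, s
platform = signal.TransferFunction([wn**2], [1.0, 2*zeta*wn, wn**2])

t = np.linspace(0.0, 2.0, 1000)
t_out, y = signal.step(platform, T=t)              # step response
y_delayed = np.interp(t - delay, t_out, y, left=0.0)  # apply the delay
print(f"response at t = 0.5 s: {np.interp(0.5, t, y_delayed):.3f}")
```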
 
Furthermore, the MotionLab was used to study the effects of motion disturbances on the use of touch interfaces. When a touch interface is used in a moving vehicle, the vehicle motion may induce involuntary limb motions that decrease manual control precision. This interference was quantified experimentally: participants performed a simple reach-and-touch task on an iPad touch interface (Apple Inc.) while being subjected to lateral and vertical motion disturbances. During the experiment, the motion disturbance, the touch location, and the trajectory of the hand were recorded. The goal of the study was to gain insight into the relationship between motion disturbances and touch errors.
 
The MotionLab is currently being upgraded with a state-of-the-art hexapod (Bosch Rexroth BV). Among other improvements, this hexapod features superior actuator technology, resulting in higher platform stiffness and smoother motion.

Fixed-base flight simulator (HeliLab)

Multi-Panel Display
Schematic Overview of the System
The HeliLab is a fixed-base flight simulator that affords a large field of view (105° × 100°). It is equipped to measure both explicit and implicit behavioral responses: control-stick inputs on the one hand, and eye tracking and physiological measures on the other. This allows us to study the relationship between a pilot's actions and cognitive workload during flight maneuvers.
 
The core of the system is an open-source flight simulator (FlightGear, www.flightgear.org) that accepts control inputs, which are processed by a dedicated aircraft model to compute the world position and orientation of the simulated aircraft. These values are then used to render the corresponding view of the world scene as seen from the cockpit, with a computing cluster driving 10 wide-screen monitors.
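FlightGear exposes its internal property tree over a simple network interface (for example when started with --telnet=PORT), which is one way external experiment software can inject control inputs. The sketch below is a minimal illustration of that mechanism; the port number and usage are assumptions about a particular setup, not a description of the HeliLab's actual interface.

```python
import socket

def set_property(path, value, host="localhost", port=5401):
    """Set one FlightGear property via the built-in property server
    (assumes FlightGear was started with --telnet=5401)."""
    with socket.create_connection((host, port)) as s:
        s.sendall(f"set {path} {value}\r\n".encode())

# Standard FlightGear control properties, normalized to [-1, 1]:
set_property("/controls/flight/elevator", -0.1)   # small pitch input
set_property("/controls/flight/aileron", 0.05)    # small roll input
```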

Our system is equipped to record implicit behavioral responses. A remote eye-tracking system (two stereo heads, 60 Hz; faceLAB, Seeing Machines, USA) monitors the pilot's line of sight in the world scene as well as gaze on the head-down instrument panel. Physiological measurements of the pilot are recorded in tandem using a 16-channel active-electrode system (g.tec medical engineering GmbH, Austria). This system can be used to monitor the pilot's galvanic skin response, heart-rate variability, and electroencephalographic signals.
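As one example of what can be derived from such recordings, heart-rate variability is commonly summarized by time-domain measures such as RMSSD, computed from the inter-beat (R-R) intervals. The sketch below shows this standard computation on hypothetical data; it is not the lab's actual analysis pipeline.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences, a common time-domain
    heart-rate-variability measure, from R-R intervals in milliseconds."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return np.sqrt(np.mean(diffs**2))

# Hypothetical inter-beat intervals (ms) extracted from an ECG channel
print(rmssd([812, 830, 795, 841, 808, 822]))
```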

There are two control systems for the flight simulator, both featuring generic helicopter controls: a cyclic stick, a collective, and pedals. One system is unactuated and behaves like a conventional joystick, while the other consists of motorized controls (Wittenstein AG, Germany). The actuated system can be configured to reproduce a wide range of control-system dynamics and can provide haptic feedback cues that support the pilot's situational awareness.
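Control-loaded devices like these typically render a virtual mass-spring-damper at the stick, so that the perceived dynamics, and any added haptic cue forces, are fully programmable. The sketch below illustrates the idea with a simple explicit-Euler loop; all parameters are illustrative assumptions, not the Wittenstein system's actual implementation.

```python
def stick_step(x, v, f_pilot, f_cue, dt, m=0.8, c=2.5, k=15.0):
    """Advance stick deflection x and rate v by one time step under
    virtual mass-spring-damper dynamics (m, c, k are assumed values).
    f_pilot: torque applied by the pilot; f_cue: haptic cue torque."""
    a = (f_pilot + f_cue - c * v - k * x) / m   # virtual dynamics
    v += a * dt
    x += v * dt
    return x, v

x = v = 0.0
for _ in range(100):                 # 0.1 s of a 1 kHz haptics loop
    x, v = stick_step(x, v, f_pilot=1.0, f_cue=0.0, dt=0.001)
print(x)                             # deflection under a constant pilot torque
```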

The schematic overview of the system (left) shows how the information received by the user (red) and the user's control and physiological responses (blue) form part of a closed-loop system. The photo of the simulator in use (right) shows a participant performing a closed-loop control task (a landing approach) while gaze is measured in real time (inset).

Control Loading Lab

In the Control Loading Lab we run experiments to understand human behavior in manual control tasks and to investigate novel approaches to human-machine interfaces. For this purpose we use a fixed-base simulator with a control-loaded sidestick, cyclic, collective, and pedals from Wittenstein AG, Germany. These devices can reproduce control dynamics with high accuracy over a large frequency range and can provide haptic feedback cues to the participant. The input devices are combined with a VIEWPixx display from VPixx Technologies, Canada, which can present stimuli at 120 Hz with accurate timing. This makes the lab an excellent environment for human-in-the-loop experiments.

Quarter-sphere Large Screen Projection (PanoLab)

Since 1997 we have employed a large half-cylindrical virtual-reality projection system to study human perception. Studies in a variety of areas have been carried out, including spatial cognition and the perceptual control of action. In 2005 we made a number of fundamental improvements to the system. Perhaps the most noticeable change concerns the screen size and geometry: the screen was extended horizontally (from 180 to 230 degrees), and a floor screen and projector were added. Notably, the projection screen curves smoothly from the wall projection into the floor projection, resulting in an overall geometry best described as a quarter-sphere. Vertically, the screen subtends 125 degrees (25 degrees of visual angle upwards and 100 degrees downwards from the normal observation position).

In 2011 the image-generation and projection setup was significantly updated. The four existing JVC SX21 D-ILA projectors (1400 × 1050) and curved mirrors were replaced with six EYEVIS LED DLP projectors (1920 × 1200), simplifying the projection setup and increasing the overall resolution. To compensate for the visual distortions caused by the curved projection screen, and to achieve soft-edge blending for seamless overlap regions, we developed a flexible warping solution using the warp and blend features of recent NVIDIA Quadro chipsets. This combines the performance of hardware-based warping with the accuracy of a software-based solution. The calibration data required for the warping and blending stages are generated by a camera-based projector auto-calibration system (DOMEPROJECTION.COM). Image generation is handled by a high-end render cluster consisting of six client image-generation PCs and one master PC. To avoid tearing artifacts in the multi-projector setup, the rendering computers use frame-synchronized graphics cards to synchronize the projected images.
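Soft-edge blending itself is conceptually simple: pixels in the overlap between two projectors are attenuated with complementary ramps so that the summed brightness stays constant, with a gamma correction because projector output is nonlinear in pixel value. The sketch below illustrates the idea; the overlap width and gamma are illustrative assumptions, not the PanoLab's calibration values.

```python
import numpy as np

def blend_ramp(width_px, overlap_px, gamma=2.2, right_edge=True):
    """Per-column blend weights for one projector image: a linear fall-off
    across the overlap region, compensated for an assumed projector gamma."""
    w = np.ones(width_px)
    ramp = np.linspace(1.0, 0.0, overlap_px)     # linear fall-off
    if right_edge:
        w[-overlap_px:] = ramp                   # fade out towards the right
    else:
        w[:overlap_px] = ramp[::-1]              # fade in from the left
    return w ** (1.0 / gamma)                    # gamma compensation

weights = blend_ramp(1920, 200)                  # assumed 200 px overlap
print(weights[1820])                             # weight mid-way into overlap
```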

In addition to improving the visual aspects of the system, we increased the quality, number, and variety of input devices. Participants can, for example, interact with the virtual environment via actuated Wittenstein helicopter controls, joysticks, a space mouse, steering wheels, a go-kart, or a virtual bicycle (VRBike). Furthermore, a Razer Hydra 6-DOF controller can be used for wand navigation and small-volume tracking. Some of the input devices offer force feedback: with the VRBike, for example, one can actively pedal and steer through the virtual environment, and the virtual inertia and incline are reflected in the pedals' resistance.
http://www.cyberneum.de/research-facilities/panolab.html

 

Back-Projection Large Screen Display (BackproLab)

The quarter-sphere projection setup is complemented by a back-projection setup, which has the advantage that participants do not cast shadows on the screen. This setup consists of a single SXGA+ projector (Christie Mirage S+3K DLP) and a large flat screen (2.2 m wide by 2 m high). The projector has a high contrast ratio of 1500:1 and can be used for mono or active stereo projection. The space previously used four Vicon V-series cameras for motion tracking; these were replaced in 2014 with a SMARTTRACK system from Advanced Realtime Tracking (ART), which can track up to four rigid-body targets and directly outputs the calculated positions and orientations as a UDP network stream.
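ART's DTrack software streams its measurements as ASCII datagrams over UDP, so a client only needs a UDP socket and some line parsing. The sketch below shows the general shape of such a receiver; the port and the exact record handling are assumptions to be checked against the DTrack output documentation.

```python
import socket

# Minimal sketch of a receiver for the SMARTTRACK/DTrack ASCII output.
# The port is an assumed value configured in the DTrack output settings.
PORT = 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))
while True:                                   # stop with Ctrl-C
    data, _ = sock.recvfrom(8192)
    for line in data.decode("ascii", errors="replace").splitlines():
        if line.startswith("fr "):            # frame counter line
            print("frame", line.split()[1])
        elif line.startswith("6d "):          # 6-DOF rigid-body records
            print(line)
```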


For stereo projection, NVIDIA 3D Vision Pro active shutter glasses are used. These glasses are synchronized via radio frequency and can therefore be used together with the infrared-based optical tracking system without interference. They have been fitted with markers for the optical tracking system and can thus also be used for head tracking.

Large Tracking Hall (TrackingLab)

TrackingLab. Photo: GEHIRN&GEIST/Manfred Zentsch
The free-space walking and tracking laboratory in the Cyberneum is a large (12.7 m × 11.9 m × 6.9 m) empty space equipped with 26 high-speed motion-capture cameras from Vicon. In December 2014 the existing setup of 16 Vicon MX13 cameras (1.2 megapixels) was expanded with 10 Vicon T160 cameras (16 megapixels) to double the tracking volume for aerial-robotics research and to improve head-tracking quality for immersive VR research. The tracking system captures the motions of one or more persons by processing, in real time, the images of configurations of multiple infrared-reflective markers. The calculated position and orientation information can be transmitted wirelessly to a high-end mobile graphics system carried in a backpack, which updates the simulated virtual environment according to the person's position and generates a correct egocentric visualization and/or auditory simulation. Participants can navigate freely within the entire tracking hall, either wearing the backpack themselves or having the experimenter wear it and follow them around. To suppress interference between the real and simulated environments as far as possible, the laboratory is completely dark (black, with the ability to block out all light), and acoustic panels on the walls largely reduce reverberation. The tracking setup also allows tracking of multiple objects, such as flying quadcopters, as well as full-body motion capture (e.g. for the analysis of sports performance, such as gymnastics, or for the animation of virtual characters).
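The final step from tracked head pose to egocentric rendering is a standard rigid-body inversion: the view matrix is the inverse of the head's pose in the tracking frame. The sketch below shows this computation in isolation, with a hypothetical pose sample; in the laboratory, the pose would come from the Vicon real-time stream described above.

```python
import numpy as np

def view_matrix(head_pos, R_head):
    """Egocentric view matrix from a tracked head pose: the inverse of the
    head's rigid-body transform, used to render the scene from the
    participant's viewpoint. R_head is a 3x3 rotation, head_pos a 3-vector."""
    V = np.eye(4)
    V[:3, :3] = R_head.T                 # inverse rotation
    V[:3, 3] = -R_head.T @ head_pos      # inverse translation
    return V

# Hypothetical tracker sample: head at 1.7 m height, no rotation
print(view_matrix(np.array([0.0, 1.7, 0.0]), np.eye(3)))
```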