Looking for Participants

The MPI for Biological Cybernetics is looking for participants for some of its research experiments.

Most recent Publications

Göksu C, Hanson LG, Siebner HR, Ehses P, Scheffler K and Thielscher A (May 2018) Human in-vivo brain magnetic resonance current density imaging (MRCDI). NeuroImage 171: 26–39.
Celicanin Z, Manasseh G, Petrusca L, Scheffler K, Auboiroux V, Crowe LA, Hyacinthe JN, Natsuaki Y, Santini F, Becker CD, Terraz S, Bieri O and Salomir R (May 2018) Hybrid ultrasound-MR guided HIFU treatment method with 3D motion compensation. Magnetic Resonance in Medicine 79(5): 2511–2523.
Schindler A and Bartels A (May 2018) Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain. NeuroImage 172: 597–607.
Pracht ED, Feiweier T, Ehses P, Brenner D, Roebroeck A, Weber B and Stöcker T (May 2018) SAR and scan-time optimized 3D whole-brain double inversion recovery imaging at 7T. Magnetic Resonance in Medicine 79(5): 2620–2628.
Dobs K, Schultz J, Bülthoff I and Gardner JL (May 2018) Task-dependent enhancement of facial expression and identity representations in human cortex. NeuroImage 172: 689–702.


Body Motion Capture and Animation Technology

For full-body motion capture we use two different setups. The first consists of a lightweight Lycra suit with attached reflective markers, which are captured and processed with Vicon iQ or Blade software; after capture, the data can be post-processed as needed. The second setup is used for real-time motion capture and animation. For this we use two Xsens MVN suits, each consisting of 17 MTx inertial measurement units. The MVN system software was upgraded in 2014 to MVN Biomech, which allows synchronized 60 fps video recordings with an external camera. It also offers integrated visualization of joint angles and accelerations, as well as direct export into game engines such as Unity. Custom-built plugins enable the use of these suits for real-time animation (e.g. of virtual avatars).

For post-processing and animation of body parts or the full body we use Autodesk 3ds Max, Autodesk Maya and Autodesk MotionBuilder. The avatars generally used for experiments involving motion capture and animation are part of the Rocketbox Studios GmbH Complete Characters library. Through the work of the independent research group Space and Body Perception (Betty Mohler), in collaboration with the Max Planck Institute for Intelligent Systems (Michael Black), we can now create personalized self-avatars for our experiments. We currently do so for patient populations in which we investigate distortions in body image.
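A typical post-processing step for optical marker data is gap-filling: frames in which a reflective marker was occluded must be reconstructed before the trajectory can be used for animation. The following is a minimal sketch in Python/NumPy of linear gap-filling; the function name, and the convention that occluded frames are stored as NaN, are illustrative assumptions, not part of the Vicon iQ or Blade pipeline:

```python
import numpy as np

def fill_marker_gaps(trajectory):
    """Linearly interpolate NaN gaps in a (frames, 3) marker trajectory.

    Assumes occluded frames are marked as NaN (hypothetical data layout;
    real capture software offers its own, more sophisticated gap-filling).
    """
    filled = trajectory.copy()
    frames = np.arange(len(filled))
    for axis in range(filled.shape[1]):
        col = filled[:, axis]
        missing = np.isnan(col)
        if missing.any() and not missing.all():
            # Interpolate each coordinate axis from the visible frames.
            col[missing] = np.interp(frames[missing],
                                     frames[~missing],
                                     col[~missing])
    return filled

# Example: a marker occluded for two frames during straight-line motion.
traj = np.array([[0.0, 0.0, 0.0],
                 [np.nan, np.nan, np.nan],
                 [np.nan, np.nan, np.nan],
                 [3.0, 3.0, 3.0]])
print(fill_marker_gaps(traj))
```

Note that `np.interp` clamps to the edge values outside the known range, so gaps at the very start or end of a take are held at the nearest visible position rather than extrapolated.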

Last updated: Friday, 14.10.2016