Looking for Participants

The MPI for Biological Cybernetics is looking for participants for some of its research experiments.
 

Most recent Publications

Zaretskaya N, Fischl B, Reuter M, Renvall V and Polimeni JR (January-2018) Advantages of cortical surface reconstruction using submillimeter 7 T MEMPRAGE NeuroImage 165 11-26.
Mölbert SC, Klein L, Thaler A, Mohler BJ, Brozzo C, Martus P, Karnath H-O, Zipfel S and Giel KE (November-2017) Depictive and metric body size estimation in anorexia nervosa and bulimia nervosa: A systematic review and meta-analysis Clinical Psychology Review 57 21–31.
Avdievich N, Pfrommer A, Giapitzakis IA and Henning A (October-2017) Analytical modeling provides new insight into complex mutual coupling between surface loops at ultrahigh fields NMR in Biomedicine 30(10) 1-13.
Bailey DL, Pichler BJ, Gückel B, Antoch G, Barthel H, Bhujwalla ZM, Biskup S, Biswal S, Bitzer M, Boellaard R, Braren RF, Brendle C, Brindle K, Chiti A, la Fougère C, Gillies R, Goh V, Goyen M, Hacker M, Heukamp L, Knudsen GM, Krackhardt AL, Law I, Morris CJ, Nikolaou K, Nuyts J, Ordonez AA, Pantel K, Quick HH, Riklund K, Sabri O, Sattler B, Troost EGC, Zaiss M, Zender L and Beyer T (October-2017) Combined PET/MRI: Global Warming: Summary Report of the 6th International Workshop on PET/MRI, March 27–29, 2017, Tübingen, Germany Molecular Imaging and Biology Epub ahead of print.
Chadzynski GL, Bause J, Shajan G, Pohmann R, Scheffler K and Ehses P (October-2017) Fast and efficient free induction decay MR spectroscopic imaging of the human brain at 9.4 Tesla Magnetic Resonance in Medicine 78(4) 1281–1295.

 

VideoLab

The VideoLab has been in use since May 2002. It was designed for recording human activities from several viewpoints. It currently consists of 5 Basler CCD cameras that record precisely synchronized color videos; in addition, the system is equipped for synchronized audio recording. The whole system was built from off-the-shelf components and was designed for maximum flexibility and extendibility. The VideoLab has been used to create several databases of facial expressions and action units, and it was instrumental in several projects on recognition of facial expressions and gestures across viewpoints, on multi-modal action learning and recognition, and in investigations of visual-auditory interactions in communication.
One of the major challenges in creating the VideoLab was the choice of hardware components that allow on-line, synchronized recording of uncompressed video streams. Each of the 5 Basler A302bc digital video cameras produces up to 26 MB per second (782 × 582 pixels at 60 frames per second), data that is continuously written to dedicated hard disks via CameraLink interfaces. Currently, the computers use a striped RAID-0 configuration with a total disk capacity of 200 GB each to maximize writing speed. The computers are connected and controlled via a standard Ethernet LAN; synchronization at the microsecond level is achieved through external hardware triggering.
For precise control of multi-camera recordings, we developed our own distributed recording software. In addition to the frame-grabber drivers, which provide basic recording functionality on a customized Linux operating system, we programmed a collection of distributed, multi-threaded real-time C programs that handle control of the hardware as well as buffering and write-out of the video and audio data to hard disks. All software components communicate with each other via the standard Ethernet LAN. On top of this low-level control software, we have implemented a graphical user interface that exposes the whole functionality of the VideoLab, using Matlab and the open-source Psychtoolbox-3 software as a framework.
This image shows an example recording from the VideoLab (a facial expression recorded from 6 synchronized viewpoints).

Interactive Facial Animation System

We have developed a novel facial-expression animation control system that implements a real-time version of the animation pipeline developed at our institute. Its real-time computer vision components, such as marker detection and 3D reconstruction, extend our VideoLab system by several features. The system produces no noticeable latency between a subject's execution of a dynamic expression and the perception of the resulting animation, e.g. in a mirror setup. Expressions are currently encoded by means of 3D Facial Action Units. This real-time visual feedback enables a variety of on-line dynamic manipulations. For non-expert programmers, an intuitive application programming interface (API) based on the public 'Visual Psychophysics Toolbox Version 3' supports the implementation of experimental protocols for facial-movement analysis and graphical feedback: it controls the gains of Action-Unit signals, allows the exchange and delay of Action-Unit components, and permits highly synchronized monitoring of multimodal physiological recordings such as facial EMG, EEG, and sound.

Last updated: Friday, 14.10.2016