VideoLab

The VideoLab has been in use since May 2002. It was designed for recording human activities from several viewpoints. It currently consists of 5 Basler CCD cameras that record precisely synchronized color videos, and the system is additionally equipped for synchronized audio recording. The whole system was built from off-the-shelf components and was designed for maximum flexibility and extensibility. The VideoLab has been used to create several databases of facial expressions and action units, and it was instrumental in several projects on the recognition of facial expressions and gestures across viewpoints, on multi-modal action learning and recognition, and in investigations of visual-auditory interactions in communication.
One of the major challenges in creating the VideoLab was the choice of hardware components allowing for on-line, synchronized recording of uncompressed video streams. Each of the Basler A302bc digital video cameras produces up to 26 MB per second (782 × 582 pixels at 60 frames per second) – data that is continuously written onto dedicated hard disks via CameraLink interfaces. Currently, the computers have a striped RAID-0 configuration with a total disk capacity of 200 GB each to maximize writing speed. The computers are connected and controlled via a standard Ethernet LAN; synchronization at the microsecond level is achieved using external hardware triggering.
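As a sanity check on the quoted data rate: at 782 × 582 pixels and 60 frames per second, a single camera delivers roughly 26 MB/s if one assumes raw single-byte Bayer output (an assumption about the capture format, not something stated above). This minimal C snippet reproduces the arithmetic:

/* Back-of-envelope check of the per-camera data rate quoted above.
 * Assumes raw Bayer output at 1 byte per pixel (an assumption about
 * the capture format, not a documented fact). */
#include <stdio.h>

int main(void) {
    const long width = 782, height = 582, fps = 60;
    const long bytes_per_pixel = 1;                 /* raw Bayer pattern */
    long bytes_per_sec = width * height * fps * bytes_per_pixel;

    printf("per camera: %.1f MB/s\n", bytes_per_sec / (1024.0 * 1024.0));
    /* -> ~26.0 MB/s, a sustained rate that a single 2002-era disk
     *    could not guarantee, hence the striped RAID-0 arrays. */
    return 0;
}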
For precise control of multi-camera recordings, we developed our own distributed recording software. In addition to the frame-grabber drivers, which provide basic recording functionality on a customized Linux operating system, we programmed a collection of distributed, multi-threaded real-time C programs that handle control of the hardware as well as buffering and write-out of the video and audio data to hard disks. All software components communicate with each other via a standard Ethernet LAN. On top of this low-level control software, we have implemented a graphical user interface that exposes the whole functionality of the VideoLab, using Matlab and the open-source Psychtoolbox-3 software as a framework.
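The buffering and write-out described above is a classic producer-consumer problem. The following is a minimal, self-contained C sketch of how such a recording node might decouple frame grabbing from disk writing with a ring buffer; the names, sizes, and the pthreads-based design are illustrative assumptions, not the institute's actual code:

/* Sketch of a per-camera recording node: a grabber thread fills a ring
 * of preallocated frame buffers while a writer thread streams them to
 * disk as one large sequential write (friendly to a RAID-0 array).
 * Illustrative only; the real VideoLab software is not public. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define FRAME_BYTES (782 * 582)   /* one raw Bayer frame, 1 byte/pixel */
#define RING_SLOTS  64            /* ~1 s of buffering at 60 fps */

static unsigned char ring[RING_SLOTS][FRAME_BYTES];
static int head = 0, tail = 0, count = 0, done = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

/* Grabber thread: copy each incoming frame into the next free slot. */
static void *grab(void *arg) {
    (void)arg;
    for (int i = 0; i < 600; i++) {               /* e.g. 10 s at 60 fps */
        pthread_mutex_lock(&lock);
        while (count == RING_SLOTS)               /* ring full: wait */
            pthread_cond_wait(&not_full, &lock);
        memset(ring[head], i & 0xff, FRAME_BYTES); /* stand-in for DMA copy */
        head = (head + 1) % RING_SLOTS;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    FILE *out = fopen("camera0.raw", "wb");
    if (!out) return 1;
    pthread_t t;
    pthread_create(&t, NULL, grab, NULL);

    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !done)
            pthread_cond_wait(&not_empty, &lock);
        if (count == 0 && done) { pthread_mutex_unlock(&lock); break; }
        int slot = tail;                 /* claim slot, keep it reserved */
        pthread_mutex_unlock(&lock);

        fwrite(ring[slot], 1, FRAME_BYTES, out);  /* write outside the lock */

        pthread_mutex_lock(&lock);
        tail = (tail + 1) % RING_SLOTS;  /* release slot only after write */
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
    }
    pthread_join(t, NULL);
    fclose(out);
    return 0;
}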
Image: an example VideoLab recording – a facial expression recorded from 6 synchronized viewpoints.

Interactive Facial Animation System

We have developed a novel facial-expression animation control system that realizes a real-time version of the animation pipeline developed at our institute. Its real-time computer-vision components, such as marker detection and 3D reconstruction, extend our VideoLab system by several features. The system produces no noticeable latency between a subject's execution of a dynamic expression and the perception of their own animated face, e.g. in a mirror setup. Expressions are currently encoded by means of 3D facial Action Units. This real-time visual feedback enables a variety of on-line dynamic manipulations: an intuitive application programming interface (API) based on the public Psychtoolbox-3 lets non-expert programmers implement experimental protocols for facial-movement analysis and graphical feedback. The API controls the gains of Action-Unit signals, allows the exchange and delay of Action-Unit components, and permits highly synchronized monitoring of multimodal physiological recordings such as facial EMG, EEG and sound.
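To make this kind of manipulation concrete, here is a minimal C sketch of per-Action-Unit gain and delay control as the paragraph describes it; the structure names, the Action-Unit count and the delay bound are assumptions, and the institute's actual Matlab-based API is not reproduced here:

/* Illustrative per-Action-Unit feedback manipulation: each AU activation
 * is scaled by a gain and delayed by a per-AU number of frames before it
 * drives the on-line facial animation. N_AUS and MAX_DELAY are assumed
 * values, not taken from the actual system. */
#include <string.h>

#define N_AUS     17    /* assumed number of tracked 3D Action Units */
#define MAX_DELAY 120   /* max delay in frames (2 s at an assumed 60 fps) */

typedef struct {
    float gain[N_AUS];              /* per-AU amplification factor */
    int   delay[N_AUS];             /* per-AU delay in frames (< MAX_DELAY) */
    float hist[N_AUS][MAX_DELAY];   /* ring buffers of past activations */
    int   t;                        /* current frame index */
} au_feedback;

/* Start with an identity mapping: unit gain, zero delay. */
void au_feedback_init(au_feedback *fb) {
    memset(fb, 0, sizeof *fb);
    for (int k = 0; k < N_AUS; k++) fb->gain[k] = 1.0f;
}

/* Push one frame of raw AU activations and compute the manipulated
 * frame that is rendered back to the subject. */
void au_feedback_step(au_feedback *fb, const float raw[N_AUS],
                      float out[N_AUS]) {
    int slot = fb->t % MAX_DELAY;
    for (int k = 0; k < N_AUS; k++) {
        fb->hist[k][slot] = raw[k];
        int src = (slot - fb->delay[k] + MAX_DELAY) % MAX_DELAY;
        out[k] = fb->gain[k] * fb->hist[k][src];
    }
    fb->t++;
}

Exchanging Action-Unit components, as mentioned above, would amount to reading out[k] from another unit's history buffer; the gain-and-delay case shown here is the simplest instance of such a manipulation.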

Last updated: Friday, 14.10.2016