Looking for Participants

The MPI for Biological Cybernetics is looking for participants for some of its research experiments.
 

Most recent Publications

Guest JM, Seetharama MM, Wendel ES, Strick PL and Oberlaender M (January 2018) 3D reconstruction and standardization of the rat facial nucleus for precise mapping of vibrissal motor networks. Neuroscience 368: 171-186.
Zaretskaya N, Fischl B, Reuter M, Renvall V and Polimeni JR (January 2018) Advantages of cortical surface reconstruction using submillimeter 7 T MEMPRAGE. NeuroImage 165: 11-26.
Ardhapure AV, Sanghvi YS, Borozdina Y, Kapdi AR and Schulzke C (January 2018) Crystal structure of 8-(4-methylphenyl)-2′-deoxyadenosine hemihydrate. Acta Crystallographica Section E: Crystallographic Communications 74(1): 1-5.
Meilinger T, Garsoffky B and Schwan S (December 2017) A catch-up illusion arising from a distance-dependent perception bias in judging relative movement. Scientific Reports 7(17037): 1-9.
Venrooij J, Mulder M, Mulder M, Abbink DA, van Paassen MM, van der Helm FCT and Bülthoff HH (December 2017) Admittance-Adaptive Model-Based Approach to Mitigate Biodynamic Feedthrough. IEEE Transactions on Cybernetics 47(12): 4169-4181.

VideoLab

The VideoLab has been in use since May 2002. It was designed for recording human activities from several viewpoints. It currently consists of 5 Basler CCD cameras that can record precisely synchronized color videos, and the system is additionally equipped for synchronized audio recording. The whole system was built from off-the-shelf components and was designed for maximum flexibility and extensibility. The VideoLab has been used to create several databases of facial expressions and action units, and was instrumental in several projects on recognition of facial expressions and gestures across viewpoints, on multi-modal action learning and recognition, and in investigations of visual-auditory interactions in communication.
One of the major challenges in creating the VideoLab was the choice of hardware components allowing for on-line, synchronized recording of uncompressed video streams. Each of the 5 Basler A302bc digital video cameras produces up to 26 MB per second (782 × 582 pixels at 60 frames per second) – data that is continuously written onto dedicated hard disks using CameraLink interfaces. Currently, the computers have a striped RAID-0 configuration to maximize writing speed. The computers are connected and controlled via a standard Ethernet LAN; synchronization at the microsecond level is achieved using external hardware triggering.
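The quoted per-camera data rate can be verified with a quick back-of-the-envelope calculation. Note that the byte depth is our assumption (one byte per pixel of raw sensor output), not stated above:

```python
# Per-camera data rate of a 782 x 582 sensor at 60 frames per second.
# We assume 1 byte per pixel (raw single-channel sensor output); the
# byte depth is not stated in the text above.
width, height, fps = 782, 582, 60
bytes_per_pixel = 1

rate = width * height * fps * bytes_per_pixel   # bytes per second
print(f"{rate / 2**20:.1f} MiB/s per camera")   # ~26.0 MiB/s
print(f"{5 * rate / 2**20:.0f} MiB/s for all five cameras")
```

This matches the 26 MB/s figure in the text and explains why a striped RAID-0 array per computer is needed to sustain the write-out.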
For precise control of multi-camera recordings, we developed our own distributed recording software. In addition to the frame grabber drivers, which provide basic recording functionality on a customized Linux operating system, we programmed a collection of distributed, multi-threaded real-time C programs that handle control of the hardware, as well as buffering and write-out of the video and audio data to hard disks. All software components communicate with each other via standard Ethernet LAN. On top of this low-level control software, we have implemented a graphical user interface that gives access to the whole functionality of the VideoLab, using Matlab and the open-source PsychToolBox-3 software as a framework.
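The production system consists of distributed, multi-threaded C programs, but the core buffering-and-write-out pattern described above can be sketched in a few lines of Python. All names and sizes here are illustrative only:

```python
import queue
import threading

# Minimal sketch of the buffer-and-write-out pattern: a capture thread
# pushes frames into a bounded queue while a writer thread streams them
# to disk. The real VideoLab software is distributed, real-time C; the
# frame size assumes 1 byte per pixel, which is our assumption.

FRAME_SIZE = 782 * 582                 # one raw frame in bytes (assumed)
frame_queue = queue.Queue(maxsize=64)  # bounded buffer absorbs disk-latency spikes

def capture(n_frames):
    for _ in range(n_frames):
        frame = bytes(FRAME_SIZE)      # stand-in for a grabbed camera frame
        frame_queue.put(frame)         # blocks if the writer falls behind
    frame_queue.put(None)              # sentinel: end of recording

def write_out(path):
    with open(path, "wb") as f:
        while True:
            frame = frame_queue.get()
            if frame is None:
                break
            f.write(frame)

t_cap = threading.Thread(target=capture, args=(10,))
t_wr = threading.Thread(target=write_out, args=("recording.raw",))
t_cap.start(); t_wr.start()
t_cap.join(); t_wr.join()
```

The bounded queue decouples the fixed-rate camera from the variable-latency disk, which is the same reason the real system dedicates striped disks to the video stream.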
This image shows an example recording from the VideoLab (a facial expression recorded from 6 synchronized viewpoints).

Interactive Facial Animation System

We have developed a novel facial-expression animation control system that implements a real-time version of the animation pipeline developed at our institute. Its real-time computer-vision components, such as marker detection and 3D reconstruction, extend our VideoLab system with several new features. The system produces no noticeable latency between a subject's execution of a dynamic expression and the perception of their own animated face, e.g. in a mirror setup. Expressions are currently encoded by means of 3D Facial Action Units. This real-time visual feedback enables a variety of on-line dynamic manipulations. For non-expert programmers, an intuitive application programming interface (API) based on the public 'Visual Psychophysics Toolbox Version 3' supports the implementation of experimental protocols for facial-movement analysis and graphical feedback: it allows control of the gains of Action-Unit signals, the exchange and delay of Action-Unit components, and highly synchronized monitoring of multimodal physiological recordings such as facial EMG, EEG, and sound.
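The two manipulations named above, scaling the gain of an Action-Unit signal and delaying an Action-Unit component, can be illustrated with a small sketch. All names here (make_delay, the AU streams) are ours for illustration and are not taken from the actual toolbox:

```python
from collections import deque

# Hypothetical sketch of on-line Action-Unit manipulation: amplify one
# AU signal by a gain factor and delay another by a fixed number of
# frames before the animation is rendered.

def make_delay(n_frames):
    """Fixed-length buffer: the output lags the input by n_frames samples."""
    buf = deque([0.0] * n_frames)
    def step(x):
        buf.append(x)
        return buf.popleft()
    return step

# Example: amplify AU12 (lip corner puller) by 1.5x, delay AU4 (brow
# lowerer) by 3 frames relative to the incoming marker-derived signal.
gain = 1.5
delay_au4 = make_delay(3)

au12_in = [0.0, 0.2, 0.4, 0.6]
au4_in  = [0.1, 0.2, 0.3, 0.4]

au12_out = [round(gain * x, 2) for x in au12_in]
au4_out  = [delay_au4(x) for x in au4_in]

print("AU12 (gain 1.5):", au12_out)      # [0.0, 0.3, 0.6, 0.9]
print("AU4 (3-frame delay):", au4_out)   # [0.0, 0.0, 0.0, 0.1]
```

In a real-time mirror setup, such per-frame transformations would run between marker-based AU extraction and rendering, which is why keeping the feedback latency below the subject's perception threshold matters.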

Last updated: Friday, 14.10.2016