Björn Browatzki

Alumnus of the Department Human Perception, Cognition and Action
Alumnus of the Group Cognition and Control in Human-Machine Systems

Main Focus

I'm working on multisensory object recognition approaches for robotics. In a joint project between the Fraunhofer Society and the Max Planck Society, funded by the Centre for Integrative Neuroscience (CIN), we combine 2D and 3D visual information to classify unknown objects. 3D shape data from a time-of-flight range sensor is combined with 2D appearance information from color cameras.

Since humans rely on many cues beyond visual information to represent objects, we want to investigate how various modalities can be integrated into multimodal object representations. In a collaboration with the Italian Institute of Technology in Genoa, we are implementing recognition algorithms on a humanoid robot using vision, haptics, and proprioception.

CIN 2D+3D object classification dataset

This dataset contains segmented color and depth images of objects from 18 categories of common household and office items. Each object was recorded using a high-resolution color camera and a time-of-flight range sensor. Objects were rotated on a turntable and a snapshot was taken every 20 degrees, yielding 18 views per object.

Download size: 396 MB

Loading

Color and 3D data are each stored as a 3-channel PNG image. To load a view with C++ and OpenCV, use the following lines:

#include <opencv2/opencv.hpp>
#include <vector>

// Flag -1 loads the images unchanged, preserving the 16-bit XYZ data.
cv::Mat colorImage = cv::imread(FILENAME_COLOR, -1);
cv::Mat xyzImage_16U3 = cv::imread(FILENAME_XYZ, -1);
cv::Mat xyzImage;
xyzImage_16U3.convertTo(xyzImage, CV_32FC3);
// x and y are stored with a +1000 offset (presumably so that negative
// coordinates fit the unsigned 16-bit format); remove it.
std::vector<cv::Mat> channels;
cv::split(xyzImage, channels);
channels[0] -= 1000.0;
channels[1] -= 1000.0;
cv::merge(channels, xyzImage);
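
After these steps, each pixel of xyzImage holds a 3D point as a cv::Vec3f. For example, to read the coordinates at a given pixel (the row/column values here are arbitrary):

// Access the 3D point at image position (row, col).
int row = 120, col = 160;
cv::Vec3f point = xyzImage.at<cv::Vec3f>(row, col);
float x = point[0], y = point[1], z = point[2];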

Multi-sensory object perception for robotics

Introduction

Humans rely on many cues beyond visual information to represent objects and interact with the environment. From very early on, infants probe and explore the world using all available senses. Computational approaches to object recognition, however, often neglect these additional sources of information.

Goals

In collaboration with European robotics research institutes, we investigate how various modalities can be integrated into multi-sensory, computational object perception. We are working on approaches that combine visual, proprioceptive, and haptic information to form multi-modal object representations that capture the wide range of cues available to us.

Methods

In joint work with the Fraunhofer IPA in Stuttgart, we combine two-dimensional image data with three-dimensional information retrieved from range sensing devices. Both channels are fused in order to recognize and classify unknown three-dimensional objects in real-world scenarios.
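
To make this concrete, the following is a minimal sketch of such a 2D+3D fusion, not our actual pipeline: it concatenates a hue histogram computed from the color image with a depth histogram computed from the range data into one joint descriptor. The function name, bin counts, and value ranges are illustrative assumptions; the resulting row vectors could be fed to any standard classifier, such as an SVM.

#include <opencv2/opencv.hpp>
#include <vector>

// Illustrative joint 2D+3D descriptor: a hue histogram from the color
// image concatenated with a depth histogram from the range data.
cv::Mat computeDescriptor(const cv::Mat& colorImage, const cv::Mat& xyzImage)
{
    // 2D appearance cue: histogram over the hue channel
    // (assumes an 8-bit BGR color image).
    cv::Mat hsv;
    cv::cvtColor(colorImage, hsv, cv::COLOR_BGR2HSV);
    int hueBins[] = { 30 };
    float hueRange[] = { 0.f, 180.f };
    const float* hueRanges[] = { hueRange };
    int hueChannel[] = { 0 };
    cv::Mat hueHist;
    cv::calcHist(&hsv, 1, hueChannel, cv::Mat(), hueHist, 1, hueBins, hueRanges);

    // 3D shape cue: histogram over the z coordinate
    // (0-2000 is an assumed working range of the sensor).
    std::vector<cv::Mat> xyz;
    cv::split(xyzImage, xyz);
    int zBins[] = { 30 };
    float zRange[] = { 0.f, 2000.f };
    const float* zRanges[] = { zRange };
    int zChannel[] = { 0 };
    cv::Mat zHist;
    cv::calcHist(&xyz[2], 1, zChannel, cv::Mat(), zHist, 1, zBins, zRanges);

    // Fuse both cues into one normalized descriptor (one row per sample).
    cv::Mat descriptor;
    cv::vconcat(hueHist, zHist, descriptor);
    cv::normalize(descriptor, descriptor, 1.0, 0.0, cv::NORM_L1);
    return descriptor.reshape(1, 1);
}
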
In collaboration with the Italian Institute of Technology in Genoa, we are implementing active methods for perception-driven, in-hand object recognition on a humanoid robot. Besides visual input, these strategies take into account proprioceptive information obtained from sensors in the robot's body.
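
The core of such a perception-driven strategy can be sketched as follows. This is a simplified, hypothetical fragment, not the implementation running on the robot: it maintains a posterior over object classes and, instead of rotating the object at random, selects the in-hand rotation whose predicted observation reduces the entropy of the posterior the most. The likelihood table predictedLik stands in for a learned observation model.

#include <cmath>
#include <limits>
#include <vector>

// Shannon entropy of a discrete class posterior.
static double entropy(const std::vector<double>& p)
{
    double h = 0.0;
    for (double pi : p)
        if (pi > 0.0) h -= pi * std::log(pi);
    return h;
}

// Pick the exploratory action (e.g., an in-hand rotation) whose
// predicted observation minimizes the entropy of the updated class
// posterior. predictedLik[a][c] is the likelihood of class c under
// the observation expected after action a; in a real system this
// would come from a learned appearance/proprioception model.
int selectNextAction(const std::vector<double>& posterior,
                     const std::vector<std::vector<double> >& predictedLik)
{
    int best = 0;
    double bestH = std::numeric_limits<double>::max();
    for (size_t a = 0; a < predictedLik.size(); ++a) {
        // Bayesian update of the posterior with the predicted likelihood.
        std::vector<double> updated(posterior.size());
        double norm = 0.0;
        for (size_t c = 0; c < posterior.size(); ++c) {
            updated[c] = posterior[c] * predictedLik[a][c];
            norm += updated[c];
        }
        if (norm == 0.0) continue;  // skip degenerate predictions
        for (size_t c = 0; c < updated.size(); ++c) updated[c] /= norm;

        double h = entropy(updated);
        if (h < bestH) { bestH = h; best = static_cast<int>(a); }
    }
    return best;
}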

Robot platforms for implementation and evaluation. Left: Care-O-bot®, Fraunhofer IPA, Stuttgart, Germany. Right: iCub, Italian Institute of Technology, Genoa, Italy.

Initial results

We have extensively studied the combination of two-dimensional, appearance-based object representations with three-dimensional shape information. The results clearly show that joining both modalities leads to a significant improvement in object classification performance [1].
Furthermore, in experiments with a humanoid robot, we demonstrated that a multi-sensory, perception-driven exploration strategy recognizes objects much faster, more accurately, and more robustly than random, vision-only examination [2].

Initial conclusion

Just as multi-sensory integration is essential for human perception, sensor fusion is a crucial prerequisite for developing the perceptual skills of cognitive robots. Further research in this direction is needed to enable robotic systems to operate in human-centered environments with the flexibility and dexterity that come naturally to us.

References
1. Browatzki, B., Fischer, J., Graf, B., Bülthoff, H.H., Wallraven, C. (2011). Going into depth: Evaluating 2D and 3D cues for object classification on a new, large-scale object dataset. 1st IEEE Workshop on Consumer Depth Cameras for Computer Vision.
2. Browatzki, B., Tikhanoff, V., Metta, G., Bülthoff, H.H., Wallraven, C. Active Object Recognition on a Humanoid Robot.

Curriculum Vitae

since 2009: PhD student at the Max Planck Institute for Biological Cybernetics
2008: Diploma thesis at the Max Planck Institute for Biological Cybernetics
2007: Internship at Siemens VDO, Regensburg
2004 - 2007: Software developer at Altasoft GmbH, Stuttgart
2002 - 2008: Studied Software Engineering at the University of Stuttgart; Diploma in Computer Science
2002: Abitur, Schickhardt-Gymnasium Herrenberg