Raman Butta

Alumni of the Department of Human Perception, Cognition and Action

Main Focus

I am a Guest Scientist doing research under the supervision of Dr. Aamir Ahmad. My work includes:

  • Mapping of Unknown Dynamic Environments
  • Cooperative Localization of UAVs using Moving Baseline Stereo
  • Visual SLAM
  • Visual Odometry

My GitHub username is galaxyeagle (https://github.com/galaxyeagle)

My websites: galaxyeagle.github.io, roboticscafe.wordpress.com


TOPIC: COOPERATIVE LOCALISATION USING MOVING BASELINE STEREO

Localization is a key element of autonomous robotics. Any autonomous robot that changes its state over time must be able to estimate that state as a function of time within a known map. A map is a representation of the set of all possible states under environmental constraints, and localization means identifying the robot's current state within that map. For a quadcopter, this means estimating the 6-DOF pose of the copter at any time in a map of the environment. A combined localization-and-mapping approach therefore becomes the method of choice for unknown and dynamic outdoor environments, where we start from a blank slate with no map available beforehand.
In a multi-robot scenario, cooperative localization aims to complement the perception abilities of the individual robots in order to achieve a global understanding of the world. One approach to such a localization scheme is to cooperatively infer the 3D positions of static environmental features through moving-baseline stereo vision involving more than one mobile or flying robot. Static-baseline stereo is a common approach in which several monocular cameras (typically two) are rigidly attached to the same base frame. Moving baseline refers to a situation where each monocular camera sits on a separate moving frame.
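To make the moving-baseline idea concrete, here is a minimal sketch (illustrative only, not project code) of triangulating static features seen by two independently moving monocular cameras. It assumes each camera's intrinsics K and its current world-to-camera pose (R, t) are known, for example from each robot's own pose estimate; all names are my own.

```python
import numpy as np
import cv2

def projection_matrix(K, R, t):
    """Build a 3x4 projection matrix P = K [R | t] for a camera whose pose is
    given as a world-to-camera rotation R (3x3) and translation t (3,)."""
    return K @ np.hstack((R, t.reshape(3, 1)))

def triangulate_moving_baseline(K1, R1, t1, K2, R2, t2, pts1, pts2):
    """Triangulate matched pixel observations (pts1, pts2: Nx2 arrays) of the
    same static features seen by two cameras mounted on two different moving
    robots. The 'baseline' is simply the current relative pose between the
    two cameras, so it changes over time."""
    P1 = projection_matrix(K1, R1, t1)
    P2 = projection_matrix(K2, R2, t2)
    # OpenCV expects 2xN point arrays and returns 4xN homogeneous points.
    X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
    return (X_h[:3] / X_h[3]).T   # Nx3 Euclidean points in the world frame
```

The quality of the triangulated points depends directly on how accurately the relative pose between the two cameras (the moving baseline) is known, which is exactly what cooperative localization has to estimate.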

During this internship, I used the Robot Operating System (ROS) to test various SLAM packages and configure them for the Telekyb framework of MPI-KYB. I initially used the hector_slam package, which uses a lidar and detects its own motion from successive scans.

This created 2D occupancy grid maps; however, it lacked the online localization feature we needed. It was further limited to 2D mapping and by the short 5 m range of the Hokuyo lidar used.
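For reference, a minimal rospy listener for the outputs of such a lidar-based SLAM node might look like the sketch below; the topic names (map, slam_out_pose) follow the usual hector_mapping defaults and may need remapping in a given setup.

```python
#!/usr/bin/env python
# Minimal listener for hector_mapping output: the 2D occupancy grid map and
# the scan-matched 2D pose. Topic names are the package defaults (assumed).
import rospy
from nav_msgs.msg import OccupancyGrid
from geometry_msgs.msg import PoseStamped

def on_map(grid):
    # Occupancy grid cells: -1 unknown, 0 free, 100 occupied.
    rospy.loginfo("map %dx%d @ %.2f m/cell",
                  grid.info.width, grid.info.height, grid.info.resolution)

def on_pose(pose):
    p = pose.pose.position
    rospy.loginfo("lidar pose: x=%.2f y=%.2f", p.x, p.y)

if __name__ == "__main__":
    rospy.init_node("hector_listener")
    rospy.Subscriber("map", OccupancyGrid, on_map)
    rospy.Subscriber("slam_out_pose", PoseStamped, on_pose)
    rospy.spin()
```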

So, next I tried using camera images to perform SLAM for our quadcopters.

I tested the tum_ardrone package very extensively. It subscribes to monocular images from a single camera and to Navdata, and generates online pose estimates and feature maps of the environment using the Parallel Tracking and Mapping (PTAM) approach developed by Klein and Murray.
The results were tested in simulation.
The real-time rqt_graph also shows the relationships between the different nodes and the topics they publish and subscribe to.
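To illustrate the wiring that rqt_graph reveals, here is a small sketch of a node that listens to the same inputs the tum_ardrone state estimator consumes; the topic and message names (/ardrone/image_raw and /ardrone/navdata from ardrone_autonomy) are the usual defaults and are assumptions here, not values taken from our configuration.

```python
#!/usr/bin/env python
# Probe node subscribing to the front-camera images and the Navdata stream
# that ardrone_autonomy publishes and that tum_ardrone consumes.
import rospy
from sensor_msgs.msg import Image
from ardrone_autonomy.msg import Navdata

def on_image(img):
    rospy.loginfo_throttle(1.0, "frame %dx%d", img.width, img.height)

def on_navdata(nav):
    rospy.loginfo_throttle(1.0, "altitude %d mm, battery %.0f%%",
                           nav.altd, nav.batteryPercent)

if __name__ == "__main__":
    rospy.init_node("ptam_input_probe")
    rospy.Subscriber("/ardrone/image_raw", Image, on_image)
    rospy.Subscriber("/ardrone/navdata", Navdata, on_navdata)
    rospy.spin()
```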
I also tested the visual odometry package libviso2, which has wrappers for both ROS and MATLAB.
I connected the Telekyb topics to the libviso2 output.
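As a hedged illustration of such a connection, the sketch below relays the odometry published by libviso2's ROS wrapper (viso2_ros) to a Telekyb-side topic. /mono_odometer/odometry is the wrapper's usual output topic when the node runs under its default name, and the /telekyb/vision_odometry topic name is purely hypothetical.

```python
#!/usr/bin/env python
# Relay the visual odometry estimated by viso2_ros into a topic that another
# framework (here a hypothetical Telekyb input topic) can consume.
import rospy
from nav_msgs.msg import Odometry

class OdomRelay(object):
    def __init__(self):
        # Hypothetical Telekyb-side topic name, for illustration only.
        self.pub = rospy.Publisher("/telekyb/vision_odometry",
                                   Odometry, queue_size=10)
        rospy.Subscriber("/mono_odometer/odometry", Odometry, self.relay)

    def relay(self, msg):
        # Forward the 6-DOF pose and twist estimate unchanged.
        self.pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("viso2_to_telekyb_relay")
    OdomRelay()
    rospy.spin()
```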
libviso2 has a number of advantages as well, some of which I found to be:
1. Complete localization, tracking and dense 3D reconstruction
2. No IMU data needed, only a camera is required :)
3. No depth data needed.
4. Works with both monocular and stereo systems
5. Generates pose, odometry and also a depth map (for the stereo configuration)

A similar package which gave excellent results is rtabmap_ros, a ROS wrapper of the RTAB-Map library, which supports RGB-D SLAM based on a global loop-closure detector with real-time constraints. The package can be used to generate a 3D point cloud of the environment and/or to create a 2D occupancy grid map for navigation. I ensured that it works properly with the Asus devices mounted on our quadcopters and created clear maps with them.

A possible challenge here is the proper configuration of the /odom frame. The advantage, however, lies in its ability to switch between localization and mapping modes, as well as to save and load maps according to the situation.
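As a sketch of one way to provide that /odom frame, the snippet below broadcasts an odom → base_link transform from an existing Odometry stream (for instance, the visual odometry above). The frame and topic names are common ROS conventions assumed for illustration, not values from our actual setup.

```python
#!/usr/bin/env python
# Broadcast the odom -> base_link transform that an RGB-D SLAM node such as
# rtabmap_ros typically expects, derived from an incoming Odometry stream.
import rospy
import tf
from nav_msgs.msg import Odometry

br = tf.TransformBroadcaster()

def on_odom(msg):
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    br.sendTransform((p.x, p.y, p.z), (q.x, q.y, q.z, q.w),
                     msg.header.stamp, "base_link", "odom")

if __name__ == "__main__":
    rospy.init_node("odom_tf_broadcaster")
    rospy.Subscriber("odom", Odometry, on_odom)
    rospy.spin()
```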

Thus, from the analysis of several ROS and PCL implementations, the following points can be drawn about visual odometry and visual SLAM:

1. Feature detection in successive frames of an image sequence taken by a monocular or stereo camera system.
2. Matching of the 2D features between two successive frames. For visual odometry we can also use a feature-tracking approach, in which corresponding features are tracked using least-squares minimization. This is also where a RANSAC-based outlier rejection scheme is applied.
3. We then need to find the transformation between frames F(k-1) and F(k) for all k.
4. This motion estimation can be 2D-to-2D, 3D-to-3D or 3D-to-2D (see the sketch after this list).
5. Finally, we may perform offline bundle adjustment if necessary.
6. For SLAM, loop closure is an indispensable step.
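Here is a minimal OpenCV sketch (illustrative only, not our project code) that walks through steps 1–4 for the monocular 2D-to-2D case; bundle adjustment and loop closure (steps 5 and 6) are omitted, and the recovered translation is only known up to scale.

```python
import numpy as np
import cv2

def relative_motion(img_prev, img_cur, K):
    """2D-to-2D motion estimation between two successive monocular frames:
    detect and match features, reject outliers with RANSAC via the essential
    matrix, then recover the relative rotation and (unit-scale) translation."""
    # Step 1: feature detection in both frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_cur, None)

    # Step 2: match the 2D features between the two frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Steps 3-4: RANSAC-based outlier rejection happens inside
    # findEssentialMat; recoverPose yields the frame-to-frame transformation.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # transformation between F(k-1) and F(k), up to scale
```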

Note: Further details about this project can be found on my GitHub page or my websites, which are listed at the top.

Curriculum Vitae

My CV can be found 
