Paolo Robuffo Giordano

Alumni of the Department Human Perception, Cognition and Action
Alumni of the Group Autonomous Robotics and Human-Machine Systems

Main Focus


  • 14/5/2012: together with (MPI), (Univ. Modena and Reggio Emilia) and (Univ. Siena), I am organizing the ICRA'12 workshop "" in St. Paul, USA. Everyone is welcome!
  • 23/4/2012-6/5/2012: Lecturer of the module "" within the course "" 2011/2012, Dipartimento di Ingegneria Informatica, Automatica e Gestionale, Sapienza University of Rome
  • 4/2012-7/2012: Lecturer of the course "", Institute for Systems Theory and Automatic Control, University of Stuttgart
  • 14/10/2011: Call for papers: International Journal of Robotics Research. Special issue: Autonomous Physical Human-Robot Interaction
  • 1/7/2011: RSS Workshop on ""
  • 04/2011-06/2011: Lecturer of the course "", Institute for Systems Theory and Automatic Control, University of Stuttgart
  • 28-29/10/2010: the  was successfully held at our Max Planck Institute. We had about 45 attendees, 17 scientific talks, and 6 keynote talks. The speakers came from 7 different European countries and shared their views on human-robot interaction with the audience
  • 13/5/2010: together with and , I'm organizing the ICRA'11 workshop "" in Shanghai, China. Everyone interested is welcome to attend. The list of speakers can be found .
  • 29/4/2010: the IEEE Spectrum Magazine published an article on our work on the motion control of the platform, a large planar locomotion device used to explore virtual worlds. Details can be found in this paper (a journal version is under submission).
  • 11/2/2010: submitted the final papers to on our robot-based Motion Simulator used to reproduce the motion of a Ferrari Formula 1 car. A video of a lap on virtual Monza is available.
  • 16/5/2009: Best Video Award. Together with my former colleagues at , I was awarded for this video. My contribution was the modeling and control of the 4-wheeled platform carrying Justin. Details can be found in my ICRA’09 paper “On the Kinematic Modeling and Control of a Mobile Platform Equipped with Steering Wheels and Movable Legs”.


I am leading the  within the Human Perception, Cognition and Action Department of the Max Planck Institute for Biological Cybernetics. See  for more general information about our research, and our  for a collection of videos.

Short summary

I studied Computer Science Engineering and then obtained a PhD in Systems Theory and Robotics. Broadly speaking, my interests have always been:

  • modeling from first principles (physics, mechanics, etc.) the behavior of dynamical systems, and studying their structural properties to get insights
  • implementing realistic software simulations of these systems
  • designing control laws that can achieve desired goals for the given models
  • testing and validating all the theoretical claims (both on the modeling and on the control design sides) in the real world

In most cases, the system under study is a physical system that must acquire information from the environment, process it, and then use it to autonomously perform a task. Such a machine is usually known as a robot, even though its design may be far from the popular picture of human-like beings trying to overthrow mankind. A car driving autonomously, an airplane with attitude stabilization, or a swarm of miniature vehicles exploring an unknown environment are all, to my mind, examples of robots. The key point here is autonomy: the ability to sense the world, to reason about it, and to autonomously act to achieve some goals.

A big challenge is to bring robots and humans together and have them effectively “cooperate”. From being confined to factories, robots should finally enter our everyday life. To realize this vision, the role of humans must be taken into account: robot design and control must be conceived to meet human needs and to facilitate the interaction. A big research effort should therefore be devoted to characterizing the “human-in-the-loop”: how humans act in a closed-loop fashion when interacting with a (semi-)autonomous machine to fulfill a common goal. This, in general, requires understanding (by no means an exhaustive list):

  • how humans process the sensory information
  • how humans generate motor commands
  • how humans adapt to new situations and learn new skills
  • what is the best physical and cognitive interface between humans and robots
  • what is the best way of presenting information in order to maximize humans’ situational awareness

In my research, I try to address these problems from an engineering point of view. In particular, I would like to use the tools of robotics and of systems and control theory to model the human-in-the-loop, and to exploit this knowledge in designing a new generation of autonomous systems that will effectively cooperate with humans.


The aim of my research (see also my ) is to study novel ways to interface humans with robots, i.e., autonomous machines that are able to sense the environment, reason about it, and take actions to perform some tasks. These efforts are guided by the accepted vision that, in the future, humans and robots will seamlessly cooperate in shared or remote spaces, thus becoming an integral part of our daily life. For instance, robots are expected to relieve us from monotonous and physically demanding work in industrial settings, or to help humans deal with complex or dangerous tasks, thus augmenting their capabilities. In all cases of human-robot interaction, it is interesting to study the best level of autonomy to expect from the robots, and the best sensory feedback the humans need in order to take an effective role in the interaction. For instance, in order to exploit their superior cognitive skills, humans should not be overloaded with the execution of many local and low-level tasks. Robots, on the other hand, should be exploited for their versatility, reliability, motion accuracy, specialization, and task execution speed.

This research group addresses these challenges from an engineering/computer science point of view: our focus is mainly on (i) how to empower robots with the autonomy needed to facilitate the interaction with the human side when accomplishing some shared task, and (ii) how to allow a human user to be effectively in control of one or more robots while performing a task. To this end, we mainly rely on the tools of robotics, systems and control theory, computer vision, and psychophysics.


In a first line of research, we considered the problem of realizing an ideal telepresence system for a human user. Such a system should reproduce the full multisensory flow of information that humans experience through their senses: vision, haptics, hearing, vestibular (self-motion) information, and even smell and taste. While the visual and haptic channels have traditionally been exploited in, e.g., many teleoperation settings, little or no attention has been paid to the use of the vestibular channel, i.e., the perception of one's own linear/angular motion. A central part of our research is devoted to the use of a robotic arm as a motion platform (the so-called CyberMotion simulator) in order to provide vestibular (self-motion) cues to a human pilot controlling the motion of a simulated or real vehicle.

To this end, we developed novel motion algorithms to exploit an anthropomorphic robot arm as a motion simulator. These were further extended to take into account the simulator's actuated cabin, which adds an extra degree of freedom to the robot's actuation system. The CyberMotion simulator was used both to reproduce the feeling of driving a simulated race car, and to allow a human user to guide a real quadrotor UAV while perceiving its visual/vestibular motion.
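To give a rough idea of the kind of motion-cueing logic involved, the classical one-axis washout filter (the standard baseline that such motion algorithms refine; the class name, gains, and time constants below are my own illustrative choices, not our actual tuning) splits the simulated vehicle's acceleration into a high-frequency part, rendered by briefly translating the platform, and a low-frequency part, rendered by tilting the cabin so that a component of gravity mimics the sustained acceleration:

```python
import math

class WashoutFilter1D:
    """One-axis classical washout sketch (illustrative parameters only)."""

    def __init__(self, dt, tau_hp=2.0, tau_lp=2.0, g=9.81, max_tilt=0.3):
        self.dt, self.tau_hp, self.tau_lp = dt, tau_hp, tau_lp
        self.g, self.max_tilt = g, max_tilt
        self.hp = 0.0       # high-passed acceleration -> platform translation
        self.lp = 0.0       # low-passed acceleration  -> tilt coordination
        self.prev_a = 0.0

    def step(self, a):
        """Feed one acceleration sample of the simulated vehicle [m/s^2]."""
        # first-order high-pass: keeps the transient "onset" cue and washes
        # out sustained acceleration, so the platform can return to rest
        alpha_hp = self.tau_hp / (self.tau_hp + self.dt)
        self.hp = alpha_hp * (self.hp + a - self.prev_a)
        self.prev_a = a
        # first-order low-pass: sustained part, rendered by tilting the cabin
        # so that the gravity component g*sin(tilt) mimics the acceleration
        alpha_lp = self.dt / (self.tau_lp + self.dt)
        self.lp += alpha_lp * (a - self.lp)
        s = max(-1.0, min(1.0, self.lp / self.g))
        tilt = max(-self.max_tilt, min(self.max_tilt, math.asin(s)))
        return self.hp, tilt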

As a second line of research, we thoroughly investigated the theoretical foundations for establishing a bilateral teleoperation channel between a single human operator (master side) and a group of multiple remote robots (slave side). Multi-robot systems possess several advantages over single robots, e.g., higher performance in simultaneously covering a spatial domain, better affordability compared to a single bulky system, and robustness against single-point failures. In our envisioned scenario, the multi-robot system should possess some level of local autonomy and act as a group, e.g., by maintaining a desired formation, avoiding obstacles, and performing additional local tasks. At the same time, the remote human operator should be in control of the overall robot motion and receive, through haptic feedback, cues informative enough of the remote robot/environment state. We addressed two distinct possibilities for human/multi-robot teleoperation: a top-down approach and a bottom-up approach, mainly differing in the way the local robot interactions and the desired formation shape are treated.

In the top-down approach, the robots in the group are abstracted as simple virtual points (VPs) in space teleoperated by the remote human user. The robots collectively move as a deformable flying object whose shape (chosen beforehand) autonomously deforms, rotates, and translates in reaction to obstacles (to avoid them) and to the operator's commands (to follow them). The operator receives haptic feedback informing him about the motion state of the robots and about the presence of obstacles. As a proof of concept, we ran experiments with 3 quadrotor UAVs using only relative bearings as a source of information.
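A heavily simplified sketch of this top-down idea (not our actual controller: here the shape stays rigid instead of deforming and rotating, Cartesian positions replace relative bearings, and all names and gains are illustrative) is a virtual formation center that integrates the operator's velocity command plus a classical potential-field repulsion from nearby obstacles, with the robots' virtual points following at fixed offsets:

```python
import math

def formation_step(centroid, offsets, cmd_vel, obstacles, dt,
                   k_rep=1.0, d_safe=1.0):
    """One step of a (much simplified) top-down formation controller.

    centroid:  [x, y] of the virtual formation center
    offsets:   per-robot offsets defining the formation shape (rigid here)
    cmd_vel:   operator velocity command for the whole group
    obstacles: list of (x, y) obstacle positions
    Returns the new centroid and the robots' virtual-point positions.
    """
    rx = ry = 0.0
    for ox, oy in obstacles:
        dx, dy = centroid[0] - ox, centroid[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d_safe:
            # repulsive term, active only within the safety distance
            gain = k_rep * (1.0 / d - 1.0 / d_safe) / d
            rx += gain * dx
            ry += gain * dy
    cx = centroid[0] + (cmd_vel[0] + rx) * dt
    cy = centroid[1] + (cmd_vel[1] + ry) * dt
    points = [(cx + ox, cy + oy) for ox, oy in offsets]
    return (cx, cy), points
```

With no obstacles the group simply tracks the operator's command; an obstacle inside `d_safe` opposes the commanded motion so the formation slides around it.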

In the bottom-up case, the remote human user teleoperates a single leader, while the motion of the remaining followers is determined by local interactions (modeled as spring/damper couplings) among themselves and with the leader, and by repulsive interactions with the obstacles. The overall formation shape is not chosen beforehand but results from the robot motion. Arbitrary split and rejoin decisions are allowed based on any criterion, e.g., the robots' relative distances and visibility. The operator receives haptic feedback informing him about the motion state of the leader, which is in turn influenced by the motion of its followers and their interaction with the obstacles. Further extensions of this line of research included the possibility of maintaining the group's connectivity in a decentralized way during motion, and experiments with 4 quadrotor UAVs together with an additional strategy for velocity synchronization among the robots. Finally, our group also carried out preliminary psychophysical evaluations aimed at assessing human perceptual awareness and maneuverability in the teleoperation of a group of mobile robots.
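The spring/damper coupling at the heart of the bottom-up scheme can be illustrated with a one-dimensional leader-follower pair (a toy sketch with illustrative stiffness, damping, and rest-length values, not our multi-robot implementation): the follower's acceleration is simply the virtual spring/damper force, and it ends up matching the leader's velocity at the spring's rest distance:

```python
def follower_accel(p_f, v_f, p_l, v_l, k=4.0, b=2.0, rest=1.0):
    """Acceleration of one follower coupled to the leader by a virtual
    spring (stiffness k, rest length rest) and damper (coefficient b)."""
    stretch = (p_l - p_f) - rest
    return k * stretch + b * (v_l - v_f)

def simulate(steps=4000, dt=0.005):
    """Leader moves at a constant 0.5 m/s; the follower should converge
    to the same velocity at the rest distance behind the leader."""
    p_l, v_l = 0.0, 0.5    # leader state (teleoperated in the real system)
    p_f, v_f = -2.0, 0.0   # follower starts 2 m behind, at rest
    for _ in range(steps):
        a = follower_accel(p_f, v_f, p_l, v_l)
        v_f += a * dt       # explicit Euler integration
        p_f += v_f * dt
        p_l += v_l * dt
    return p_l - p_f, v_f   # final gap and follower velocity
```

The same coupling, applied pairwise among all neighbors and combined with obstacle repulsion, yields the emergent (rather than prescribed) formation shape described above.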

Expected impact

The presented research efforts aim at significantly improving the quality and ease of human-robot interaction in the specific case of a single human user in control of multiple autonomous robots. This will have implications for all those tasks envisaged for robots in the near future, such as search and rescue operations, remote inspection of inaccessible sites, remote manipulation in dangerous conditions, and environmental monitoring. In a longer-term perspective, our results could be relevant for more futuristic applications such as teams of nano-robots inspecting the human body from the inside, releasing treatments, and performing local micro-surgery in situ.

Curriculum Vitae

I was born in Roma, Italy, in 1977.


    • After completing the scientific high school (Liceo) in 1995, I started a 5-year Computer Science Engineering course at the University of Roma “La Sapienza”

    • In 2001 I received the "Laurea" degree (MSc) in Computer Science Engineering from the University of Roma "La Sapienza" (final mark: 110/110 cum laude)

    • In 2004 I obtained a grant for a 3-year Ph.D. course in System Engineering at the University of Roma "La Sapienza" under the supervision of Prof.

    • In 2008 I received the PhD in System Engineering from the University of Roma "La Sapienza"

Professional experience

    • 02.2002 – 06.2002: I was hired by "La Sapienza" to work on the visual interception of moving objects for a wheeled mobile robot

    • 07.2002 – 09.2003: I worked for an Italian space company () on the modeling and control of , a launcher for satellites

    • 09.2003 – 08.2004: I moved to where I focused on the development and testing of real-time code for helicopters

    • 11.2007 – 09.2008: Post-Doc at the Institute of Robotics and Mechatronics of the German Aerospace Center () headed by Prof. Dr. Gerhard Hirzinger

    • 10.2008 – present: Project Leader of the  at the Max Planck Institute for Biological Cybernetics, Department of Human Perception, Cognition and Action (Prof. Dr. Heinrich H. Bülthoff)

Service to the scientific community

Reviewer for the following peer-reviewed journals and conferences:

    • IEEE Transactions on Robotics
    • The International Journal of Robotics Research
    • IEEE/ASME Transactions on Mechatronics
    • IEEE Transactions on Control Systems Technology
    • International Journal of Robotics and Automation
    • IEEE International Conference on Robotics and Automation
    • IEEE/RSJ International Conference on Intelligent Robots and Systems
    • IEEE Conference on Decision and Control


  • 2006 - research grant for foreign researchers working in Germany