
Project Leader

Jan Peters, Dr. (USC)
Phone: +49 7071 601-585
Fax: +49 7071 601-552
jan.peters[at]tuebingen.mpg.de

More about

More information and videos can be found at http://www.robot-learning.de.

Robot Learning

[Image: Robot and human playing table tennis]
Creating autonomous robots that can learn to assist humans in situations of daily life is a fascinating challenge for machine learning. While this aim has been a long-standing vision of artificial intelligence and the cognitive sciences, we have yet to achieve the first step: creating robots that can learn to accomplish many different tasks triggered by environmental context or higher-level instruction. The goal of our robot learning laboratory is to investigate the ingredients of such a general approach to motor skill learning and thereby move closer to human-like performance in robotics. We therefore focus on solving basic problems in robotics while developing domain-appropriate machine-learning methods.

Starting from theoretically well-founded control structures for task representation and execution, we replace the analytically derived modules with more flexible, learned ones.

An essential problem in robotics is the accurate execution of desired movements using only low-gain control, such that the robot accomplishes the desired task without harming human beings in its environment. Following a trajectory with little feedback requires accurate prediction of the needed torques, which classical methods cannot deliver for sufficiently complex robots. Learning such models is hard, however, as the joint space can never be fully explored and the learning algorithm has to cope with a never-ending data stream in real time. We have developed learning methods for accomplishing tasks represented both in operational space and in joint space.
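
As a rough illustration of what such online model learning involves, the sketch below fits an inverse dynamics model, mapping joint positions, velocities, and accelerations to torques, with recursive least squares on random Fourier features. This is only a minimal stand-in for the local Gaussian process regression of publication 4 below; the dimensions, the toy dynamics, and all constants are illustrative assumptions.

import numpy as np

# Sketch: online inverse-dynamics learning. Predict joint torques tau from
# the state (q, qd, qdd) with recursive least squares (RLS) on random
# Fourier features. A stand-in for local Gaussian process regression;
# dimensions, the toy "true" dynamics, and all constants are illustrative.

rng = np.random.default_rng(0)
n_joints, n_feat = 2, 200
W = rng.normal(size=(n_feat, 3 * n_joints))      # random projection
b = rng.uniform(0.0, 2.0 * np.pi, n_feat)        # random phases

def features(q, qd, qdd):
    return np.cos(W @ np.concatenate([q, qd, qdd]) + b)

theta = np.zeros((n_feat, n_joints))             # feature-to-torque weights
P = 10.0 * np.eye(n_feat)                        # RLS covariance matrix

def rls_update(phi, tau):
    """One recursive-least-squares step on a single (features, torque) pair."""
    global theta, P
    k = P @ phi / (1.0 + phi @ P @ phi)          # gain vector
    theta += np.outer(k, tau - phi @ theta)      # correct the prediction error
    P -= np.outer(k, phi @ P)                    # shrink the uncertainty

# Never-ending data stream, here simulated by a linear toy dynamics model.
true_map = rng.normal(size=(3 * n_joints, n_joints))
for _ in range(2000):
    q, qd, qdd = (rng.normal(size=n_joints) for _ in range(3))
    tau = np.concatenate([q, qd, qdd]) @ true_map
    rls_update(features(q, qd, qdd), tau)

# The learned model supplies feedforward torques, so feedback gains can
# stay low while the desired trajectory is still tracked accurately.
q_des, qd_des, qdd_des = (np.ones(n_joints) for _ in range(3))
print("feedforward torque:", features(q_des, qd_des, qdd_des) @ theta)
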
[Image: Ball in a cup]
While learning to execute tasks is an essential component of a motor skill learning framework, learning the task itself is of even higher importance. Here, we focus on learning elementary tasks, or movement primitives: parameterized task representations based on nonlinear differential equations with desired attractor properties. Mimicking how children acquire new motor skills, we use imitation learning to initialize these movement primitives and reinforcement learning to subsequently improve task performance. We have learned tasks such as Ball-in-a-Cup and bouncing a ball on a string with this approach.
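
As a concrete illustration, the following single-degree-of-freedom sketch implements such a primitive in the spirit of the dynamical-systems formulation: a critically damped spring-damper system pulled towards a goal, with a nonlinear forcing term fit by imitation from one demonstration. The gains, basis functions, and demonstrated trajectory are illustrative assumptions, not the group's exact formulation.

import numpy as np

# Sketch: single-DOF dynamic movement primitive. A spring-damper system is
# pulled towards the goal g; a learned forcing term f shapes the transient.
# All constants and the demonstration below are illustrative.

alpha, beta, alpha_x = 25.0, 6.25, 3.0           # transformation/canonical gains
n_basis, dt = 20, 0.01

c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centers in phase
h = 1.0 / np.diff(c) ** 2                          # basis widths
h = np.append(h, h[-1])

def psi(x):
    return np.exp(-h * (x - c) ** 2)

# One demonstrated trajectory: a smooth minimum-jerk-like reach from 0 to 1.
T = np.arange(0.0, 1.0, dt)
y_d = 10 * T**3 - 15 * T**4 + 6 * T**5
yd_d = np.gradient(y_d, dt)
ydd_d = np.gradient(yd_d, dt)
y0, g = y_d[0], y_d[-1]

# Imitation learning: compute the forcing term the demonstration implies,
# then fit the basis weights w by locally weighted regression.
x = np.exp(-alpha_x * T)                           # canonical phase variable
f_target = ydd_d - alpha * (beta * (g - y_d) - yd_d)
Psi = np.exp(-h * (x[:, None] - c) ** 2)           # (time, basis) activations
s = x * (g - y0)                                   # standard forcing scaling
w = (Psi * (s * f_target)[:, None]).sum(0) / ((Psi * (s**2)[:, None]).sum(0) + 1e-10)

# Rollout: integrate canonical and transformation systems with Euler steps.
y, yd, xs = y0, 0.0, 1.0
for _ in T:
    f = (psi(xs) @ w) / (psi(xs).sum() + 1e-10) * xs * (g - y0)
    ydd = alpha * (beta * (g - y) - yd) + f
    yd += ydd * dt
    y += yd * dt
    xs += -alpha_x * xs * dt

print("reached:", round(y, 3), "goal:", round(g, 3))
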

Efficient reinforcement learning for continuous states and actions is essential for robotics and control. We follow two approaches, depending on the dimensionality of the domain. For high-dimensional state and action spaces, it is often easier to learn policies directly without estimating accurate system models. The resulting algorithms are parametric policy search methods inspired by expectation-maximization and can be employed for motor primitive learning. For lower-dimensional systems, Bayesian approaches to control can be shown to cope with the optimization bias introduced by model errors in model-based reinforcement learning. As a result, these methods can learn good policies rapidly from only little interaction with the system.
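
To make the first approach concrete, the toy sketch below performs a reward-weighted, EM-style parameter update: policy parameters are perturbed with Gaussian exploration noise, and the perturbations are averaged with weights given by their episodic returns, so no gradient estimate or learning rate is needed. This is in the spirit of the policy search of publication 8 below; the reward function and all constants are invented for illustration.

import numpy as np

# Sketch: EM-inspired, reward-weighted policy search on a toy episodic task.
# Parameters are perturbed with Gaussian noise; the update is the return-
# weighted average of the perturbations (no gradient, no learning rate).
# The reward function and all constants are illustrative.

rng = np.random.default_rng(1)
theta = np.zeros(5)                      # policy parameters, e.g. MP weights
theta_star = rng.normal(size=5)          # unknown optimum of the toy task
sigma, n_rollouts = 0.3, 20              # exploration noise, rollouts/update

def episodic_return(params):
    """Toy return: largest when the parameters match theta_star."""
    return np.exp(-np.sum((params - theta_star) ** 2))

for _ in range(200):
    eps = rng.normal(scale=sigma, size=(n_rollouts, theta.size))
    R = np.array([episodic_return(theta + e) for e in eps])
    theta += (R[:, None] * eps).sum(axis=0) / (R.sum() + 1e-10)

print("return after learning:", episodic_return(theta))  # approaches 1.0
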

Currently, we are moving towards learning complex tasks, which requires solving a variety of hard problems. Among these are the decomposition of large tasks into movement primitives (MPs), the acquisition and self-improvement of MPs, the determination of the number of MPs in a data set, the determination of the relevant task space, perceptual context estimation and goal learning for MPs, and the composition of MPs into new complex tasks. We tackle these questions in order to make progress towards fast and general motor skill learning for robotics.

Publications
1. Peters, J., M. Mistry, F. E. Udwadia, J. Nakanishi and S. Schaal: A unifying methodology for robot control with redundant DOFs. Autonomous Robots 24(1), 1-12, 2008.
2. Nakanishi, J., R. Cory, M. Mistry, J. Peters and S. Schaal: Operational Space Control: A Theoretical and Empirical Comparison. International Journal of Robotics Research 27(6), 737-757, 2008.
3. Peters, J. and S. Schaal: Learning to Control in Operational Space. International Journal of Robotics Research 27(2), 197-212, 2008.
4. Nguyen-Tuong, D., M. Seeger and J. Peters: Model Learning with Local Gaussian Process Regression. Advanced Robotics 23(15), 2015-2034, 2009.
5. Peters, J. and S. Schaal: Reinforcement Learning of Motor Skills with Policy Gradients. Neural Networks 21(4), 682-697, 2008.
6. Peters, J. and S. Schaal: Natural Actor-Critic. Neurocomputing 71(7-9), 1180-1190, 2008.
7. Peters, J., K. Mülling, J. Kober, D. Nguyen-Tuong and O. Kroemer: Towards Motor Skill Learning for Robotics. Proceedings of the 14th International Symposium on Robotics Research (ISRR 2009).
8. Kober, J. and J. Peters: Policy Search for Motor Primitives in Robotics. Advances in Neural Information Processing Systems 21: Proceedings of the 2008 Conference, 849-856. (Eds.) Koller, D., D. Schuurmans, Y. Bengio and L. Bottou. Curran, Red Hook, NY, USA.
9. Kober, J., E. Oztop and J. Peters: Reinforcement Learning to Adjust Robot Movements to New Situations. Proceedings of the 2010 Robotics: Science and Systems Conference.
10. Peters, J., K. Mülling and Y. Altun: Relative Entropy Policy Search. Proceedings of the Twenty-Fourth National Conference on Artificial Intelligence (AAAI-10), 2010.
11. Kober, J., K. Mülling, O. Kroemer, C. H. Lampert, B. Schölkopf and J. Peters: Movement Templates for Learning of Hitting and Batting. IEEE International Conference on Robotics and Automation (ICRA 2010).

Last updated: Tuesday, 22.02.2011