Towards Imitation Learning of Grasping Movements by an Autonomous Robot

Jochen Triesch 1, Jan Wieghardt 2, Eric Maël 2, and Christoph von der Malsburg 2,3

1 Department of Computer Science, University of Rochester, Rochester (NY), USA
triesch@cs.rochester.edu
2 Institut für Neuroinformatik, Ruhr-Universität Bochum, D-44780 Bochum, Germany
{wieghardt,mael,malsburg}@neuroinformatik.ruhr-uni-bochum.de
3 Lab. for Computational and Biological Vision, University of Southern California, Los Angeles (CA), USA

Abstract. Imitation learning holds the promise of robots which need not be programmed but can instead learn by observing a teacher. We present recent efforts at our laboratory towards endowing a robot with the capability of learning to imitate human hand gestures. In particular, we are interested in grasping movements. The aim is a robot that learns, e.g., to pick up a cup at its handle by imitating a human teacher grasping it in this way. Our main emphasis is on the computer vision techniques for finding and tracking the human teacher's grasping fingertips. We present first experiments and discuss limitations of the approach and planned extensions.

1 Introduction

Imitation learning has received much attention recently, since researchers share the hope that it can reduce the amount of programming or teach-in required for useful robot behavior and replace it with efficient learning. Bakker & Kuniyoshi define imitation as follows: "Imitation takes place when an agent learns a behavior from observing the execution of that behavior by a teacher." [1]. During classical teach-in, a human operator has to teach the robot the desired arm trajectories by explicitly moving all of the robot's joints into the desired positions along the trajectory, a very cumbersome process. Obviously, it would be far more efficient if the robot simply learned by following the example of a human doing the task.
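To make the teach-in process concrete, it amounts to recording joint-space waypoints while the operator guides the arm, and later replaying them by interpolation. The following is a minimal illustrative sketch, not the system described in this paper; the function names and the linear interpolation scheme are our own assumptions:

```python
import numpy as np

def record_waypoint(waypoints, joint_angles):
    # Store one hand-guided joint configuration (angles in radians).
    waypoints.append(np.asarray(joint_angles, dtype=float))

def replay_trajectory(waypoints, steps_between=10):
    # Linearly interpolate between consecutive recorded waypoints
    # to reproduce the taught trajectory as a dense list of setpoints.
    trajectory = []
    for start, end in zip(waypoints[:-1], waypoints[1:]):
        for t in np.linspace(0.0, 1.0, steps_between, endpoint=False):
            trajectory.append((1.0 - t) * start + t * end)
    trajectory.append(waypoints[-1])
    return trajectory

# Example: a three-joint arm taught two configurations by hand.
wps = []
record_waypoint(wps, [0.0, 0.0, 0.0])
record_waypoint(wps, [0.5, 1.0, -0.3])
path = replay_trajectory(wps, steps_between=5)
```

Even this toy version makes the cost visible: every waypoint of every trajectory must be demonstrated joint by joint, which is exactly the burden imitation learning aims to remove.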
Imitation learning is thought to be more efficient than, e.g., reinforcement learning because the robot is provided with very rich information about the solution of the problem. While in reinforcement learning the robot typically needs hundreds of trials before behaving usefully, imitation promises one-shot learning even for difficult tasks. A problem of imitation learning is that it requires complex perceptual abilities which allow the robot to "observe" the

A. Braffort et al. (Eds.): GW'99, LNAI 1739, pp. 73–84, 1999.
© Springer-Verlag Berlin Heidelberg 1999