Human-Inspired Selection of Grasp Hypotheses for Execution on a Humanoid Robot

Markus Przybylski, Tamim Asfour, Rüdiger Dillmann
Institute for Anthropomatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
Email: {markus.przybylski,asfour,dillmann}@kit.edu

René Gilster, Heiner Deubel
Department Psychologie, Ludwig-Maximilians-Universität, Munich, Germany
Email: {rene.gilster,heiner.deubel}@psy.lmu.de

Abstract—Future humanoid robots will need the capability to grasp and manipulate arbitrary objects in order to assist people in their homes and to interact with them and with the environment. In this work, we present an approach to grasping known objects. Our approach consists of an offline grasp planning step, a rating step that determines the human likeness of the grasps, and an execution step in which the most suitable grasp is performed on a humanoid robot. We focus in particular on the rating step, where we use human grasping data to rate pre-computed grasp hypotheses from our grasp planner in order to select the most human-like feasible grasp for execution on the real robot. We present the details of our method together with experiments on our humanoid robot ARMAR-III.

I. INTRODUCTION

The capability to grasp and manipulate objects is crucial for future service robots that will help people in their daily lives. In robotics, grasping a known object consists of several steps. In the first step, a grasp planning method generates a set of grasp hypotheses, given the object, the robotic hand, and possibly obstacles in the environment. In the second step, a feasible grasp is selected, which is then executed on the robot in the third step. Grasp selection is usually subject to several constraints. Grasps can be rated by their force-closure score [1], which indicates how well they can resist external forces and torques.
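A common instantiation of such a force-closure score is the epsilon quality metric: the radius of the largest 6D wrench-space ball, centered at the origin, that fits inside the convex hull of the contact wrenches; a positive value means the grasp is force closure. The following sketch is illustrative only and is not the implementation behind [1] or our planner; the friction coefficient, the cone discretization, and the toy three-finger sphere grasp are all assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrenches(p, n, mu=0.5, num_edges=8):
    """Approximate the friction cone at contact point p (inward unit
    normal n) with num_edges edge forces; return the 6D wrenches
    [force, torque] about the object's center."""
    n = n / np.linalg.norm(n)
    # build a tangent basis orthogonal to the contact normal
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    wrenches = []
    for k in range(num_edges):
        phi = 2.0 * np.pi * k / num_edges
        f = n + mu * (np.cos(phi) * t1 + np.sin(phi) * t2)
        f /= np.linalg.norm(f)                    # unit contact force
        wrenches.append(np.hstack([f, np.cross(p, f)]))
    return wrenches

def epsilon_quality(wrenches):
    """Distance from the wrench-space origin to the nearest facet of the
    convex hull of the contact wrenches; > 0 iff force closure."""
    hull = ConvexHull(np.array(wrenches))
    # each row of hull.equations is [facet normal, offset] with
    # normal . x + offset <= 0 for interior points
    return -np.max(hull.equations[:, -1])

# Toy example: three frictional contacts spaced 120 degrees apart on the
# equator of a unit sphere -- a classic force-closure grasp.
contacts = [np.array([np.cos(t), np.sin(t), 0.0])
            for t in (0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0)]
W = [w for p in contacts for w in contact_wrenches(p, -p)]
eps = epsilon_quality(W)
print(f"epsilon quality: {eps:.3f}")  # positive: the grasp is force closure
```

Note that the metric only measures resistance to worst-case external wrenches; it says nothing about reachability or human likeness, which is precisely the gap addressed in this paper.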
In real-world environments, obstacles also have to be taken into account, and the robot's embodiment can render many grasp hypotheses infeasible because they are not reachable due to kinematic constraints. Yet, with grasp planners able to generate hundreds or thousands of grasps, many feasible grasps often remain to choose from despite the constraints stated above. Moreover, the force-closure score does not tell us whether a human being would choose a specific grasp. It seems desirable for humanoid robots to act human-like. We are convinced that a humanoid robot that performs tasks and grasps objects the way a human would will be more readily accepted by people than a robot that grasps objects in unintuitive ways. We therefore believe that grasp selection by humanoid robots should be motivated by human grasping. This leads to the question of how to decide which grasps are more human-like than others. In this paper, we present an intuitive approach to rating grasps generated by our grasp planner, given measured data from the human grasping process.

II. RELATED WORK

Before we describe our approach in detail, we give an overview of related work in the field. In recent years, a multitude of humanoid robots have been presented, for example HRP2 ([2],[3]), ARMAR ([4],[5]), Dexter [6], Justin [7], and NASA Robonaut [8], with an important focus of research on grasping and manipulation, including dealing with furniture and doors. Work has also been conducted on pre-grasp manipulation [9], where a flat object is slid to an intermediate position from which it can be grasped more easily. In the area of grasp planning for known objects, a number of simulation-based approaches were introduced, based on simulation environments such as GraspIt! [10], OpenRAVE [11], and Simox [12]. In the context of the grasping-by-parts paradigm, several authors proposed shape approximation techniques to prune the search space for grasp planning. The first of these were Miller et al.
[13], who manually decomposed objects into basic shape primitives such as boxes, spheres, cylinders, and cones. Goldfeder et al. [14] used superquadrics, and Huebner et al. [15] proposed an approach to decompose objects into a set of minimum volume bounding boxes. Following the same line of thinking, but guided by the idea of improved shape approximation accuracy, we proposed in our earlier work the use of object representations based on the medial axis [16] and the medial axis transform [17]. Aleotti et al. [18] presented an approach to grasp planning based on the Reeb graph. Vahrenkamp et al. [19] proposed a method that integrates grasp planning with the search for collision-free grasping motions. As for the human component in grasp selection, a human-guided grasp measure was recently introduced by Balasubramanian et al. [20]. In their experiments, the authors let test subjects interact haptically with a robot so that the robot arm would grasp objects in a way the test subjects considered intuitive. The authors identified the orthogonality of the wrist orientation with respect to the object's principal axis as a measure that humans optimize when selecting grasps for robots.
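The orthogonality criterion identified in [20] can be sketched numerically under a simplified reading where the wrist orientation is reduced to a single 3D axis and the object's principal axis is estimated by PCA over a point cloud. The function name, the scoring convention, and the toy point cloud below are our own illustrative assumptions, not the formulation used in [20].

```python
import numpy as np

def orthogonality_score(wrist_axis, object_points):
    """Return 1.0 when the wrist axis is orthogonal to the object's
    principal axis (estimated via PCA) and 0.0 when parallel."""
    centered = object_points - object_points.mean(axis=0)
    # first right singular vector = direction of largest variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    principal_axis = vt[0]
    w = wrist_axis / np.linalg.norm(wrist_axis)
    return 1.0 - abs(float(w @ principal_axis))

# Toy example: an elongated object along the x-axis, approached with
# the wrist axis along z -- nearly perfectly orthogonal.
points = np.array([[-2.0, 0.0, 0.0], [-1.0, 0.1, 0.0], [0.0, 0.0, 0.0],
                   [1.0, -0.1, 0.0], [2.0, 0.0, 0.0]])
score = orthogonality_score(np.array([0.0, 0.0, 1.0]), points)
print(f"orthogonality score: {score:.2f}")
```

A scalar of this kind could serve as one ingredient in rating how human-like a pre-computed grasp hypothesis is, alongside feasibility and force-closure criteria.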