Unions of Balls for Shape Approximation in Robot Grasping
Markus Przybylski, Tamim Asfour and Rüdiger Dillmann
Abstract— Typical tasks of future service robots involve
grasping and manipulating a large variety of objects differing
in size and shape. Generating stable grasps on 3D objects is
considered to be a hard problem, since many parameters such
as hand kinematics, object geometry, material properties and
forces have to be taken into account. This results in a high-
dimensional space of possible grasps that cannot be searched
exhaustively. We believe that the key to finding stable grasps
efficiently is to use a special representation of the object
geometry that can be easily analyzed. In this paper, we present
a novel grasp planning method that evaluates local symmetry
properties of objects to generate only candidate grasps that
are likely to be of good quality. We achieve this by computing
the medial axis which represents a 3D object as a union of
balls. We analyze the symmetry information contained in the
medial axis and use a set of heuristics to generate geometrically
and kinematically reasonable candidate grasps. These candidate
grasps are tested for force-closure. We present the algorithm
and show experimental results on various object models using
an anthropomorphic hand of a humanoid robot in simulation.
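The union-of-balls idea can be illustrated in 2D: the Euclidean distance transform of a binary shape gives, at every interior pixel, the radius of the largest inscribed ball centered there, and local maxima of that field approximate the medial balls whose union recovers the shape. The following sketch is only an illustration of this concept under those assumptions, not the paper's 3D implementation; all function names are made up.

```python
import numpy as np
from scipy import ndimage

def medial_balls(mask):
    """Approximate a binary 2D shape by medial balls.

    The distance transform gives each interior pixel the radius of
    the largest inscribed ball centered there; pixels where this
    field attains a local maximum serve as ball centers.
    """
    dist = ndimage.distance_transform_edt(mask)
    local_max = (dist == ndimage.maximum_filter(dist, size=3)) & mask
    ys, xs = np.nonzero(local_max)
    return np.column_stack([ys, xs]), dist[ys, xs]

def ball_union(shape, centers, radii):
    """Rasterize the union of balls back into a binary mask."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    out = np.zeros(shape, dtype=bool)
    for (cy, cx), r in zip(centers, radii):
        out |= (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return out

# Toy shape: a filled 12x20 rectangle inside a 20x30 grid.
mask = np.zeros((20, 30), dtype=bool)
mask[4:16, 5:25] = True
centers, radii = medial_balls(mask)
recon = ball_union(mask.shape, centers, radii)
coverage = recon[mask].mean()   # fraction of the shape covered by the balls
```

The discrete local-maximum test drops the shallow corner branches of the medial axis, so the reconstructed union covers most, but not all, of the shape; the method in the paper works on exact 3D medial axes instead.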
I. INTRODUCTION AND RELATED WORK
An increasingly aging society will benefit from intelligent
domestic robots that are able to assist human beings in
their homes. The ability to grasp objects is crucial to many
supporting activities a service robot might perform, such as
serving a drink, tidying up or watering the flowers. Human
beings perform grasps intuitively on almost
any kind of object. In contrast, grasping is a challenging
problem for robots. Knowledge of hand kinematics, object
geometry, and physical and material properties is necessary to
find a good grasp, making the space of possible candidate
grasps intractably large to search in a brute-force manner.
This is especially the case for modern dexterous robot hands
with an increasing number of degrees of freedom.
A. Grasp Planning
Many approaches for grasp planning have been developed
in the past. Grasp synthesis on the contact level concentrates
primarily on finding a predefined number of contact
points without considering hand geometry [1]. Some work
on automatic grasp synthesis focuses especially on object
manipulation tasks ([2],[3]). Shimoga [2] presents a survey
on measures for dexterity, equilibrium, stability, dynamic
behavior and algorithms to synthesize grasps with these
properties. Li et al. [4] recorded grasps for basic objects
using motion capture and used this information to perform
shape matching between the inner surface of the hand and
novel objects. The resulting candidate grasps were clustered
and pruned depending on the task.
(This work was supported by the EU through the project GRASP.
All authors are with the Institute for Anthropomatics, Karlsruhe
Institute of Technology, Karlsruhe, Germany.
{markus.przybylski, asfour, dillmann}@kit.edu)
Since simulators such as GraspIt! [5], OpenRAVE [6]
and Simox [7] have become available, it is possible to
simulate candidate grasps with robot hand models on object
models, where hand kinematics, hand and object geometries
as well as physical and material properties and environmental
obstacles can be taken into account. In the recent past, many
researchers developed grasp planning methods based on
these simulation environments. Berenson et al. (see [8],[9])
developed a grasp scoring function that considers not only
grasp stability but also environmental obstacles and
kinematic reachability. In [10], an integrated
grasp and motion planning algorithm is presented where
the task of finding a suitable grasping pose is combined
with searching for collision-free grasping motions. Ciocarlie
et al. [11] introduced the concept of eigengrasps which
allows for grasp planning in a low-dimensional subspace
of the actual hand configuration space. Goldfeder et al.
[12] used the eigengrasp planner to build a grasp database
containing several hands, a multitude of objects and the
associated grasps. They used Zernike descriptors to exploit
shape similarity between object models to synthesize grasps
for objects by searching for geometrically similar objects in
their database. They extended this approach to novel objects
[13], where partial 3D data of an object are matched and
aligned to known objects in the database to find suitable
grasps.
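The eigengrasp idea above can be stated compactly: collect many hand postures, take their principal components, and let the planner search over a few component amplitudes instead of all joint angles. The following is a minimal numpy sketch on synthetic data, an illustration of the concept rather than Ciocarlie et al.'s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recorded" postures of a 20-DOF hand whose joints,
# by construction, vary along only 2 latent synergies plus noise.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 20))
postures = latent @ mixing + 0.01 * rng.normal(size=(200, 20))

# Eigengrasps = principal components of the posture set.
mean = postures.mean(axis=0)
_, _, vt = np.linalg.svd(postures - mean, full_matrices=False)
eigengrasps = vt[:2]              # top-2 components, shape (2, 20)

def posture_from_amplitudes(a):
    """Map 2 eigengrasp amplitudes to a full 20-DOF posture."""
    return mean + a @ eigengrasps

# A planner now searches the 2D amplitude space; any stored posture
# is well approximated by its projection into that subspace.
amps = (postures - mean) @ eigengrasps.T
recon_err = (np.linalg.norm(postures - (mean + amps @ eigengrasps))
             / np.linalg.norm(postures))
```

Because the synthetic data is nearly rank-2, the two amplitudes reconstruct each 20-DOF posture almost exactly, which is precisely why a low-dimensional search can still find good hand configurations.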
A number of simulator-based approaches to grasp planning
rely on shape approximation of 3D object models. The basic
idea underlying these approaches is that many objects can be
decomposed into component parts that can be represented by
simplified geometric shapes. Then rules are defined to generate
candidate grasps on these components which allows for
pruning of the search space of possible hand configurations.
This concept is also known as grasping by parts. The first
method in this context was presented by Miller et al. [14]
who used boxes, spheres, cylinders and cones to approximate
the shape of the object. However, the user has to perform the
decomposition of the object into these primitives manually.
Goldfeder et al. [15] presented a method that automatically
approximates an object’s geometry by a tree of superquadrics
and generates candidate grasps on those. Huebner et al. [16]
developed an algorithm that decomposes objects into a set of
minimum volume bounding boxes. While these approaches
significantly reduce the complexity of grasp planning, this
comes at a price. Many grasps a human would intuitively
use might not be found due to poor object geometry
approximation. Especially box decomposition yields only a
The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 18-22, 2010, Taipei, Taiwan