Floating Visual Grasp of Unknown Objects
Vincenzo Lippiello, Fabio Ruggiero, and Luigi Villani
Abstract— A new method for fast visual grasp of unknown
objects using a camera mounted on a robot in an eye-in-
hand configuration is presented. The method is composed of
a fast iterative object surface reconstruction algorithm and of
a local grasp planner, evolving in a synchronized parallel way.
The reconstruction algorithm makes use of images taken by a
camera carried by the robot. A reconstruction sphere, virtually
placed around the object, is iteratively compressed towards
the object visual hull, dragging out the fingers attached to it.
Between two steps of the reconstruction process, the planner
moves the fingers, floating on the current reconstructed surface,
according to suitable quality measures. The fingers keep moving
until a local minimum is achieved; then, a new object surface
estimation provided by the reconstruction process is considered.
Quality measures considering both hand and grasp properties
are adopted. Simulations are presented to show the performance
of the proposed algorithm.
I. INTRODUCTION
Grasping and manipulation tasks generally require a priori
knowledge about the object geometry. Autonomous operation
in unstructured environments is a challenging research field,
and the problem of grasping unknown objects, in particular,
has not been widely investigated yet.
One of the first approaches to grasping in unknown
environments can be found in [19], where visual control
of grasping is performed employing visual information to
track both the object and the finger positions. A method to grasp an
unknown object using information provided by a deformable
contour model algorithm is proposed in [11]. Recently,
in [18], an omnidirectional camera is used for object shape
recognition, while grasping is achieved on the basis of a
grasp quality measure, using a soft-fingered hand.
It is easy to recognize that two main tasks have to be
performed to grasp unknown objects, namely,
object recognition/reconstruction and grasp planning.
Different methods have been proposed in the literature to
cope with 3D model reconstruction of objects. The main
differences rely on how the available images are processed and, of course, on the algorithms used for object reconstruction. A number of algorithms can be classified under the so-called volumetric scene reconstruction approach [3]. This category can be further divided into two main groups: the shape-from-silhouettes and the shape-from-photo-consistency algorithms. Another method, proposed in [17], considers a surface that moves towards the object under the influence of internal forces, produced by the surface itself, and external forces, given by the image data.

The research leading to these results has also been supported in part by the SICURA National research project, funded by the Ministry of University and Research, and in part by the DEXMART Large-scale integrating project, which has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement ICT-216239. The authors are solely responsible for the content of this paper; it does not represent the opinion of the Ministry of University and Research or of the European Community, and neither the Ministry nor the Community is responsible for any use that might be made of the information contained therein.

The authors are with PRISMA Lab, Dipartimento di Informatica e Sistemistica, Università degli Studi di Napoli Federico II, via Claudio 21, 80125, Naples, Italy {vincenzo.lippiello, fabio.ruggiero, lvillani}@unina.it
A technique for computing a polyhedral representation of
the visual hull [7], i.e., the set of points in space that project
inside every image silhouette, is studied in [5].
Other approaches rely on the use of apparent contours [2],
[12]; in these cases, the reconstruction is based on the spatio-
temporal analysis of deformable silhouettes.
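As a side illustration of the shape-from-silhouettes idea underlying such visual-hull methods (this is our own minimal 2-D sketch with orthographic views, not an implementation of any of the cited algorithms), a candidate point is kept in the reconstructed hull if and only if it projects inside the silhouette of every available view:

```python
import math

# Our own minimal shape-from-silhouettes sketch (2-D, orthographic views),
# not the cited algorithms: a candidate point belongs to the visual hull
# iff its projection falls inside the silhouette of every view.

def project(point, view_angle):
    """Orthographic projection of a 2-D point onto the image line of a view."""
    x, y = point
    return x * math.cos(view_angle) + y * math.sin(view_angle)

def silhouette(shape_pts, view_angle):
    """Silhouette of a known point set as a 1-D interval on the image line."""
    coords = [project(p, view_angle) for p in shape_pts]
    return min(coords), max(coords)

def carve(grid_pts, views):
    """Keep exactly the grid points whose projections fall inside all silhouettes."""
    return [p for p in grid_pts
            if all(lo <= project(p, a) <= hi for a, (lo, hi) in views)]
```

For a disk observed from four directions, the carved set is the intersection of four slabs: a superset of the object that tightens toward the visual hull as views are added.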
On the other hand, grasp planning techniques rely upon the
choice of grasp quality measures used to select suitable grasp
points. Several quality measures proposed in the literature
depend on the grasp geometry and on the positions of the
contact points. Some of them are based on the properties of
the grasp matrix; others are based on the area of the polygon
created by the contact points or on the external resistant
wrench. Simple geometric conditions to reach an optimal
force closure grasp both in 2-D and in 3-D are found in [10].
The geometric properties of the grasp are used also in [8] to
define quality measures; moreover, suitable task ellipsoids
in the object wrench space are proposed to evaluate grasp
quality also with respect to the particular manipulation task.
A geometrical approach to obtain at least one force closure
grasp on 3D discretized objects is studied in [13], where two
algorithms are investigated: the first finds at least one force
closure grasp, while the second optimizes it to get a locally
optimum grasp.
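To make the grasp-matrix family of measures concrete (this is our own illustration, not the specific measures of the cited works), consider planar frictionless point contacts and take as quality the volume of the grasp wrench ellipsoid, sqrt(det(G G^T)), which vanishes whenever the grasp matrix G loses rank:

```python
# Our sketch of one grasp-matrix-based quality measure: planar frictionless
# point contacts, quality = sqrt(det(G G^T)), i.e. the volume of the grasp
# wrench ellipsoid; it is zero for rank-deficient (non-full-wrench) grasps.

def contact_wrench(p, n):
    """Wrench (fx, fy, torque about the origin) of a frictionless contact."""
    (x, y), (nx, ny) = p, n
    return (nx, ny, x * ny - y * nx)

def grasp_quality(contacts):
    """Ellipsoid-volume measure for the 3 x n planar grasp matrix G."""
    ws = [contact_wrench(p, n) for p, n in contacts]
    # M = G G^T, a symmetric 3 x 3 matrix accumulated over contact wrenches
    M = [[sum(w[i] * w[j] for w in ws) for j in range(3)] for i in range(3)]
    det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
           - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
           + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    return max(det, 0.0) ** 0.5
```

Three contacts whose wrenches span both forces and torque yield a positive score, whereas contacts with parallel normals leave G rank-deficient and score zero.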
Another class of quality measures is based on the eval-
uation of the capability of the hand to realize the optimal
grasp. Therefore, these measures depend on the hand con-
figurations [14]. To plan a grasp for a particular robotic
hand, quality measures depending both on grasp geometry
and hand configuration should be taken into account. In the
literature, only a few papers address the whole problem of
grasping an object using a given robotic hand, able to reach
the desired contact points in a dexterous configuration. Some
examples can be found in [1], [4], [6], while a rich survey
of grasp quality measures can be found in [16].
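One simple way to combine the two families of measures just discussed is a weighted blend of a grasp-geometry score and a hand-dexterity score. The sketch below is purely illustrative: the weight, the two-link planar finger model, and the min-over-fingers aggregation are all our assumptions, not a measure from the cited works.

```python
import math

# Illustrative blend in the spirit of "consider both grasp geometry and
# hand configuration". The weight w, the two-link finger model, and the
# min-over-fingers aggregation are all our own assumptions.

def finger_manipulability(q2, l1=1.0, l2=1.0):
    """Manipulability sqrt(det(J J^T)) of a planar 2-link finger: l1*l2*|sin q2|."""
    return l1 * l2 * abs(math.sin(q2))

def combined_quality(grasp_score, finger_joints, w=0.5):
    """Blend a grasp-geometry score with the worst finger's dexterity score."""
    hand_score = min(finger_manipulability(q2) for (_q1, q2) in finger_joints)
    return w * grasp_score + (1.0 - w) * hand_score
```

A grasp whose contact points are geometrically good but which forces one finger near a singular configuration (q2 close to 0) is thus penalized.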
In this paper, a new method for fast visual grasping of un-
known objects using a camera mounted on a robot in an eye-
in-hand configuration is presented. This method is composed
of an iterative object surface reconstruction algorithm and of
a local grasp planner, which evolve in a synchronized parallel way.
The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 11-15, 2009, St. Louis, USA. 978-1-4244-3804-4/09/$25.00 ©2009 IEEE
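The alternation between reconstruction steps and finger-floating steps described above can be sketched as follows. This is purely illustrative: the actual method is 3-D and driven by real images, while every function, name, and numeric choice below is our own toy 2-D assumption (a circle shrinking toward a star-shaped boundary, with the contact-polygon area as the geometric quality measure).

```python
import math

# Toy 2-D sketch of the reconstruction/planning alternation (all names and
# numbers are our own assumptions, not the authors' implementation): a
# "reconstruction circle" shrinks toward a star-shaped object, and between
# shrink steps the finger angles hill-climb a simple quality measure.

def object_radius(theta):
    """Stand-in for the unknown object boundary (a 3-lobed star)."""
    return 1.0 + 0.3 * math.cos(3.0 * theta)

def shrink_step(radii, thetas, rate=0.5):
    """One reconstruction iteration: each surface sample moves toward the object."""
    return [r + rate * (object_radius(t) - r) for r, t in zip(radii, thetas)]

def contact(thetas, radii, angle):
    """Contact point of a finger floating on the current estimated surface."""
    a = angle % (2.0 * math.pi)
    dist = lambda t: min(abs(t - a), 2.0 * math.pi - abs(t - a))
    i = min(range(len(thetas)), key=lambda k: dist(thetas[k]))
    return radii[i] * math.cos(a), radii[i] * math.sin(a)

def quality(thetas, radii, finger_angles):
    """Grasp quality: area of the polygon spanned by the contact points."""
    pts = [contact(thetas, radii, a) for a in finger_angles]
    area = sum(x1 * y2 - x2 * y1
               for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(area) / 2.0

def plan_step(thetas, radii, finger_angles, step=0.05):
    """Float the fingers on the current surface until a local quality maximum."""
    improved = True
    while improved:
        improved = False
        for i in range(len(finger_angles)):
            for d in (step, -step):
                trial = list(finger_angles)
                trial[i] += d
                if quality(thetas, radii, trial) > quality(thetas, radii, finger_angles) + 1e-9:
                    finger_angles, improved = trial, True
    return finger_angles

def floating_visual_grasp(n_samples=90, n_fingers=3, n_steps=6):
    """Alternate reconstruction steps and local grasp-planning steps."""
    thetas = [2.0 * math.pi * k / n_samples for k in range(n_samples)]
    radii = [3.0] * n_samples                       # sphere (here: a circle) around the object
    fingers = [2.0 * math.pi * k / n_fingers for k in range(n_fingers)]
    for _ in range(n_steps):
        radii = shrink_step(radii, thetas)          # surface estimate improves...
        fingers = plan_step(thetas, radii, fingers) # ...and fingers re-settle on it
    return radii, fingers
```

After a few iterations the surface estimate converges to the object boundary while the fingers, having only ever moved between successive surface updates, sit at a local maximum of the quality measure on the final surface.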