OBJECTS LAYOUT GRAPH FOR 3D COMPLEX SCENES
A. Adán (1), P. Merchán (2), S. Salamanca (2), A. Vázquez (1), M. Adán (1), C. Cerrada (3)
(1) Escuela Superior de Informática. UCLM. Spain. Antonio.Adan@uclm.es
(2) Escuela de Ingenierías Industriales. UEX. Spain. pmerchan@unex.es
(3) Escuela T.S.I. Industriales. UNED. Spain. ccerrada@ieec.uned.es
ABSTRACT
This paper shows how to extract information about the parts of a complex scene and their layout when only a single range image is available. In the worst case, the complexity of the scene includes: no shape restrictions, shadows, occlusion, clutter, contact between objects, and untextured surfaces viewed at oblique angles. This work is a prerequisite for carrying out further robot interaction in the scene. The process is based on a novel 3D range data segmentation technique that avoids most of the restrictions imposed by other techniques. Using the segmented 3D parts, the method classifies object silhouettes, which allows us to build a layout graph of the objects in the scene. A brief description of this method and experimental results are presented throughout the paper.
1. MOTIVATION
Suppose that we have a single view of a complex scene and a robot has to manipulate the objects in it. Robot interaction (grasping, pushing, touching, etc.) in such a scene is very difficult unless knowledge about the parts and their layout in the scene is provided in advance. In this paper we address the problem of segmenting the parts and establishing their relative positions in complex scenes through an occlusion study. The difficulty grows with the degree of complexity of the scene. For instance, in the case of untextured images, any processing technique applied to the intensity image is clearly inefficient for extracting the objects that compose the scene. That is why we have adopted solutions based on range image segmentation techniques instead of intensity image processing. Figure 1 presents the real environment with the components that we use in our work: the prototype scene, the fixed range sensor and the robot.
Range image segmentation strategies can be categorized into edge-based approaches and region-based approaches. Reference [1] offers an evaluation and experimental comparison of them. In edge-based approaches, the points located on edges are identified first, followed by edge linking and the definition of contours and surfaces. A wide variety of algorithms in this field, where edges or contours are segmented, can be found in [2], [3] and [4]. In region-based approaches, a number of seed regions are chosen first. These seed regions grow by adding neighbouring points that satisfy some compatibility threshold. Several methods based on region growing are proposed in [5], [6] and [7].
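The region-growing scheme described above can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it grows a single region from a hypothetical seed pixel in a depth image, using a simple depth-difference threshold as the compatibility test (real systems typically compare surface normals or fit local planes instead).

```python
import numpy as np

def region_grow(depth, seed, thresh=0.01):
    """Grow one region from `seed` (row, col) in a range image.

    A 4-neighbour joins the region when its depth differs from the
    current point by less than `thresh`. This depth-only test is an
    illustrative stand-in for richer compatibility criteria.
    """
    h, w = depth.shape
    visited = np.zeros((h, w), dtype=bool)
    region = []
    stack = [seed]
    visited[seed] = True
    while stack:
        r, c = stack.pop()
        region.append((r, c))
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                if abs(depth[nr, nc] - depth[r, c]) < thresh:
                    visited[nr, nc] = True
                    stack.append((nr, nc))
    return region
```

For a full segmentation, seeds are selected repeatedly on as-yet-unlabelled points until the whole image is covered; the choice of seeds and of the compatibility threshold is what distinguishes the methods in [5], [6] and [7].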
Fig. 1. Robot interaction model in a complex scene.
On the other hand, the relative location of a segment in the scene is a problem that can be solved with silhouette-based techniques. In [8], Super et al. use a part-based shape retrieval method as a hypothesizer for their system, thereby avoiding the cost of comparing every object model in the database. Serratosa et al. [9] define function-described graphs (FDGs), which are applied to 3D matching and human face recognition. An aspect-graph approach that measures the similarity between two views with a 2D shape metric is presented in [10]. In [11], Sebastian et al. propose a recognition framework based on matching shock graphs of 2D shape outlines.
All the works cited above involve some kind of restriction: limited object poses [10], the need for several views of the scene [9], absence of occlusion [10, 11], or applicability only to simple scenes [8, 9]. Our segmentation technique is related to the region-based approaches but differs from most segmentation strategies because a distributive segmentation notion is introduced in the solution. This makes the method robust and insensitive to the restrictions imposed on other techniques.
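To make the notion of a layout graph concrete, the following sketch builds a directed graph from segmented parts: an edge a -> b records that segment a occludes segment b. The inputs (per-segment mean depth and pairs of segments whose silhouettes share an occlusion boundary) are illustrative assumptions, not the paper's actual data structures, which derive the relation from the silhouette classification.

```python
def layout_graph(mean_depth, touching_pairs):
    """Directed layout graph: edge (a, b) means segment a occludes b.

    mean_depth: dict mapping segment id -> mean range to the sensor
                (hypothetical summary statistic).
    touching_pairs: iterable of (a, b) segment-id pairs whose
                    silhouettes share an occlusion boundary.
    """
    edges = set()
    for a, b in touching_pairs:
        # The segment nearer the sensor is the occluder.
        if mean_depth[a] < mean_depth[b]:
            edges.add((a, b))
        else:
            edges.add((b, a))
    return edges
```

Such a graph directly supports planning robot interaction: unoccluded objects (nodes with no incoming edge) are the natural candidates for grasping first.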
0-7803-9134-9/05/$20.00 ©2005 IEEE