IEEE Instrumentation and Measurement
Technology Conference
Anchorage, AK, USA, 21-23 May 2002
Abstract - This paper introduces an automatic approach for
estimating the registration between successive viewpoints of a laser
range camera. The approach takes advantage of the raw measurements
and requires neither an external device for pose estimation nor
complex feature extraction or triangulation. Assuming only object
rigidity and some overlap between the scanned areas, the approach
estimates the six rotation and translation parameters that
link 3-D scans gathered from different viewpoints. A compact
modified Gaussian sphere representation is used to encode a simple
planar-patch approximation of the objects' surfaces and to validate the
mapping between the measurements as the appropriate rotation
and translation parameters are computed. This solution results in
an important reduction of the computational workload while providing
sufficient accuracy for most robot navigation applications. The
proposed approach is demonstrated in an experimental context
using real range measurements collected from a series of
viewpoints.
I. INTRODUCTION
Building virtual representations of 3D environments from
range measurements requires that data are gathered from a
large number of viewpoints. This requirement results from the
complexity of objects to be modelled, from the limited field of
view of sensors and from occlusions that occur between
objects. Each dataset gathered from a given point of view is
defined with respect to a local sensor-based reference frame.
As a result, the sensor position and orientation at each
viewpoint must be precisely estimated to ensure that the
information obtained from every source is merged in a
consistent way to build a 3D model. The registration problem
consists of determining the geometric relationship between the
different views provided by the sensor. Imprecise registration
between viewpoints prevents the computation of reliable models for
collision avoidance or fine interaction between a robot and its
environment [11].
The sensor pose can be measured with external means such
as magnetic position and orientation trackers, robotic arms or
even CCD cameras providing images from which the sensor
position and orientation can be extracted. The latter solution
requires very complex image processing and pattern recognition
algorithms that are time consuming and rarely fully reliable. The
first two approaches appear more realistic. A magnetic position
and orientation tracking device, such as the Fastrak system
commercialized by Polhemus Inc., has been tested in our robotic
workcell. Unfortunately, the magnetic fields used by the device to
track the pose proved very sensitive to the environment. In an
experimental setup containing a large number of metallic parts,
such as computer boxes, power supplies and robotic equipment, the
device fails to provide the required pose information except in
very limited circumstances and under constrained displacements.
When a robotic arm is used to move the sensor from one
viewpoint to another, the internal encoders of the robot also
provide a good estimate of the sensor position and orientation.
However, our experiments revealed that this information still
requires refinement in order to enhance the quality of the virtual
representation of the environment. Moreover, the sensor is then
constrained to the robot's physical workspace and cannot access
narrow areas of the environment.
An interesting solution to estimate range sensor registration
between successive viewpoints without any peripheral devices
is to take advantage of the raw range data provided by the
sensor. Assuming that there is an overlap between the areas of
the scene that are measured from each viewpoint, it becomes
possible to search for matching characteristics in both sets of
information and then compute the registration that brings the
projections of those matching elements into alignment.
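Once corresponding elements have been identified in two overlapping scans, the rotation and translation that superpose them can be recovered in closed form. As an illustration only (not the method proposed in this paper), the following sketch estimates a rigid transform from matched 3-D point pairs using the standard SVD-based least-squares solution; all names are illustrative:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares rigid registration: find R, t such that Q ~ P @ R.T + t.

    P, Q: (N, 3) arrays of corresponding 3-D points measured from
    two different viewpoints.  Uses the SVD (Kabsch) solution.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)       # point-set centroids
    H = (P - cp).T @ (Q - cq)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # optimal rotation
    t = cq - R @ cp                               # optimal translation
    return R, t
```

This closed-form step assumes the correspondences are already known; finding them is precisely the difficult part addressed by the registration methods discussed below.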
Although the registration problem between range measurements has
long been studied in computer vision, no definitive solution has
yet been found. Many variations of the widely known iterative
closest point (ICP) algorithm [1] have been proposed to match
characteristic point sets [3, 10], curves, meshes [2, 4] or
parametric surfaces [8]. Some of them use both range and
intensity data, also provided by most range sensors, to
improve their selection of the control points to be
matched [7, 12]. These algorithms generally provide good
results, but the search for characteristic curves or surfaces is
very complex and time consuming.
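For reference, the ICP family cited above alternates a nearest-neighbour correspondence search with a closed-form rigid-transform update until the alignment converges. The brute-force sketch below illustrates the principle only; the function name and parameters are assumptions, and a practical implementation would replace the O(N·M) distance table with a spatial index:

```python
import numpy as np

def icp(src, dst, iters=30):
    """Minimal iterative closest point: align src (N, 3) onto dst (M, 3).

    Each iteration matches every source point to its nearest
    destination point, then applies the closed-form SVD (Kabsch)
    rigid-transform update.  Returns the accumulated R, t.
    """
    R_acc, t_acc = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Nearest dst point for every current src point (brute force).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        match = dst[d.argmin(axis=1)]
        # Closed-form least-squares rigid step on the matched pairs.
        cp, cq = cur.mean(axis=0), match.mean(axis=0)
        H = (cur - cp).T @ (match - cq)
        U, _, Vt = np.linalg.svd(H)
        s = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, s]) @ U.T
        t = cq - R @ cp
        cur = cur @ R.T + t                        # apply incremental step
        R_acc, t_acc = R @ R_acc, R @ t_acc + t    # accumulate transform
    return R_acc, t_acc
```

As the literature cited above observes, the quality of the result depends heavily on the initial alignment and on how the control points are selected, which motivates the many ICP variants.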
Moreover, research on the topic of registration generally assumes
that full range images are directly available from the sensors. As
a result, matching characteristics are sought between such full
images and geometric transformations are computed from there. This
framework does not reflect reality, because the majority of range
sensors currently available on the market, and even prototypes
found in laboratories, do not provide such full images by
themselves. They rather generate single points or scan lines of
range measurements [6]. Those sensors that do generate full images
rely on an external mechanical device to translate the sensor or
change its orientation [9]. This solution is comparable to using a
robot to move the sensor and is similarly sensitive to
registration errors.
Scan-Based Registration of Range Measurements
C. Chen, P. Payeur
Vision, Imaging, Video and Audio Research Laboratory
School of Information Technology and Engineering
University of Ottawa
Ottawa, Ontario, Canada
[chenadiu,ppayeur]@site.uottawa.ca
0-7803-7218-2/02/$10.00 ©2002 IEEE