An open system for 3D data acquisition
from multiple sensors
Francesco Isgrò, Francesca Odone, Alessandro Verri
INFM - DISI
Università di Genova
Genova, Italy
{fisgro,odone,verri}@disi.unige.it
Abstract— This paper describes work in progress on a multi-sensor system for 3D data acquisition. The core of the system is a 3D range scanner based on the well-known active triangulation procedure and made of a camera, a laser light emitter and a software-driven motor. The core system allows us to acquire dense point clouds of objects of about 50 cm in size. The system currently hosts a second camera, and is thus able to perform 3D reconstruction from two slightly different viewpoints and produce denser point clouds. Moreover, since the motor can be driven back to its original position, multiple scans can take place to obtain smoother surfaces and additional information, such as texture and reliability measures. An alternative way of obtaining texture information is by means of a linear camera, also included in the system. We present results obtained with the current system, and describe extensions of the system for estimating noise and producing a more complex geometry description.
I. INTRODUCTION
The goal of machine vision is to make computers see. In less philosophical terms, the main objective of this discipline is to extract from images information that can be used to perform some task. Among the various tasks that may need computer vision modules we mention remote sensing [14], [13], inspection [17], robot guidance [19], medical applications [7], etc.
For most applications some kind of 3D description of the
scene is required. A variety of 3D reconstruction algorithms
exist, and they have been grouped into several classes [23], [6]. The approaches can be divided into two main classes: passive and active methods.
The first class includes all those methods that do not use any kind of energy to aid the sensors, such as stereopsis [21] or shape from shading [25]. They rely only on the imaging hardware, so in general they need very simple set-ups, but they have a certain number of challenges to overcome.
Active methods project energy (e.g. a pattern of light, sonar
pulses) onto the scene and detect its position to perform the measurement, or exploit the effect of controlled changes of some sensor parameters (e.g. focus). Active range sensors exploit a variety of physical principles; examples are radars and sonars [24], Moiré interferometry [10], focusing [16], and
triangulation [3].
The system we describe in this paper is based on the
active triangulation paradigm. The basic geometry for an active
triangulation system is shown in Figure 1.
Fig. 1. Structured light systems use triangulation methods to obtain the 3D measurements. This is a description of the system in the camera reference frame: Z is the optical axis, O is the origin, and b is the baseline, i.e., the distance between the optical centre and the laser.
A light projector
(typically a laser) is placed at a certain distance from the centre
of projection of a pin-hole camera. The projector emits a plane
of light intersecting the scene surfaces in a planar curve called
the stripe, which is observed by the camera. If the position of
the camera with respect to the laser plane is known, it is possible to recover the 3D structure by simple triangulation (see Figure 1).
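The triangulation step can be sketched as a ray-plane intersection: each pixel on the stripe defines a viewing ray through the camera's optical centre, and the known laser plane fixes the depth along that ray. A minimal sketch, assuming a simple pin-hole model with focal length f (in pixels) and the laser plane expressed in the camera reference frame (all names and values are illustrative, not the authors' implementation):

```python
import numpy as np

def triangulate_point(u, v, f, plane_n, plane_d):
    """Intersect the viewing ray of pixel (u, v) with the laser plane.

    The camera is a pin-hole at the origin looking down +Z with focal
    length f.  The laser plane satisfies plane_n . X = plane_d in the
    camera reference frame.
    """
    ray = np.array([u / f, v / f, 1.0])   # direction of the viewing ray
    t = plane_d / np.dot(plane_n, ray)    # depth of the intersection along the ray
    return t * ray                        # 3D point in the camera frame

# Illustrative example: laser plane x = 0.1 m (normal along X,
# offset equal to a 10 cm baseline b).
p = triangulate_point(u=50.0, v=0.0, f=500.0,
                      plane_n=np.array([1.0, 0.0, 0.0]), plane_d=0.1)
# -> p = [0.1, 0.0, 1.0], i.e. a point 1 m along the optical axis
```

In a real scanner the plane parameters come from the laser-camera calibration, and the stripe pixel (u, v) is detected in the image before triangulation.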
The system we are developing (shown in Figure 2) is a
multi-sensor system, the core of which is a multi-camera
3D range scan based on the active triangulation principle.
The objects are scanned while moving on a conveyor belt controlled by a software-driven motor. The high accuracy of the motor control makes it easy to obtain multiple scans of the same object. The advantage of a multi-camera system is that some problems generated by self-occlusions on the object surface can be overcome, since there are different viewpoints. The measurements from the different views should not need to be registered (e.g., using an ICP algorithm [2]), as the system is calibrated and the measurements are expressed with respect to the same reference frame.
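Because the rig is calibrated, points measured by the second camera can be mapped into the common reference frame with the known rigid transform, so no ICP-style registration is required. A minimal sketch, where the rotation R and translation t are assumed known from calibration (the names and values below are illustrative):

```python
import numpy as np

def to_reference_frame(points, R, t):
    """Map an Nx3 point cloud from a camera's frame to the common
    reference frame via the rigid transform X_ref = R @ X_cam + t."""
    return points @ R.T + t

# Illustrative extrinsics: second camera translated 0.2 m along X,
# no rotation.
R = np.eye(3)
t = np.array([0.2, 0.0, 0.0])
cloud_cam2 = np.array([[0.0, 0.0, 1.0]])
cloud_ref = to_reference_frame(cloud_cam2, R, t)
# -> [[0.2, 0.0, 1.0]]
```

With all clouds in one frame, they can simply be concatenated into a single denser point cloud.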
In this paper we discuss the current state of the system and, since it is a work in progress, we also describe the future developments we are planning in terms of algorithms and hardware components.
The paper is structured as follows. The next section de-
scribes the state of the system and how it currently works. In
Section III we discuss the advantages of the addition of more
Proceedings of the Seventh International Workshop on Computer Architecture for Machine Perception (CAMP’05)
0-7695-2255-6/05 $20.00 © 2005 IEEE