Calibration of a three-dimensional reconstruction
system using a structured light source
Franck S. Marzani
Université de Bourgogne
LE2I, UFR Sciences et Techniques
Aile SPI, BP 47870
F-21078 Dijon Cedex
France
E-mail: franck.marzani@u-bourgogne.fr
Yvon Voisin
Lew F. C. LewYan Voon
Alain Diou
Université de Bourgogne
LE2I, IUT Le Creusot
12 rue de la Fonderie
F-71200 Le Creusot
France
Abstract. We present a method for calibrating a range finder system composed of a camera and a structured light source. The system is used to reconstruct the three-dimensional (3-D) surface of an object. This is achieved by projecting a pattern, represented by a set of regularly spaced spots, on the surface of the object using the structured light source. An image of the illuminated object is then taken and, by analyzing the distortion of the projected pattern, the 3-D surface of the object can be reconstructed. This reconstruction is possible only if the system is calibrated. Instead of using a classical calibration method, which is based on the determination of the matrices that characterize the intrinsic and extrinsic parameters of the system, we propose a fast and easy-to-set-up methodology consisting of taking a sequence of images of a plane in translation on which a set of regularly spaced spots is projected using the structured light projection system. Next, a relationship between the position of the plane and the coordinates of the spots in the image is established. Using this relationship, we are able to determine the 3-D coordinates of a set of points on the object's surface knowing the 2-D coordinates of the spots in the image of the object taken by the range finder system. Finally, from the 3-D coordinates of the set of points, the 3-D surface of the object is reconstructed. © 2002 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.1427673]
Subject terms: calibration; structured light; range finder; stereovision; three-
dimensional reconstruction.
Paper 200502 received Dec. 27, 2000; revised manuscript received July 9, 2001;
accepted for publication July 10, 2001.
1 Introduction
To obtain the 3-D surface information of an object, one usually uses a passive or active stereovision method. The most commonly employed passive method consists of taking two images of a scene at two different shooting angles, using either two cameras or a single camera that acquires images from two different positions. Then, the 2-D coordinates of a point of the scene in the two images are extracted. If we suppose that both the geometrical relationship between the two cameras (or the displacement of the single camera) and their intrinsic parameters are known, the 3-D coordinates of this point can be deduced from the 2-D coordinates by triangulation. Although this method is accurate and often used,1 it requires high-performance image processing tools, on the one hand to extract the points to be reconstructed from one of the two images,2 and on the other hand to search along the epipolar line for their corresponding points in the second image. Moreover, only characteristic points, with a high gradient or a high texture, for example, can be detected.
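For readers who want a concrete picture of the triangulation step, the following sketch recovers a 3-D point as the midpoint of the two back-projected viewing rays. The function name, the ray parametrization, and the midpoint formulation are illustrative assumptions of ours, not part of the system described in this paper.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Closest 3-D point to two viewing rays.

    c1, c2: camera centres; d1, d2: back-projected directions of the
    matched 2-D image points (need not be unit vectors).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    # The closest points c1 + t1*d1 and c2 + t2*d2 are found by requiring
    # their difference to be orthogonal to both ray directions.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return (c1 + t1 * d1 + c2 + t2 * d2) / 2.0
```

For two rays that actually intersect, the midpoint coincides with the intersection; with noisy 2-D matches, it returns the point equidistant from both rays.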
The active stereovision methods offer an alternative approach to the use of two cameras. The word "active" here means that energy is emitted into the environment. Such a system is also called a range finder system. In practice, it consists of replacing one of the two cameras with a light projection system used to project either a light beam or a set of structured light beams on the scene.3–5 In the first case, a sequence of images is taken with the camera as the light beam scans the scene, whereas in the second case, only one image is taken. Supposing again, as for the passive methods, that both the geometrical relationship between the camera and the light projection system (a laser emitter, for example) and the intrinsic parameters of the system are known, the 3-D coordinates of the points of the illuminated scene can be determined. This is done by analyzing the position of the light spot in the sequence of images of the scene in the case of a single light beam, or the image of the distortion pattern of the structured light grid on the surface of the object in the case of a set of structured light beams. A survey of the different kinds of structured light systems is given by Batlle et al.6
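In the single-beam case, the depth of the illuminated point follows from the same triangulation principle. A minimal sketch, assuming a camera at the origin looking along z, an emitter offset by a known baseline along the camera x axis, and a beam tilted by a known angle toward the optical axis (all symbols and the geometry are our own simplification, not the configuration used by the authors):

```python
import math

def spot_depth(u, f, baseline, theta):
    """Depth z of a laser spot from its image x-coordinate u.

    The camera has focal length f (u and f in the same units); the
    emitter sits at (baseline, 0, 0) and fires at angle theta (radians)
    from the z axis toward the optical axis.  A point at depth z then
    lies at x = baseline - z*tan(theta) and images at u = f*x/z;
    solving u*z = f*baseline - f*z*tan(theta) for z gives:
    """
    return f * baseline / (u + f * math.tan(theta))
```

With theta = 0 (beam parallel to the optical axis) this reduces to the familiar disparity relation z = f·baseline/u.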
In our research work, we have focused on the calibration of such a range finder system. The calibration method that we propose in this paper is fast and easy to set up. It is based on the use of a structured light source composed of an array of 361 (19×19) laser beams arranged in such a way that each beam is directed with respect to its neighbors according to a fixed and known angle. Our method is origi-
484 Opt. Eng. 41(2) 484– 492 (February 2002) 0091-3286/2002/$15.00 © 2002 Society of Photo-Optical Instrumentation Engineers