Partial Surface Integration Based on Variational Implicit Functions and Surfaces for 3D Model Building
P. Claes*
ESAT/PSI/Medical Image Computing, Katholieke Universiteit Leuven

D. Vandermeulen
ESAT/PSI/Medical Image Computing, Katholieke Universiteit Leuven

L. Van Gool
ESAT/PSI/VISICS, Katholieke Universiteit Leuven

P. Suetens
ESAT/PSI/Medical Image Computing, Katholieke Universiteit Leuven
Abstract
Most three-dimensional acquisition systems generate several partial reconstructions that have to be registered and integrated to build a complete 3D model. In this paper, we propose a volumetric shape integration method consisting of weighted signed distance functions represented as variational implicit functions (VIFs) or surfaces (VISs). Texture integration is solved similarly, using three weighted color functions also based on VIFs. Using these continuous (not grid-based) representations overcomes a key limitation of current volumetric methods: no memory-inefficient, resolution-limiting grid representation is required. The built-in smoothing properties of the VIS representation also improve the robustness of the final integration against noise in the input data. Experiments on real-life data and on noiseless and noisy synthetic data of human faces demonstrate the robustness and accuracy of the integration algorithm.
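A variational implicit function interpolates scattered constraints (zero values on the surface, signed offsets at points displaced along the normals) with radial basis functions plus a low-degree polynomial. A minimal sketch of such a fit, assuming the triharmonic kernel φ(r) = r³ and an affine polynomial term, which are common choices for 3D variational interpolation (the function names are illustrative, not from the paper):

```python
import numpy as np

def fit_vif(centers, values, kernel=lambda r: r ** 3):
    """Solve for RBF weights and affine coefficients so that
    f(center_j) = value_j at every constraint point."""
    n = len(centers)
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = kernel(r)                              # n x n RBF block
    P = np.hstack([np.ones((n, 1)), centers])  # affine part [1, x, y, z]
    # Side conditions P^T w = 0 make the system square and well-posed.
    M = np.block([[A, P], [P.T, np.zeros((4, 4))]])
    rhs = np.concatenate([values, np.zeros(4)])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]                    # RBF weights, affine coeffs

def eval_vif(x, centers, weights, affine, kernel=lambda r: r ** 3):
    """Evaluate the fitted implicit function at a single 3D point x."""
    r = np.linalg.norm(x[None, :] - centers, axis=-1)
    return kernel(r) @ weights + affine[0] + affine[1:] @ x
```

The zero level set of the fitted function is the reconstructed surface; because the solution minimizes a smoothness energy, noise in the constraints is damped rather than interpolated exactly through every wiggle of a fine grid.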
1. Introduction
Due to the limited field of view of 3D acquisition systems, three-dimensional models are often assembled from several partial reconstructions taken from different viewpoints. We will refer to the reconstruction from a single viewpoint as a patch. The number of views necessary is determined by the complexity of the object, the required detail, and the intrinsic resolution of the cameras involved. Combining several patches into a single model traditionally involves two main phases. First, the patches need to be aligned into a common coordinate frame; this is called the registration phase. Second, the registered patches need to be integrated into a single entity; this is done in the integration phase. In this paper we concentrate on the integration
phase. For registration, we refer to [1] for an overview of different registration tasks, problems, and methods.

* Corresponding author: pclaes@uz.kuleuven.ac.be
Surface integration methods differ in the type of input data used, unorganized or connected point sets, and in the type of surface representation, parametric or implicit. Examples of methods integrating unorganized point sets are [3, 4], using parametric surfaces, and [5, 6], using implicit surface representations. Since these methods do not require a specially organized input set, they can be applied in more general situations. At the same time, however, they are less robust against noisy data and outliers, and cannot reliably integrate high-curvature regions. The integration can be improved using structured input data and parametric surfaces, as in [7, 8], but according to [2] these methods can still fail in areas of high curvature. The more successful approaches use structured data and avoid the topological problems of the earlier parametric surface-based methods by using implicit volumetric representations. Johnson et al. [9, 10] create surface occupancy grids, the earliest and simplest volumetric representations. However, the final surface extraction, based on ridge detection in the surface likelihood, is not very robust [2]. In [2, 11, 12], volumetric integration algorithms are presented that construct a grid-based weighted signed distance function to the final surface. Triangular surface representations are extracted by Marching Cubes (MC) [13]. These methods differ in the way the implicit surface is constructed and the volumetric data is organized. Signed distance functions are superior to occupancy grids since they can unify integration and registration into one step [1, 14, 15], whereas traditionally these were performed separately.
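In a grid-based formulation of this kind, each patch i contributes a per-voxel signed distance d_i and confidence weight w_i, and the merged function is the weighted average D = Σ w_i d_i / Σ w_i, whose zero level set is the consensus surface. A minimal numpy sketch of this merge step; the spherical toy data and uniform weights are illustrative, not taken from any of the cited methods:

```python
import numpy as np

def merge_signed_distances(dists, weights):
    """Per-voxel weighted average of patch signed distances.
    dists, weights: arrays of shape (n_patches, *grid_shape)."""
    dists = np.asarray(dists, dtype=float)
    weights = np.asarray(weights, dtype=float)
    wsum = weights.sum(axis=0)
    merged = (weights * dists).sum(axis=0) / np.maximum(wsum, 1e-12)
    return np.where(wsum > 0, merged, np.inf)  # inf marks unobserved voxels

# Toy demo: two noisy "scans" of a unit sphere on a 20^3 grid.
ax = np.linspace(-1.5, 1.5, 20)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
true_sdf = np.sqrt(X**2 + Y**2 + Z**2) - 1.0
rng = np.random.default_rng(1)
d1 = true_sdf + 0.05 * rng.standard_normal(true_sdf.shape)
d2 = true_sdf + 0.05 * rng.standard_normal(true_sdf.shape)
w = np.ones_like(true_sdf)
merged = merge_signed_distances([d1, d2], [w, w])
```

Averaging independent noise across patches reduces its variance, which is why the merged distance field is closer to the true surface than either input; a triangulated surface can then be extracted from the zero level set with a Marching Cubes implementation (e.g. `skimage.measure.marching_cubes`).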
The literature on texture integration, or blending, is much smaller than that on shape integration. In [9, 10], texture blending is done by weighted averaging of overlapping textures from the original contributing patches. Texture weights are a function of the angle between the consensus surface normal and the viewing direction or relative
Proceedings of the Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM’05)
1550-6185/05 $20.00 © 2005 IEEE