Shapes to Synchronize Camera Networks
Richard Chang
richard.chang@isir.fr
Siohoi Ieng
sio-hoi.ieng@upmc.fr
Ryad Benosman
ryad.benosman@upmc.fr
Institute of Intelligent Systems and Robotics - CNRS, University Pierre and Marie Curie - Paris 6
Abstract
Synchronicity is a strong restriction that can be difficult to obtain in wide-area applications. This paper studies a methodology for using a non-synchronized camera network, considering cases where the acquisition frequency of each element of the network may differ. The following work introduces a new approach to retrieve the temporal synchronization from the multiple unsynchronized frames of a scene. The mathematical characterization of the 3D structure of scenes, combined with a statistical stage, is used as a tool to estimate the synchronization value. This paper presents experimental results on real data for each step of the synchronization retrieval.
1 Introduction
Synchronization is a task that complicates many vision operations as the number of cameras grows: camera calibration, 3D reconstruction, frame synchronization, etc. Baker and Aloimonos [1] and Han and Kanade [4] introduced pioneering approaches to calibration and 3D reconstruction from multiple views. Works on synchronizing cameras from images can be found in [12, 9]; their aim is to retrieve synchronization in order to compute 3D structures correctly from a set of cameras. One solution is to use hardware synchronization, as in [6], but this kind of method is not always applicable because of spatial constraints. In such cases, software-based synchronization is a way to solve the problem. Most former works exclude cases of heavy and/or non-linear desynchronization, as in [10, 11], or set special constraints on the scene or on the geometry of the cameras [8, 2]. In this paper, we introduce a new synchronization technique based on 3D shapes. From all available frames, synchronized or not, 3D structures are computed regardless of whether they are correct. We will show that correct structures are generated only from synchronized frames. We then introduce a statistical approach showing that correct shape reconstructions (from synchronized frames) occur more frequently than distorted ones (from non-synchronized frames).
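The voting idea behind this statistical approach can be sketched as follows. This is an illustrative toy, not the paper's exact algorithm: the `distortion` function is a hypothetical stand-in for a per-frame shape-distortion measure, with the assumption (proved in the next section) that only the true offset yields undistorted reconstructions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
true_offset = 3                      # hypothetical ground-truth frame shift
candidate_offsets = range(-5, 6)

def distortion(frame, offset):
    """Toy stand-in for a reconstruction-distortion measure: reconstructions
    from synchronized pairs (offset == true_offset) have small distortion,
    all other pairings give large, noisy distortion."""
    base = 0.05 if offset == true_offset else 1.0
    return base * (1.0 + rng.random())

# Each frame votes for the candidate offset whose reconstruction is least
# distorted; the estimated offset is the statistical mode of the votes.
votes = Counter()
for frame in range(50):
    errors = {o: distortion(frame, o) for o in candidate_offsets}
    votes[min(errors, key=errors.get)] += 1

estimated = votes.most_common(1)[0][0]
print(estimated)   # → 3
```

Because correct reconstructions dominate the vote, a few outlier frames (occlusions, matching errors) do not change the mode, which is what makes the statistical stage robust.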
We will also explain the shape characterization that makes the discrimination between correct and wrong reconstructions possible. This paper is organized as follows. Section 2 describes the formal approach of our method. Section 3 describes the synchronization algorithm. Section 4 presents experimental results on the synchronization of a camera network.
2 Problem formalization
2.1 Shape criterion for synchronization.
It is reasonable to assume that correct reconstructions are possible if frames are synchronized, and that unsynchronized frames are likely to lead to distorted results. In this section we prove that this assumption is mathematically true: "correct reconstructions" is equivalent to "synchronized frames" if the observed objects are rigid bodies. This can be done by examining simple planar motions.
Let P_1, P_2, P_3 and P_4 be four collinear points viewed by cameras C_R and C_L of centers O_R and O_L (see figure 1). Since the P_i are collinear, we have the following relations:

P_1P_2 = K · P_1P_4  and  P_3P_2 = M · P_3P_4    (1)

where K and M are constant scalars, and we define L = ||P_1P_4||.
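As a quick numerical sanity check of relation (1) — an illustrative sketch with arbitrarily chosen points, not part of the paper's method — the scalars K and M can be recovered by projecting the point differences onto the line direction:

```python
import numpy as np

# Four collinear points P1..P4, parameterized along an arbitrary 3D line.
d = np.array([0.6, -0.2, 1.0])            # line direction (arbitrary choice)
origin = np.array([1.0, 2.0, 3.0])
P1, P2, P3, P4 = [origin + t * d for t in (0.0, 1.0, 2.0, 3.0)]

# Recover the scalars of relation (1): P1P2 = K·P1P4 and P3P2 = M·P3P4.
K = np.dot(P2 - P1, P4 - P1) / np.dot(P4 - P1, P4 - P1)
M = np.dot(P2 - P3, P4 - P3) / np.dot(P4 - P3, P4 - P3)
L = np.linalg.norm(P4 - P1)               # L = ||P1P4||

print(K, M)   # → 0.333... -1.0 for this parameterization
```

Note that M can be negative when P_2 and P_4 lie on opposite sides of P_3, as in this example; K and M depend only on the relative positions of the points along the line.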
When the cameras C_R and C_L are synchronized, we
978-1-4244-2175-6/08/$25.00 ©2008 IEEE