Int J Comput Vis (2012) 96:162–174
DOI 10.1007/s11263-011-0457-8
Planar Motion Estimation and Linear Ground Plane Rectification
using an Uncalibrated Generic Camera
Pierluigi Taddei · Ferran Espuny · Vincenzo Caglioti
Received: 28 May 2009 / Accepted: 3 May 2011 / Published online: 19 May 2011
© Springer Science+Business Media, LLC 2011
Abstract We address and solve the self-calibration of a
generic camera that performs planar motion while viewing
(part of) a ground plane. Concretely, assuming that initial sets
of correspondences between several images of the ground
plane are known, we aim to determine both the camera
motion and the geometry of the ground plane. The
latter is obtained through the rectification of the image of
the ground plane, which gives a bijective correspondence
between pixels and points on the ground plane.
We first propose a method to determine the camera
motion by using the motion flow between pairs of images.
This step requires no camera calibration. Our solution
requires only that the fixed ground point of the camera
motion be visible in both images.
Once the camera motion is known, either by our
method or by alternative means (e.g. GPS-based), we
show that the rectification of the ground plane can be deter-
mined linearly from at least three images up to a scale factor.
Experimental results on real images are presented at the end
of the paper to validate the proposed methods.
This paper was written during an internship of Ferran Espuny at
Politecnico di Milano; he was supported by the Spanish
project MTM2006-14234-C02-01.
P. Taddei (✉)
Joint Research Centre of the European Commission, Ispra, Italy
e-mail: pierluigi.taddei@polimi.it
F. Espuny
Dept. d’Àlgebra i Geometria, Universitat de Barcelona,
Barcelona, Spain
e-mail: fespuny@ub.edu
V. Caglioti
Politecnico di Milano, Milano, Italy
e-mail: vincenzo.caglioti@polimi.it
Keywords Self-calibration · Plane rectification · Visual
odometry · Generic camera · Motion flow · Planar motion
1 Introduction
Robot localization is a fundamental process in mobile
robotics applications. One way to determine the displacements
and measure the movement of a mobile robot is to use
dead-reckoning systems (such as monitoring wheel
revolutions or integrating accelerometer output). However,
these systems are not reliable, since they provide noisy
measurements and tend to diverge after a few steps (Borenstein
and Feng 1996).
Visual odometry, i.e. the estimation of motion from
images captured by one or more cameras, is exploited to
obtain more reliable estimates. Many approaches to visual
odometry are based on perspective cameras. Due to the
narrow viewing cone of this camera model, features persist
only briefly in an image sequence, which increases error
accumulation. On the other hand, visual odometry systems
based on panoramic cameras require accurate calibration.
These solutions are summarized in Sect. 2.
Our purpose is to work with uncalibrated general cameras,
not necessarily central, so as to benefit both from the
possibility of panoramic viewing, which leads to long feature
persistence, and from the simplicity of set-up, which avoids
the need for calibration. The generic camera model, which
associates one projection ray with each individual pixel,
is the most general mathematical model of a projection system
(Grossberg and Nayar 2001; Sturm and Ramalingam 2004).
Under this model, the relation between points on the ground
plane and points on the image is not parametric, and thus
standard visual odometry techniques cannot be applied.
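To illustrate the generic camera model, the following sketch (our own illustration, not an implementation from the paper; all names are hypothetical) stores a camera as a plain per-pixel table of projection rays, with no parametric lens model and no central-projection assumption, and intersects a ray with the ground plane z = 0:

```python
# Minimal sketch of the generic (non-parametric) camera model:
# each pixel maps to an arbitrary 3D projection ray (origin +
# unit direction) held in a lookup table. Rays may have different
# origins, so the camera need not be central.
import numpy as np

class GenericCamera:
    def __init__(self):
        self.rays = {}  # (u, v) pixel -> (origin, unit direction)

    def set_ray(self, pixel, origin, direction):
        d = np.asarray(direction, dtype=float)
        self.rays[pixel] = (np.asarray(origin, dtype=float),
                            d / np.linalg.norm(d))

    def intersect_ground(self, pixel):
        """Intersect the pixel's ray with the ground plane z = 0."""
        o, d = self.rays[pixel]
        if abs(d[2]) < 1e-12:
            return None  # ray parallel to the ground plane
        t = -o[2] / d[2]
        return o + t * d

cam = GenericCamera()
# A non-central example: two pixels whose rays have distinct origins.
cam.set_ray((0, 0), origin=(0.0, 0.0, 1.0), direction=(0.0, 0.0, -1.0))
cam.set_ray((1, 0), origin=(0.1, 0.0, 1.0), direction=(1.0, 0.0, -1.0))

p0 = cam.intersect_ground((0, 0))  # hits the ground at (0, 0, 0)
p1 = cam.intersect_ground((1, 0))  # hits the ground at (1.1, 0, 0)
```

Since the pixel-to-ray map is an arbitrary table, no parametric relation links pixels to ground-plane points, which is precisely why standard visual odometry techniques do not apply and a rectification (a per-pixel pixel-to-ground correspondence) must be estimated instead.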