ROUGH-TERRAIN MOBILE ROBOT LOCALIZATION USING STEREOVISION
Annalisa Milella
Institute of Intelligent Systems for Automation
National Research Council
Via Amendola 122 D/O, 70126 Bari, Italy
milella@ba.issia.cnr.it
Giulio Reina
Department of Innovation Engineering
University of Salento
Via Monteroni, 73100 Lecce, Italy
giulio.reina@unile.it
ABSTRACT
Mobile robots are increasingly being used in high-risk
rough terrain situations, such as reconnaissance, planetary
exploration, safety and rescue applications. Conventional
localization algorithms are not well suited to rough terrain,
since sensor drift and the dynamic effects occurring at the
wheel-terrain interface, such as slippage and sinkage, largely
compromise their accuracy. In this paper, we propose a novel
approach for 6-DoF ego-motion estimation using stereovision.
It integrates image intensity information and 3D stereo data
within an Iterative Closest Point (ICP) scheme. Neither a priori
knowledge of the motion or the terrain properties nor inputs
from other sensors are required; the only assumption is
that the scene always contains visually distinctive features,
which can be tracked across subsequent stereo pairs. This
generates what is usually referred to as visual odometry. The
paper details the various steps of the algorithm and presents the
results of experimental tests performed with an all-terrain
rover, proving the method to be effective and robust.
1. INTRODUCTION
For a mobile robot to navigate autonomously over long
distances on uneven surfaces, a method for accurately
tracking its pose is a primary requirement.
Dead reckoning, based on data coming from wheel
encoders, is a widely used localization method. This technique
is easy to implement, and allows good short-term accuracy and
very high sampling rate. However, dead reckoning systems are
not well suited to long-range navigation or rough terrain,
since they generally do not consider the physical characteristics
of the vehicle and of the terrain it is traversing. Moreover,
wheel slippage, sinkage, and sensor drift may cause errors that
accumulate without bound over time unless an additional
absolute localization system is employed for sporadic robot
position updates [1, 2].
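To make the technique concrete, a single dead-reckoning update for a differential-drive vehicle can be sketched as follows. This is a minimal illustration, not the odometry of the rover used in this work; the encoder resolution and wheel base are assumed values, and the rolling assumption in the model is precisely what slippage and sinkage violate.

```python
import math

def dead_reckon(pose, ticks_left, ticks_right,
                ticks_per_meter=5000.0, wheel_base=0.4):
    """One dead-reckoning step for a differential-drive vehicle.

    `ticks_per_meter` and `wheel_base` are assumed, platform-dependent
    constants. The update trusts that the wheels roll without slipping,
    so any slippage or sinkage is silently integrated into the pose,
    which is why the error grows without bound on rough terrain.
    """
    x, y, theta = pose
    d_left = ticks_left / ticks_per_meter      # distance rolled by left wheel
    d_right = ticks_right / ticks_per_meter    # distance rolled by right wheel
    d_center = 0.5 * (d_left + d_right)        # displacement of the axle midpoint
    d_theta = (d_right - d_left) / wheel_base  # heading change
    # Midpoint integration of the unicycle model.
    x += d_center * math.cos(theta + 0.5 * d_theta)
    y += d_center * math.sin(theta + 0.5 * d_theta)
    return (x, y, theta + d_theta)
```

Each update is cheap and runs at the encoder sampling rate, but errors in `d_theta` compound multiplicatively into the position through the cosine and sine terms, so heading drift dominates long-range error.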
In this work, we follow a different approach, known as visual
odometry or ego-motion estimation [3]. The basic idea of visual odometry
is that of estimating the motion of the robot by tracking features
of the environment detected with an on-board camera.
Like conventional dead reckoning, this technique can
lead to error accumulation. However, since video sensors are
exteroceptive devices, that is, they acquire information from
the robot’s environment, visual odometry is not affected by
wheel slippage and sinkage. Moreover, it has been
shown that adding vision improves the results of most sensor
combinations [4, 5]. Several visual odometry methods have
been proposed over the last decades, using either single cameras
[5, 6, 7] or stereo vision [4, 7, 8, 9]; they differ mainly
in the feature tracking method and in the
transformation used to estimate the camera motion. For
instance, in [4], odometry provides an estimation of the
approximate robot motion that allows a search area to be
selected for improved feature tracking, and a maximum-
likelihood formulation is employed for motion computation. In
[5], the visual module uses a variation of Benedetti and
Perona’s algorithm for feature detection, and correlation for
feature tracking. Robustness is improved by integrating visual
data with an IMU through a Kalman filter. Finally, in [7], robust
visual motion estimation is achieved using preemptive
RANSAC [10], followed by iterative refinement.
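Common to the stereo-based methods surveyed above is the geometric core of motion estimation: recovering the rigid transform that best aligns the 3D feature points matched across two successive stereo pairs. A minimal sketch of the standard least-squares SVD solution is given below; it is not the estimator of any cited paper, which wrap this step in robust machinery such as maximum-likelihood weighting or RANSAC, and the variable names are illustrative.

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rigid transform (R, t) such that Q ~ R @ P + t.

    P, Q: (3, N) arrays of matched 3D feature points from two
    successive stereo frames. Classic SVD solution; a sketch only --
    real pipelines first reject mismatched features (e.g., RANSAC)
    before computing the final estimate.
    """
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det = +1), guarding against reflections.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

Chaining these frame-to-frame transforms yields the visual-odometry pose estimate; as with wheel odometry, the per-frame errors accumulate, but they are independent of wheel-terrain interaction.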
In this paper, an algorithm for 6-DoF ego-motion
estimation is proposed, which incorporates image intensity
information and 3D stereo data in the well-known Iterative
Closest Point scheme. ICP was originally introduced by Besl
and McKay [11] for registering digitized data from a rigid object
with an idealized geometric model. Here, the potential of
ICP is investigated for visual odometry using
stereovision. Specifically, two basic problems of ICP are
addressed: the susceptibility to outliers, and the failure when
dealing with large displacements. A related drawback of ICP
is its inability to segment the input data [11].
Typical solutions use odometry information for
Copyright © 2007 by ASME
Proceedings of IMECE2007
2007 ASME International Mechanical Engineering Congress and Exposition
November 11-15, 2007, Seattle, Washington, USA
IMECE2007-41397