Vision-based Monte Carlo Self-localization for a Mobile Service
Robot Acting as Shopping Assistant in a Home Store*
H.-M. Gross, A. Koenig, H.-J. Boehme, and Ch. Schroeter
Department of Neuroinformatics, Ilmenau Technical University, 98684 Ilmenau, Germany
Horst-Michael.Gross@tu-ilmenau.de
Abstract
We present a novel omnivision-based robot localiza-
tion approach which utilizes Monte Carlo Localization (MCL) [2], a Bayesian filtering technique based on a density representation by means of particles. The capability of this method to approximate
arbitrary likelihood densities is a crucial property for
dealing with highly ambiguous localization hypotheses
as are typical for real-world environments. We show
how omnidirectional imaging can be combined with the
MCL-algorithm to globally localize and track a mobile
robot given a taught graph-based representation of the
operation area. In contrast to other approaches, the
nodes of our graph are labeled with both visual feature vectors extracted from the omnidirectional image and odometric data about the pose of the robot
at the moment of the node insertion (position and
heading direction). To demonstrate the reliability of
our approach, we present first experimental results
in the context of a challenging robotics application,
the self-localization of a mobile service robot acting
as shopping assistant in a very regularly structured,
maze-like and crowded environment, a home store.
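The MCL cycle summarized in the abstract (predict each pose hypothesis from odometry, weight it by the similarity between the current visual features and those stored at the nearest map node, then resample) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the noise magnitudes, and the Gaussian feature-distance likelihood are assumptions.

```python
import math
import random

def mcl_step(particles, odometry, observation, node_map):
    """One predict-weight-resample cycle of a particle filter (illustrative).

    particles   : list of (x, y, heading) pose hypotheses
    odometry    : (dx, dy, dheading) motion since the last step
    observation : visual feature vector from the omnidirectional image
    node_map    : list of (pose_xy, feature_vector) graph nodes
    """
    dx, dy, dth = odometry

    # 1. Prediction: propagate each particle by the odometry, plus noise
    #    (standard deviations here are arbitrary placeholder values).
    moved = [(x + dx + random.gauss(0, 0.05),
              y + dy + random.gauss(0, 0.05),
              th + dth + random.gauss(0, 0.02))
             for (x, y, th) in particles]

    # 2. Weighting: compare the observation with the feature vector of the
    #    spatially nearest map node; a Gaussian of the feature distance
    #    serves as a stand-in likelihood.
    def weight(pose):
        node = min(node_map,
                   key=lambda n: (n[0][0] - pose[0]) ** 2 +
                                 (n[0][1] - pose[1]) ** 2)
        d = math.dist(observation, node[1])
        return math.exp(-d * d)

    weights = [weight(p) for p in moved]
    total = sum(weights) or 1.0

    # 3. Resampling: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(moved))
```

Because the weighting step uses only visual feature similarity, particles in geometrically identical but visually distinct hallways receive different weights, which is exactly what distance sensors cannot provide in this environment.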
1 Introduction and motivation
An interactive mobile service robot, e.g., a shopping
assistant, should be able to actively observe its oper-
ation area, to detect, localize, and contact potential
users, to interact with them continuously, and to ad-
equately offer its specific services. Typical service
tasks we want to solve in our PERSES (PERsonal
SErvice System) project are to guide the user to de-
sired areas or articles within a home store (guidance
function) or to follow the user as a mobile information kiosk while continuously observing him and his
behavior (companion function) (see [3]). To accom-
modate the challenges that arise from the specifics of
our interaction-oriented scenario and the characteristics of the operation area, a very regularly structured, maze-like, and crowded environment, we place special emphasis on vision-based methods for both human-robot interaction and robot navigation. The motivation for this is outlined in the following:

* Supported by a Thuringian Ministry of Science, Research, and Art Grant (PERSES & SERROKON projects)

Figure 1: (Top) Location plan of our experimental area, a large home store in Erfurt (toom Bau-Markt), approx. 100 m × 45 m. The topology of the store is characterized by many similar, long hallways of equal width. Because of their very regular structure, most of the hallways can be distinguished only visually. (Bottom) Exemplary appearance of three hallways which cannot be distinguished by distance sensors (sonar, laser) because of identical geometric features. The hallways and racks, however, show very characteristic views, which allow vision-based self-localization.
Functional and economic advantages: Vision systems have meanwhile become available as very
powerful universal sensor systems with a good price-
performance ratio such that they can be successfully
utilized in a great number of robotics tasks - both
in human-robot interaction and autonomous naviga-
tion. Therefore, our low-cost prototype of a mobile
and interactive shopping assistant currently under
development will be equipped with a universally usable omnidirectional vision system instead of an expensive laser rangefinder, which shows a number of
limitations in human-robot interaction and naviga-
Proceedings of the 2002 IEEE/RSJ
Intl. Conference on Intelligent Robots and Systems
EPFL, Lausanne, Switzerland • October 2002
0-7803-7398-7/02/$17.00 ©2002 IEEE