Environment Classification for Indoor/Outdoor Robotic Mapping
Jack Collier
Defence R&D Canada – Suffield
Autonomous Intelligent Systems Section
PO Box 4000 Station Main
Medicine Hat, AB, T1A 8K6, Canada
jack.collier@drdc-rddc.gc.ca
Dr. Alejandro Ramirez-Serrano
Dept. of Mechanical & Manufacturing Engineering
University of Calgary
2500 University Drive NW
Calgary, AB, T2N 1N4, Canada
aramirez@ucalgary.ca
Abstract
We present a novel perception system for mapping of
indoor/outdoor environments with an Unmanned Ground
Vehicle (UGV). The system uses image classification tech-
niques to determine the operational environment of the
UGV (indoor or outdoor). Based on the classification re-
sults, the appropriate mapping system is then deployed.
Image features are extracted from video imagery and
used to train a classification function using supervised
learning techniques. This classification function is then
used to classify new imagery. A perception module observes
the classification results and switches the UGV’s percep-
tion system, according to current needs and available (reli-
able) data as the UGV transitions from indoors to outdoors
or vice versa. A terrain map that exploits GPS and Iner-
tial Measurement Unit (IMU) data is used when operating
outdoors, while a 2D laser-based Simultaneous Localization
and Mapping (SLAM) technique is used when operating
indoors. Globally consistent maps are generated by
transforming the indoor map data into the global reference
frame, a capability unique to this algorithm.
1 Introduction
The ability of a UGV to interact effectively with its en-
vironment depends on its ability to sense and interpret it, a
process called perception. UGV perception is traditionally
accomplished by combining range data from various sen-
sors with localization data to create a geometric world rep-
resentation that can be used to achieve a task. To increase
the capabilities of today’s UGVs, robot perception capabil-
ities must move beyond the purely geometric approach to
gain a greater situational awareness. Increased perception
capabilities will allow the UGV to better assess the envi-
ronment and operate more effectively in a wider range of
situations.
In general, state of the art UGVs are designed to oper-
ate within an assumed environment in which the parame-
ters and constraints are well known. If these assumptions
are valid, the UGV can operate effectively, but failure can
occur when the assumptions are incorrect. It is desirable to
have a perception system that can adapt to changes in its op-
erational environment, thus extending its effectiveness and
operational range.
This paper details an adaptive perception system based
on learning and vision algorithms. A system is implemented
that allows the UGV to adapt its mapping system
when transitioning between an indoor and outdoor environ-
ment. The system uses image based features from a colour
camera and supervised learning techniques to perform scene
classification, adapting based on the classification results.
When the scene is classified as outdoor, a terrain mapping
system is deployed. This system uses GPS and IMU mea-
surements to provide an accurate pose estimate as well as
stereovision range data to determine the geometric charac-
teristics of the environment. When the UGV transitions to
an indoor environment, where GPS data is no longer avail-
able, the UGV employs a laser based SLAM system. In the
SLAM process, point features, extracted from a 2D laser
scan, are tracked as the robot moves and used to
probabilistically estimate the UGV's pose and the landmark
locations.
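The classification-and-switching loop described above can be sketched in code. The following is an illustrative sketch only, not the paper's implementation: the mean-channel colour feature, the nearest-centroid classifier (standing in for the supervised learning technique), and the consecutive-frame threshold `k` in the perception module are all assumptions made for this example.

```python
# Hedged sketch of indoor/outdoor scene classification plus a perception
# module that switches mapping modes. Feature, classifier, and hysteresis
# threshold are illustrative assumptions, not the paper's exact method.

def colour_feature(pixels):
    """Reduce an image (a list of (r, g, b) tuples) to a mean-channel feature."""
    n = float(len(pixels))
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def train_centroids(labelled_images):
    """Supervised training: average the features of each class into a centroid."""
    sums, counts = {}, {}
    for label, pixels in labelled_images:
        f = colour_feature(pixels)
        s = sums.setdefault(label, [0.0, 0.0, 0.0])
        for c in range(3):
            s[c] += f[c]
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

def classify(centroids, pixels):
    """Assign the label whose class centroid is nearest in feature space."""
    f = colour_feature(pixels)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lab])))

class PerceptionModule:
    """Switch mapping systems only after k consistent classifications,
    so a single misclassified frame does not toggle the mapper."""

    def __init__(self, k=3, initial_mode="outdoor"):
        self.k, self.mode = k, initial_mode
        self._cand, self._streak = None, 0

    def update(self, label):
        if label == self.mode:
            self._cand, self._streak = None, 0      # stay in current mode
        elif label == self._cand:
            self._streak += 1
            if self._streak >= self.k:              # sustained change observed:
                self.mode = label                   # e.g. deploy SLAM indoors,
                self._cand, self._streak = None, 0  # terrain mapping outdoors
        else:
            self._cand, self._streak = label, 1     # new candidate mode
        return self.mode
```

With hypothetical training data, three consecutive "indoor" classifications are required before the module switches from the outdoor terrain mapper to the indoor SLAM system, giving the hysteresis needed at doorways where single frames may be ambiguous.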
This paper is organized as follows. Section 2 provides an
overview of background technologies and algorithms rele-
vant to this work. Section 3 details the proposed perception
system. In Section 4, several experiments designed to vali-
date the proposed system are detailed followed by a review
of the proposed approach, a discussion of the obtained re-
sults and future work in Section 5.
2 Previous Work
Previous research relevant to this work has focused
on two main areas: scene classification techniques that
2009 Canadian Conference on Computer and Robot Vision
978-0-7695-3651-4/09 $25.00 © 2009 IEEE
DOI 10.1109/CRV.2009.6