Hindawi Publishing Corporation
Advances in Artificial Intelligence
Volume 2010, Article ID 765876, 20 pages
doi:10.1155/2010/765876
Research Article
Bootstrap Learning and Visual Processing Management on
Mobile Robots
Mohan Sridharan
Department of Computer Science, Texas Tech University, Lubbock, TX 79409, USA
Correspondence should be addressed to Mohan Sridharan, mohan.sridharan@ttu.edu
Received 1 October 2009; Accepted 10 November 2009
Academic Editor: Alfons Schuster
Copyright © 2010 Mohan Sridharan. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A central goal of robotics and AI is to enable a team of robots to operate autonomously in the real world and collaborate with
humans over an extended period of time. Though developments in sensor technology have resulted in the deployment of robots
in specific applications, the ability to accurately sense and interact with the environment is still missing. Key challenges to the
widespread deployment of robots include the ability to learn models of environmental features based on sensory inputs, bootstrap
off of the learned models to detect and adapt to environmental changes, and autonomously tailor the sensory processing to
the task at hand. This paper summarizes a comprehensive effort towards such bootstrap learning, adaptation, and processing
management using visual input. We describe probabilistic algorithms that enable a mobile robot to autonomously plan its actions
to learn models of color distributions and illuminations. The learned models are used to detect and adapt to illumination changes.
Furthermore, we describe a probabilistic sequential decision-making approach that autonomously tailors the visual processing to
the task at hand. All algorithms are fully implemented and tested on robot platforms in dynamic environments.
1. Introduction
An open grand challenge in the field of robotics is to
enable widespread deployment of robots in the real world,
where they can operate autonomously and collaborate with
humans. Addressing this grand challenge would in turn
require answers to the following major questions.
(i) Autonomous Learning and Adaptation. How to enable
a robot to autonomously learn models of environ-
mental features based on sensory input, detect envi-
ronmental changes, and adapt the learned models in
response to such changes?
(ii) Processing Management. Given multiple sources of
information, which bits of information should be
processed, and what processing should be performed
in order to achieve a desired goal reliably and
efficiently?
(iii) Multiagent Coordination. How to enable a team of
robots, each with possibly different capabilities and
constraints, to collaborate robustly towards a shared
objective despite noisy sensing and communication?
In this paper, the focus is primarily on developing proba-
bilistic methods for Autonomous Learning and Adaptation,
and for Processing Management. We propose probabilistic
methods that enable a robot to use sensory inputs to
learn environmental models and respond to environmental
changes. Furthermore, given multiple sources of informa-
tion, the robot autonomously tailors the sensory processing
to the task at hand.
Mobile robots that sense and interact with the environ-
ment through a set of sensors and actuators are characterized
by the following features and requirements.
(i) Features
(a) Partial Observability. The true state of the world
is not directly observable. The robot can only
update its belief, that is, an estimate of the world
state, by executing actions and observing the
noisy outcomes.
(b) Nondeterministic Actions and Observations. The
outcome of executing actions or making obser-
vations based on sensory input is nondeter-
ministic; that is, actions and observations are
unreliable.