IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 31, NO. 3, JUNE 2001 341
Adaptive Tracking Control of a Wheeled Mobile
Robot via an Uncalibrated Camera System
Warren E. Dixon, Member, IEEE, Darren M. Dawson, Senior Member, IEEE, Erkan Zergeroglu, Member, IEEE,
and Aman Behal
Abstract—This paper considers the problem of position/orien-
tation tracking control of wheeled mobile robots via visual ser-
voing in the presence of parametric uncertainty associated with the
mechanical dynamics and the camera system. Specifically, we de-
sign an adaptive controller that compensates for uncertain camera
and mechanical parameters and ensures global asymptotic posi-
tion/orientation tracking. Simulation and experimental results are
included to illustrate the performance of the control law.
Index Terms—Adaptive control, visual-servoing, wheeled mobile
robot.
I. INTRODUCTION
AS the demand increases for wheeled mobile robots
(WMRs) in settings ranging from shopping centers and
hospitals to warehouses and nuclear waste facilities, the need
for precise control of WMRs is clearly evident; hence, a
closed-loop sensor-based controller is required. Unfortunately,
due to the nonholonomic nature of the WMR and the standard
encoder hardware configuration (e.g., optical encoders mounted
on the actuators), the WMR Cartesian position is difficult to
accurately obtain. That is, the linear velocity of the WMR
must first be obtained by numerically differentiating the
encoder position measurements (e.g., via the backward
difference algorithm), and then the nonlinear kinematic model
must be numerically integrated to obtain the WMR Cartesian
position. Since numerical differentiation/integration errors
may accumulate over time, the accuracy of the numerically
calculated WMR Cartesian position may be compromised.
An interesting approach to overcome this position
measurement problem is to utilize a vision system to directly
obtain the Cartesian position information required by the
controller (for an overview of the state-of-the-art in robot visual
servoing, see [7] and [18]). Specifically, a ceiling-mounted
camera system can be used to determine the WMR Cartesian
position without requiring numerical calculations. However, as
emphasized by Bishop et al. in [1], when a vision system is
utilized to extract information about a robot and the environment,
adequate calibration of the vision system is required. That is,
parametric uncertainty associated with the calibration of the
camera corrupts the WMR position/orientation information;
hence, camera calibration errors can result in degraded control
performance.

Manuscript received March 12, 2000; revised January 9, 2001. This work
was supported in part by a Eugene P. Wigner Fellowship and Staff Member
appointment at the Oak Ridge National Laboratory, managed by UT-Battelle,
LLC, for the U.S. Department of Energy under Contract DE-AC05-00OR22725.
Additional support was provided by U.S. National Science Foundation Grants
DMI-9457967, CMS-9634796, ECS-9619785, DMI-9813213, and EPS-9630167,
DOE Grant DE-FG07-96ER14728, a DOC Grant, and the Gebze Institute for
Advanced Technology. This paper was recommended by Associate Editor
R. A. Hess.
W. E. Dixon is with the Robotics and Process Systems Division, Oak Ridge
National Laboratory, Oak Ridge, TN 37831 USA (e-mail: dixonwe@ornl.gov).
D. M. Dawson and A. Behal are with the Department of Electrical and
Computer Engineering, Clemson University, Clemson, SC 29634 USA.
E. Zergeroglu was with the Department of Electrical and Computer
Engineering, Clemson University, Clemson, SC 29634 USA. He is now with
Optical Fiber Solutions, Bell Laboratory Innovations, Lucent Technologies,
Sturbridge, MA 01566 USA.
Publisher Item Identifier S 1083-4419(01)05220-7.
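To make the encoder-based position reconstruction discussed earlier concrete, the following sketch backward-differences wheel encoder angles and Euler-integrates the unicycle kinematic model; the function name `dead_reckon`, the wheel radius `r`, and the half-wheelbase `b` are illustrative assumptions and are not taken from this paper.

```python
import math

def dead_reckon(wheel_angles, dt, r, b):
    """Reconstruct the WMR Cartesian pose from wheel encoder angles.

    wheel_angles: sequence of (phi_right, phi_left) encoder readings [rad]
    dt: sample period [s]; r: wheel radius [m]; b: half the wheelbase [m]
    Returns the estimated (x, y, theta) trajectory.
    """
    x = y = theta = 0.0
    poses = [(x, y, theta)]
    for k in range(1, len(wheel_angles)):
        # Backward-difference the encoder angles to get wheel speeds.
        wr = (wheel_angles[k][0] - wheel_angles[k - 1][0]) / dt
        wl = (wheel_angles[k][1] - wheel_angles[k - 1][1]) / dt
        v = r * (wr + wl) / 2.0          # linear velocity
        w = r * (wr - wl) / (2.0 * b)    # angular velocity
        # Euler-integrate the nonholonomic (unicycle) kinematic model.
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
        poses.append((x, y, theta))
    return poses
```

Note that any quantization or numerical error in the differenced encoder readings is integrated into (x, y, theta) at every step, which is precisely the accumulating drift that motivates a direct position sensor.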
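The effect of camera calibration error on the recovered position can likewise be sketched with a simple planar fixed-camera model (uniform pixel scale, in-plane rotation, and a Cartesian offset); this model and all function names are illustrative assumptions, not the camera model developed in this paper.

```python
import math

def world_to_pixel(world, scale, theta, offset):
    # Ideal fixed-camera model: translate, rotate, then scale into pixels.
    dx, dy = world[0] - offset[0], world[1] - offset[1]
    c, s = math.cos(theta), math.sin(theta)
    return scale * (c * dx - s * dy), scale * (s * dx + c * dy)

def pixel_to_world(pixel, scale, theta, offset):
    # Inverse of the model above, evaluated with the *estimated* calibration.
    c, s = math.cos(theta), math.sin(theta)
    wx = (c * pixel[0] + s * pixel[1]) / scale + offset[0]
    wy = (-s * pixel[0] + c * pixel[1]) / scale + offset[1]
    return wx, wy
```

Inverting a measurement with the true calibration recovers the Cartesian position exactly, whereas inverting it with a miscalibrated scale or rotation yields a biased position estimate, which is the corruption of the position/orientation information described above.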
Despite the above motivation to incorporate visual informa-
tion in the control loop, most of the WMR research available in
the literature that incorporates visual information in the overall
system seems to be concerned with vision-based navigation
(i.e., using visual information for trajectory planning). It also
seems that the state-of-the-art WMR research that specifically
targets incorporating visual information from an on-board
camera into the closed-loop control strategy can be found in
[5], [15], [21]. Specifically, in [15], Ma et al. incorporate the
dynamics of image curves obtained from a mobile camera
system in the design of stabilizing control laws for tracking
piecewise analytic curves. In [1], Espiau et al. proposed a
visual servoing framework, and in [5], Samson et al. addressed
control issues in the image plane. For the most part, it seems
that previous visual-servoing WMR work has assumed that
the parametric uncertainty associated with the camera system
can be neglected. In contrast, it seems that visual servoing
research for robot manipulators has focused on the design
of controllers that account for uncalibrated camera effects as
well as uncertainty associated with the mechanical dynamics.
Specifically, in [10], Kelly designed a setpoint controller that
takes into account uncertainty in the camera orientation and
achieves a locally asymptotically stable result; however, the
controller required exact knowledge of the robot gravitational
term and restricted the difference between the estimated and
actual camera orientation to the interval (−90°, 90°). In
[1], Bishop and Spong developed an inverse dynamics-type,
position tracking control scheme (i.e., exact model knowledge
of the mechanical dynamics) with on-line adaptive camera
calibration that guaranteed global asymptotic position tracking;
however, convergence of the position tracking error required
the desired position trajectory to be persistently exciting. In
[16], Maruyama and Fujita proposed setpoint controllers for
the camera-in-hand configuration; however, the proposed
controllers required exact knowledge of the camera orientation
and assumed the camera scaling factors to be the same value
for both directions. In [11], Kelly et al. utilized a composite
velocity inner loop, image-based outer loop fixed-camera