Auton Robot
DOI 10.1007/s10514-015-9539-8
Active target search for high dimensional robotic systems
Sina Radmard¹ · Elizabeth A. Croft¹
Received: 17 July 2015 / Accepted: 11 December 2015
© Springer Science+Business Media New York 2015
Abstract When a robotic visual servoing/tracking system
loses sight of the target, the servo fails due to loss of input.
To resolve this problem, a search method is required, namely a lost target
search (LTS), which generates efficient actions to bring
the target back into the camera field of view (FoV) as soon
as possible. For high dimensional platforms, like
a camera-mounted manipulator or an eye-in-hand system,
such a search must address the difficult challenge of gen-
erating efficient actions in an online manner while avoiding
kinematic constraints. In this work, we utilize the latest avail-
able information from the target just prior to leaving the FoV
to initiate an optimal online search. We explain various fea-
tures of our overall LTS algorithm and provide simulation
comparisons with common methods existing in the litera-
ture. Finally, we implement and demonstrate the capabilities
of our general algorithm on a laboratory scale 7 degree of
freedom (DoF) eye-in-hand system tracking a fast moving
target.
Keywords Lost target search · High dimensional robot ·
Online sensor planning
Electronic supplementary material The online version of this
article (doi:10.1007/s10514-015-9539-8) contains supplementary
material, which is available to authorized users.
✉ Sina Radmard
sradmard@interchange.ubc.ca
Elizabeth A. Croft
elizabeth.croft@ubc.ca
¹ CARIS Laboratory, Department of Mechanical Engineering,
University of British Columbia, Vancouver, Canada
1 Introduction
Using a camera as the main, or the only, feedback sen-
sor of a robotic platform has become widely appealing
in areas such as surveillance, industrial manipulation, rescue
robots, soccer robots, and planetary rovers (Kragic and
Vincze 2009). These various applications for surveillance,
manipulation, rescue and servoing share the common task of
object tracking. Such a task involves object detection, tra-
jectory estimation and hardware manipulation, all of which
depend on some visual cues of the target.
In the context of a single mounted camera platform servo-
ing to, or tracking, an object, most research has focused on
maintaining the target within the camera FoV (Nelson and
Khosla 1995; Chesi et al. 2003; Murrieta-Cid et al. 2005;
Panagou and Kumar 2014). However, due to factors such as
limited camera FoV, system constraints, occlusions, and poor
lighting conditions, maintaining visibility is not always pos-
sible. For example, during visual servoing in Kragic (2001), if
the target leaves the FoV or is occluded, a search is launched
only in the image space until the target is found or the
search time exceeds a predefined limit. In
other words, if the target does not reappear in a predictable
and narrowly localized site, such algorithms will fail. The
ability to robustly look for and find truly “lost” targets is a
natural next step to improve robot autonomy in a wide range
of applications.
In this paper we present a fast and efficient online
planner—an adaptive planner that updates and replans as
new information arrives—to search for a moving target when
a high dimensional camera mounted robotic platform loses
track of the moving object. Without loss of generality, we
consider a camera-mounted manipulator—a 7-DoF eye-in-
hand system—as our exemplar high dimensional robotic
platform while the target moves in 3D as depicted in Fig. 1.
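The replan-as-you-go idea above can be illustrated with a minimal loop: predict the target from its last observed state, aim the camera at the prediction, and repeat as new information arrives. This is only a hedged sketch under simplifying assumptions (a constant-velocity target model and a planner that simply points at the prediction); the function and parameter names are illustrative, not the authors' implementation, which optimizes actions online subject to the manipulator's kinematic constraints.

```python
def lost_target_search(last_pose, last_velocity, horizon=20, dt=0.1):
    """Sketch of an online lost-target search loop.

    last_pose, last_velocity: target state just before it left the FoV.
    Replans each step from a constant-velocity prediction of the target.
    """
    t = 0.0
    trajectory = []
    for _ in range(horizon):
        t += dt
        # Predict the target's likely position using the last
        # observation before it left the field of view.
        predicted = [p + v * t for p, v in zip(last_pose, last_velocity)]
        # A real planner would choose the camera action maximizing the
        # probability of re-observing the target while respecting the
        # robot's kinematic constraints; here we simply aim at the
        # prediction.
        camera_goal = predicted
        trajectory.append(camera_goal)
        # A detector would then check the new image; replanning
        # continues until the target is found or time runs out.
    return trajectory

path = lost_target_search([1.0, 0.0, 0.5], [0.2, 0.1, 0.0])
```

Each iteration stands in for one sense-predict-act cycle; an online planner of this shape degrades gracefully because every step uses the freshest available estimate rather than a plan fixed at the moment the target was lost.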