UAV Video Coverage Quality Maps and Prioritized Indexing for
Wilderness Search and Rescue
Bryan S. Morse, Cameron H. Engh, and Michael A. Goodrich
Department of Computer Science
Brigham Young University
Provo, Utah, United States
Email: morse@byu.edu
Abstract—Video-equipped mini unmanned aerial vehicles
(mini-UAVs) are becoming increasingly popular for surveillance,
remote sensing, law enforcement, and search and rescue
operations, all of which rely on thorough coverage of a target
observation area. However, coverage is not simply a matter
of seeing the area (visibility) but of seeing it well enough to
allow detection of targets of interest, a quality we here call
“see-ability”. Video flashlights, mosaics, or other geospatial
compositions of the video may help place the video in context
and convey that an area was observed, but not necessarily
how well or how often. This paper presents a method for
using UAV-acquired video georegistered to terrain and aerial
reference imagery to create geospatial video coverage quality
maps and indices that indicate relative video quality based
on detection factors such as image resolution, number of
observations, and variety of viewing angles. When used for
offline post-analysis of the video, or for online review, these
maps also enable geospatial quality-filtered or prioritized non-
sequential access to the video. We present examples of static
and dynamic see-ability coverage maps in wilderness search-
and-rescue scenarios, along with examples of prioritized non-
sequential video access. We also present the results of a
user study demonstrating the correlation between see-ability
computation and human detection performance.
Keywords-unmanned aerial vehicles, wilderness search and
rescue, coverage quality maps, video indexing
I. INTRODUCTION
Small, lightweight mini-UAVs with 5–8-foot wingspans
have seen increased use recently for aerial sensing due to
their lower cost and ease of deployment. When equipped
with a video camera and transmitter, these mini-UAVs can be
used for surveillance, remote sensing, law enforcement, and
search and rescue operations, all of which require rapid and
thorough coverage of a target area. However, because of their
lightweight nature, these aerial sensing platforms are highly
unstable and easily buffeted by wind, and the operator’s
intentions may not always correspond to the actual flight
path. This makes it difficult for operators or video analysts to
correctly determine what spatial areas were observed during
a flight or sequence of multiple flights.
In addition to covering the target area, it is also essential
to maintain sufficient resolution to allow human operators to
accomplish their task. Since the altitude and orientation of
the plane are highly variable due to wind or other factors,
so too is the resolution of the resulting video. As the plane
banks to one side or the other, even an otherwise downward-
pointing camera may end up seeing areas far away and at an
oblique angle. This is compounded in varying terrain since
the UAV’s height above ground may change rapidly even
while maintaining constant altitude. One can try to maintain
a consistent height above ground either manually or through
automated means, but this is still subject to the limitations
of the plane’s ability to climb or safely descend. Some flight
paths, especially in difficult terrain, may allow only a brief
low-altitude pass over the target area before the UAV must
maneuver to set up another pass, providing only periodically
usable video.
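The resolution dependence described above can be made concrete with a rough flat-terrain sketch. The camera parameters below (focal length, pixel pitch) are illustrative assumptions, not those of any particular mini-UAV payload:

```python
import math

def ground_resolution_cm(height_agl_m, off_nadir_deg,
                         focal_length_mm=12.0, pixel_pitch_um=3.45):
    """Approximate per-pixel ground resolution (cm) under a flat-terrain
    pinhole-camera model; parameter defaults are illustrative only.

    At nadir, one pixel covers (pixel_pitch * height) / focal_length
    on the ground; viewing off-nadir stretches the footprint along the
    look direction by roughly a factor of 1 / cos^2(theta).
    """
    theta = math.radians(off_nadir_deg)
    gsd_nadir_m = (pixel_pitch_um * 1e-6) * height_agl_m / (focal_length_mm * 1e-3)
    return 100.0 * gsd_nadir_m / (math.cos(theta) ** 2)
```

Under these assumptions, banking so that the camera looks 45 degrees off nadir roughly halves the effective ground resolution, and flying twice as high above the terrain doubles the ground footprint of each pixel, which is why resolution varies so strongly along a real flight path.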
Our work in this area has focused on using mini-UAVs
to assist in Wilderness Search and Rescue (WiSAR) operations [1].
Field trials [2] show that it is often difficult to determine
which areas have been searched well. This assessment is an
essential component of search-and-rescue applications because
wilderness search is fundamentally a prioritized search, focusing
on the regions most likely to include the missing person. Also
important to this task is the ability to efficiently review
previously acquired video, perhaps in response to a search
observation or during post hoc offline review. This can be
made more efficient by providing users with the ability to
intelligently access search video not only by georeferenced
indexing but by coverage quality as well, allowing users to
directly access usable observations of a specified target area.
Assessing the usability and coverage of aerial video is a
matter not only of whether the plane’s camera could see a
point but how well it saw it. Once the video is georegistered
to the underlying terrain, determining whether the camera
saw specific points is a simple matter of viewing geometry,
what we typically think of as “visibility”. But visibility-based
coverage alone is not enough to determine how useful the video
is: one must also consider the viewing resolution, the number
of times a point was seen, the variation of viewing angle
(which can often play a role in detection), and so on. We call
this latter quality “see-ability”.
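As a rough illustration only (the paper’s actual see-ability formulation is not the one below), a per-terrain-cell quality score might combine these factors as follows; the terms, weights, and the mean-resultant measure of angular spread are all hypothetical choices:

```python
import math

def seeability_score(observations, best_possible_res_cm=3.0):
    """Illustrative (not the paper's formula) per-cell coverage quality,
    combining the finest ground resolution achieved, a diminishing-returns
    bonus for repeated observations, and the spread of viewing directions.

    `observations` is a list of (resolution_cm, azimuth_deg) pairs
    recording each time the cell appeared in the video.
    """
    if not observations:
        return 0.0
    # Resolution term: 1.0 when the finest observation matches the best
    # achievable resolution, decaying toward 0 as it coarsens.
    finest = min(res for res, _ in observations)
    res_term = min(1.0, best_possible_res_cm / finest)
    # Repetition term: logarithmic diminishing returns on view count.
    count_term = 1.0 - 1.0 / (1.0 + math.log1p(len(observations)))
    # Angle-diversity term: 0 when all views share one direction,
    # approaching 1 as viewing azimuths spread around the circle.
    sx = sum(math.cos(math.radians(az)) for _, az in observations)
    sy = sum(math.sin(math.radians(az)) for _, az in observations)
    angle_term = 1.0 - math.hypot(sx, sy) / len(observations)
    # Equal weighting is an arbitrary illustrative choice.
    return (res_term + count_term + angle_term) / 3.0
```

The point of the sketch is structural: two passes over a cell from opposite directions score higher than two passes from the same direction at the same resolution, matching the intuition that varied viewpoints aid detection.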
This paper presents a method for creating coverage quality
maps based on see-ability that convey not only the video
coverage of each part of a target area but also how useful
that video information is for the person viewing it (Figure 1).
Such coverage maps are useful for post hoc evaluation of
the search, for planning either during or between flights, and
for coordination with other team members.
978-1-4244-4893-7/10/$25.00 © 2010 IEEE