Vehicle Detection from Aerial Imagery
Joshua Gleason, Ara V. Nefian, Xavier Bouyssounousse, Terry Fong and George Bebis
Abstract—Vehicle detection from aerial images is becoming
an increasingly important research topic in surveillance, traffic
monitoring and military applications. The system described in
this paper focuses on vehicle detection in rural environments
and its applications to oil and gas pipeline threat detection.
Automatic vehicle detection by unmanned aerial vehicles (UAVs)
will replace current pipeline patrol services, which rely on pilots'
visual inspection of the pipeline from low-altitude, high-risk
flights that are often restricted by weather conditions. Our
research compares a set of feature extraction methods applied
to this specific task, together with four classification techniques. The
best system achieves an average 85% vehicle detection rate with
1800 false alarms per flight hour over a large variety of areas,
including vegetation, rural roads and buildings, and lakes and rivers,
in imagery collected under varying daytime illumination and across
seasonal changes over one year.
I. INTRODUCTION
Vehicles and heavy digging equipment in particular pose
a potentially catastrophic threat to the vast network of oil
and gas pipelines in rural areas. Current aerial patrol pilots
identify these threats while maintaining their airplanes at a
safe altitude above the ground. This task becomes particularly
difficult in adverse weather, which often reduces the frequency
of surveillance flights.
The system described in this paper (Figure 1) is an
attempt to allow unmanned aerial vehicles (UAVs) flying
at higher altitudes to automatically detect ground vehicles in
rural areas. Our approach uses optical images captured by
a nadir-looking commercial camera installed on the airplane
wing and determines the locations of vehicles within each
captured image. The main challenges for the system are
3D image orientation, image blur due to airplane vibration,
variations in illumination conditions and seasonal changes.
There is a vast literature on vehicle detection from aerial
imagery. Zhao and Nevatia [12] explore a car recognition
method from low resolution aerial images. Hinz [6] discusses
a vehicle detection system which attempts to match vehicles
against a 3D-wireframe model in an adaptive “top-down”
manner. Kim and Malik [7] introduce a faster 3D-model-based
detection method that uses probabilistic line feature grouping
to increase performance and detection speed.
This work was not supported by PRCI
Joshua Gleason and George Bebis are with the University of Nevada,
Reno, gleaso22@gmail.com and bebis@cse.unr.edu
Ara Nefian is with Carnegie Mellon University and the NASA Ames
Research Center, ara.nefian@nasa.gov
Xavier Bouyssounouse and Terry Fong are with the NASA Ames
Research Center, xavier.bouyssounouse@nasa.gov and
terry.fong@nasa.gov

The vehicle detection system described in this paper uses
nadir aerial images and compares experimental results for
several feature extraction techniques with strong discriminative
power between vehicles and background, combined with a set of
statistical classifiers including nearest neighbor, random forests
and support vector machines. The method analyzes each location
in an image to determine whether a target is present. Due to the
large number of analyzed locations and the real-time requirements,
the method starts with a fast detection stage that looks for
man-made objects and rejects most of the background. The second
stage of the algorithm refines the detection results using a
binary vehicle-versus-background classifier.
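As an illustrative sketch of the second-stage comparison, the snippet below cross-validates the three classifier families named above on labeled feature vectors. scikit-learn, the `compare_classifiers` helper, and its parameter settings are assumptions made for this example, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_classifiers(features, labels):
    """Cross-validate each classifier family on vehicle/background
    feature vectors and report the mean accuracy per family.
    (Hypothetical helper; parameters are illustrative defaults.)"""
    models = {
        "nearest neighbor": KNeighborsClassifier(n_neighbors=3),
        "random forest": RandomForestClassifier(n_estimators=100,
                                                random_state=0),
        "svm": SVC(kernel="rbf", gamma="scale"),
    }
    return {name: cross_val_score(model, features, labels, cv=3).mean()
            for name, model in models.items()}
```

In practice the feature vectors would come from the representation step in Figure 1; any labeled (n_samples, n_features) matrix works for the sketch.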
Fig. 1. The overall system: the input (image file, video file, or camera) feeds a fast detection stage (feature detection, feature density estimation, target clustering, color-based refinement), followed by a target classification stage (representation, classification).
The paper is organized as follows. Section II describes
the fast detection stage, Section III describes the feature
extraction and classification techniques, Section IV makes
a quantitative comparison of the techniques, and finally
Section V presents the conclusion of this work and gives
directions for future research.
II. FAST DETECTION
The first stage of the algorithm inspects every image
location at several scales and efficiently eliminates the large
majority of the background areas. The algorithm begins by
quickly detecting features using the Harris corner detector.
Next, areas containing a high density of features are detected.
The third step clusters heavily overlapping responses. In the
final step, color-based properties are used to further refine
the results.
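A minimal NumPy sketch of these four steps is shown below. The `harris_response` and `fast_detect` helpers, the grid sizes, and the thresholds are illustrative assumptions, not the paper's implementation; the clustering step is reduced to a simple suppression of heavily overlapping cells, and the color-based refinement is omitted.

```python
import numpy as np

def box_mean(a, win=3):
    """Mean over a win x win neighborhood (separable box filter)."""
    k = np.ones(win) / win
    rows = np.apply_along_axis(np.convolve, 1, a, k, "same")
    return np.apply_along_axis(np.convolve, 0, rows, k, "same")

def harris_response(gray, k=0.04, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is
    the structure tensor averaged over a local window."""
    gy, gx = np.gradient(gray.astype(float))
    sxx = box_mean(gx * gx, win)
    syy = box_mean(gy * gy, win)
    sxy = box_mean(gx * gy, win)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

def fast_detect(gray, cell=32, stride=16, rel_thresh=0.01, min_corners=5):
    """Flag cells containing a high density of Harris corners
    (man-made objects tend to be corner-rich), then suppress
    heavily overlapping responses."""
    resp = harris_response(gray)
    if resp.max() <= 0:          # no corner-like structure at all
        return []
    corners = resp > rel_thresh * resp.max()
    h, w = gray.shape
    hits = [(x, y)
            for y in range(0, h - cell + 1, stride)
            for x in range(0, w - cell + 1, stride)
            if corners[y:y + cell, x:x + cell].sum() >= min_corners]
    # Naive clustering: drop any cell within one stride of an
    # already-kept cell in both axes (a heavily overlapping response).
    kept = []
    for x, y in hits:
        if all(abs(x - kx) >= stride or abs(y - ky) >= stride
               for kx, ky in kept):
            kept.append((x, y))
    return [(x, y, cell, cell) for x, y in kept]
```

On a real frame this stage is meant only to reject background quickly; the surviving cells are handed to the second-stage classifier.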
2011 IEEE International Conference on Robotics and Automation
Shanghai International Conference Center
May 9-13, 2011, Shanghai, China