A. Sanfeliu et al. (Eds.): CIARP 2004, LNCS 3287, pp. 84–91, 2004.
© Springer-Verlag Berlin Heidelberg 2004
Decision Fusion for Object Detection and Tracking
Using Mobile Cameras
Luis David López Gutiérrez and Leopoldo Altamirano Robles
National Institute of Astrophysics, Optics and Electronics, Luis Enrique Erro No. 1,
Santa María Tonantzintla, Puebla, 72840 México
luis_david@ccc.inaoep.mx, robles@inaoep.mx
Abstract. In this paper an approach to automatic target detection and tracking
using multisensor image sequences in the presence of camera motion is
presented. The approach consists of three parts. The first part uses a motion
segmentation method for target detection in the visible image sequence. The
second part uses a background model for detecting objects present in the in-
frared sequence, which is preprocessed to eliminate the camera motion. The
third part combines the individual results of the detection systems; it extends
the Joint Probabilistic Data Association (JPDA) algorithm to handle an arbitrary
number of sensors. Our approach is tested on image sequences with high
clutter in dynamic environments. Experimental results show that the system de-
tects 99% of the targets in the scene, and that the fusion module removes 90% of
the false detections.
1 Introduction
The task of automatically detecting and tracking regions of interest is a fundamental
problem in computer vision; such systems are of great importance in military and
surveillance applications. A lot of work has already been carried out on the detection
of multiple targets. However, detecting and tracking small, low-contrast targets in a
highly cluttered environment still remains a very difficult task.
The most critical factor of any automatic detection system is its ability to find
an acceptable compromise between the probability of detection and the number of
false target detections; these two types of errors give rise to false alarms and false
rejections. In a single-sensor detection system, unfortunately, reducing one type of
error comes at the price of increasing the other. One way to address this problem is
to use more than one sensor and to combine the data obtained by these different
expert systems. In this paper we propose an approach to the automatic object detection
problem based on decision fusion; our principal contribution is to improve target
detection and tracking results without specializing the algorithms for a particular
task. The approach was tested on a set of image sequences obtained from mobile
cameras.
The paper is organized as follows. Section 2 introduces and briefly describes the
models considered. Section 3 gives an overview of the approach. Sections 4 and 5
describe the algorithms used to detect objects of interest in visible and infrared
image sequences, respectively. Section 6 describes the method for combining the
results obtained by the two algorithms. Several results that validate our approach
are reported in Section 7, and finally Section 8 contains concluding remarks.