MULTIMODAL 3-D TRACKING AND EVENT DETECTION VIA THE PARTICLE FILTER

Dmitry Zotkin, Ramani Duraiswami, Larry S. Davis
Perceptual Interfaces and Reality Laboratory, UMIACS
University of Maryland, College Park, MD 20742
{dz,ramani,lsd}@cs.umd.edu

ABSTRACT

Determining the occurrence of events is fundamental to developing systems that can observe and react to them. Often, this determination is based on collecting video and/or audio data and determining the state or location of a tracked object. We use Bayesian inference and the particle filter for tracking moving objects, using both video data obtained from multiple cameras and audio data obtained using arrays of microphones. The algorithms developed are applied to determining events arising in two fields of application. In the first, the behavior of a flying echolocating bat as it approaches a moving prey is studied, and the events of search, approach, and capture are detected. In the second application, we describe detection of turn-taking in a conversation between possibly moving participants recorded using a smart video-conferencing setup.

1. INTRODUCTION

An event is characterized by some typical change in the state of some object. Robust detection of events thus requires robust tracking of an object's state. Typically this state includes the object's position, either in an absolute frame or relative to some other object. Further, to detect an event change, the detecting system must focus its attention on the object location (e.g., the position of a human) at a given time. Systems that seek to recognize events in applications such as surveillance, creating perceptually immersive realities, or HCI must thus be able to focus on particular object locations in order to obtain a better view of the actions taking place. This focusing can involve the zoom and focus of an active camera, enhanced audio from the spot obtained via a microphone array beamforming procedure, or some other attention-focusing mechanism.
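The microphone array beamforming procedure mentioned above can be illustrated with a minimal time-domain delay-and-sum sketch. The function name, array geometry, sampling rate, and speed of sound below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def delay_and_sum(signals, mic_positions, source, c=343.0, fs=16000):
    """Minimal delay-and-sum beamformer (illustrative sketch).

    signals:       array of shape (n_mics, n_samples), one row per channel
    mic_positions: array of shape (n_mics, n_dims) in meters
    source:        candidate source location, shape (n_dims,)
    """
    # Propagation delay from the source to each microphone, in seconds.
    delays = np.linalg.norm(mic_positions - source, axis=1) / c
    delays -= delays.min()
    # Advance each channel by its (rounded, integer-sample) delay.
    shifts = np.round(delays * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([ch[s:s + n] for ch, s in zip(signals, shifts)])
    # Coherent average: signal from the candidate location adds in phase.
    return aligned.mean(axis=0)
```

Sound originating at the candidate location adds coherently after alignment, while sound from elsewhere is attenuated, which is what lets the array "focus" on a spot.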
All of these require robust tracking of the position of an object. The tracking algorithm might require a priori knowledge of the nature of the actions that are of interest, and it would be desirable to be able to incorporate data from any available active sensors.

We develop a multimodal sensor fusion framework based on particle filters and apply it to tracking and event detection using audio and video modalities. We show that the performance of the multimodal tracker is superior to that of unimodal tracking, and that the availability of information from a complementary modality simplifies the event detection task.

The developed algorithm is an application of sequential Monte Carlo methods (also known as particle filters) to 3-D tracking using two calibrated cameras and a microphone array. Particle filters were introduced to the vision community in the form of the CONDENSATION algorithm [1]. Improvements of a technical nature to the CONDENSATION algorithm were provided by Isard and Blake [2] (importance sampling), MacCormick and Blake [5], Li and Chellappa [11], and Philomin et al. [10]. The algorithm has seen application to tracking people in video and to face tracking.

The reason these algorithms have attracted much interest is that they offer a framework for dynamic state estimation in which the underlying probability density functions (pdfs) need not be Gaussian, and the state and measurement equations can be nonlinear; such situations are commonly encountered in vision. The methods are relatively robust to noise and recover from tracking misses in intermediate frames. In addition, they are relatively simple to implement and allow one to conveniently combine multiple feature types in the same tracker.

This paper is arranged as follows. In Section 2, the notation and the basic equations for the video tracker, the audio tracker, and the particle filter are introduced.
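To make the sequential Monte Carlo recursion concrete, the following sketch implements a bootstrap particle filter for a 1-D random-walk state observed in Gaussian noise. This is a deliberately simplified illustration under assumed noise parameters, not the paper's 3-D audio-video tracker:

```python
import numpy as np

def particle_filter(observations, n_particles=500, motion_std=0.5,
                    obs_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state (sketch).

    Returns the posterior-mean state estimate after each observation.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 5.0, n_particles)  # diffuse prior
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the motion model.
        particles = particles + rng.normal(0.0, motion_std, n_particles)
        # Update: weight particles by the observation likelihood p(z | x).
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Estimate: posterior mean of the weighted particle set.
        estimates.append(float(np.sum(weights * particles)))
        # Resample (systematic) to combat weight degeneracy.
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[np.minimum(idx, n_particles - 1)]
    return estimates
```

Because the posterior is represented by weighted samples rather than a parametric form, the same predict/weight/resample loop accommodates non-Gaussian pdfs and nonlinear measurement models, and fusing modalities amounts to multiplying their likelihoods into the weights.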
In Section 3 we introduce two event detection problems for which the multimodal action recording setup is available (a flying bat in a dark room and multiple speakers in an office environment). In Section 4, we study the performance of our tracking algorithm on Monte Carlo simulations and present results of tracking and event detection for real and simulated data in both the bat and the videoconferencing experiments. Section 5 concludes the paper with an assessment of the algorithm and a discussion of future work needed to achieve better performance on the tracking problem.

2. FORMULATION

We describe the particle filter in general terms first. Then the motion model and the posterior probability distributions