A Network of Sensors Based Framework for Automated Visual Surveillance

Ruth Aguilar-Ponce, Ashok Kumar, J. Luis Tecpanecatl-Xihuitl and Magdy Bayoumi
Center for Advanced Computer Studies
University of Louisiana at Lafayette
PO Box 4330, Lafayette, LA USA 70504-4330
ak@cacs.louisiana.edu

Abstract

This paper presents an architecture for sensor-based, distributed, automated scene surveillance. The goal of the work is to employ wireless visual sensors, scattered in an area, for the detection and tracking of objects of interest and their movements through the application of agents. The architecture consists of several units, known as Object Processing Units, that are wirelessly connected in clusters. Cluster heads communicate with Scene Processing Units, which are responsible for analyzing all the information sent by the former. Object detection and tracking are performed by cooperative agents, named Region and Object Agents. The area under surveillance is divided into several sub-areas, and one camera is assigned to each sub-area. A Region Agent is responsible for monitoring a given sub-area. First, background subtraction is performed on the scene captured by the camera. The computed foreground mask is then passed to the Region Agent, which creates Object Agents dedicated to tracking the detected objects. Object detection and tracking are carried out automatically on the Object Processing Unit. The tracking information and foreground mask are sent to a Scene Processing Unit, which analyzes this information, determines whether a threat pattern is present in the scene, and performs the appropriate action.

Key words: cooperative agents, sensor network, object detection, image understanding

1. Introduction and Related Work

The changing world scenario, where security is at risk from several factors including industrial