Distributed Coalition Formation in Visual Sensor Networks: A Virtual Vision Approach

Faisal Qureshi (1) and Demetri Terzopoulos (2,1)
(1) Dept. of Computer Science, University of Toronto, Toronto, ON, Canada, faisal@cs.toronto.edu
(2) Computer Science Dept., University of California, Los Angeles, CA, USA, dt@cs.ucla.edu

Abstract. We propose a distributed coalition formation strategy for collaborative sensing tasks in camera sensor networks. The proposed model supports task-dependent node selection and aggregation through an announcement/bidding/selection strategy, and it resolves node-assignment conflicts by solving an equivalent constraint satisfaction problem. Our technique is scalable, as it lacks any central controller, and it is robust to node failures and imperfect communication. Another unique aspect of our work is that we advocate visually and behaviorally realistic virtual environments as a simulation tool in support of research on large-scale camera sensor networks. Specifically, our visual sensor network comprises uncalibrated static and active simulated video surveillance cameras deployed in a virtual train station populated by autonomously self-animating pedestrians. The readily reconfigurable virtual cameras generate synthetic video feeds that emulate those produced by real surveillance cameras monitoring public spaces. Our simulation approach, which runs on high-end commodity PCs, has proven beneficial because this type of research would be difficult to carry out in the real world, given the impediments to deploying and experimenting with a suitably complex camera network in extensive public spaces.

Key words: Camera sensor networks, Sensor coordination and control, Distributed coalition formation, Video surveillance

1 Introduction

Camera sensor networks are becoming increasingly important to next-generation applications in surveillance, in environment and disaster monitoring, and in the military.
In contrast to current video surveillance systems, camera sensor networks are characterized by smart cameras, large network sizes, and ad hoc deployment.^3 These systems lie at the intersection of machine vision and sensor networks, raising issues in the two fields that must be addressed simultaneously. The effective visual coverage of extensive areas (public spaces, disaster zones, and battlefields) requires multiple cameras to collaborate towards common sensing goals. As the size of the camera network grows, it

^3 Smart cameras are self-contained vision systems, complete with image sensors, power circuitry, communication interfaces, and on-board processing capabilities [1].
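To make the announcement/bidding/selection strategy mentioned in the abstract concrete, the following is a minimal, single-process sketch. The names (`Task`, `Camera`, `form_coalitions`), the distance-based bid metric, and the backtracking search are illustrative assumptions for exposition; they are not the paper's actual protocol or its constraint-satisfaction formulation, which operates over a distributed network without a central solver.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Task:
    name: str
    position: tuple      # (x, y) location to be observed
    cameras_needed: int  # coalition size requested by the announcement

@dataclass(frozen=True)
class Camera:
    cam_id: int
    position: tuple

    def bid(self, task):
        # Illustrative relevance metric: cameras closer to the target bid higher.
        dx = self.position[0] - task.position[0]
        dy = self.position[1] - task.position[1]
        return 1.0 / (1.0 + (dx * dx + dy * dy) ** 0.5)

def form_coalitions(tasks, cameras):
    """Announcement/bidding/selection with conflict resolution.

    Every camera bids on every announced task; a backtracking search then
    picks disjoint coalitions (each camera serves at most one task),
    preferring higher-bidding cameras. This toy centralized search stands
    in for the paper's constraint-satisfaction treatment of conflicts.
    """
    # Bidding phase: rank the cameras for each task by their bids.
    bids = {t.name: sorted(cameras, key=lambda c, t=t: c.bid(t), reverse=True)
            for t in tasks}

    # Selection phase: backtrack over disjoint coalition assignments.
    def solve(i, used, assignment):
        if i == len(tasks):
            return assignment
        task = tasks[i]
        free = [c for c in bids[task.name] if c.cam_id not in used]
        for group in combinations(free, task.cameras_needed):
            result = solve(i + 1,
                           used | {c.cam_id for c in group},
                           {**assignment,
                            task.name: [c.cam_id for c in group]})
            if result is not None:
                return result
        return None  # no conflict-free assignment exists

    return solve(0, set(), {})
```

When two tasks both favor the same camera, the search assigns that camera to one task and forces the other to fall back on its next-best bidder, which is the essence of the conflict resolution described in the abstract.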