Calibrating a Wide-Area Camera Network with Non-Overlapping Views Using Mobile Devices

THOMAS KUO, ZEFENG NI, SANTHOSHKUMAR SUNDERRAJAN, and B. S. MANJUNATH, University of California, Santa Barbara

In a wide-area camera network, cameras are often placed such that their views do not overlap. Collaborative tasks such as tracking and activity analysis still require discovering the network topology, including the extrinsic calibration of the cameras. This work addresses the problem of calibrating a fixed camera in a wide-area camera network in a global coordinate system so that the results can be shared across calibrations. We achieve this by using commonly available mobile devices such as smartphones. At least one mobile device takes images that overlap with a fixed camera's view and records the GPS position and 3D orientation of the device when an image is captured. These sensor measurements (including the image, GPS position, and device orientation) are fused in order to calibrate the fixed camera. This article derives a novel maximum likelihood estimation formulation for finding the most probable location and orientation of a fixed camera. This formulation is solved in a distributed manner using a consensus algorithm. We evaluate the efficacy of the proposed methodology with several simulated and real-world datasets.

Categories and Subject Descriptors: I.4.8 [Image Processing and Computer Vision]: Scene Analysis—Sensor fusion; C.2.4 [Computer-Communication Networks]: Distributed Systems—Distributed applications

General Terms: Algorithms, Design, Experimentation

Additional Key Words and Phrases: Geo-calibration, mobile devices, multimodal sensors, GPS and orientation measurements, consensus algorithm

ACM Reference Format:
Thomas Kuo, Zefeng Ni, Santhoshkumar Sunderrajan, and B. S. Manjunath. 2014. Calibrating a wide-area camera network with non-overlapping views using mobile devices. ACM Trans. Sensor Netw. 10, 2, Article 26 (January 2014), 24 pages.
DOI: http://dx.doi.org/10.1145/2530284

1. INTRODUCTION

With the recent advances in technology and the availability of cheap network-enabled cameras, there is an opportunity to deploy smart camera networks for applications such as detection, tracking, and pose and behavior analysis [Matei et al. 2011; Aghajan and Cavallaro 2009], as well as modeling and visualizing human activities [Sankaranarayanan et al. 2008]. Many of these methods require that the cameras be initially calibrated. In the real world, algorithms implemented for camera networks must also be mindful of limited communication bandwidth, processor speed, memory size, and power consumption. These constraints prioritize the development of distributed

This work is supported by ONR grants #N00014-10-1-0478 and #N00014-12-1-0503. The authors would also like to acknowledge the support of ONR/DURIP grant #N00014-08-1-0791, which enabled UCSB's camera network, SCALLOPSNet.

Authors' addresses: T. Kuo, Z. Ni, S. Sunderrajan, and B. S. Manjunath, Electrical and Computer Engineering Department, University of California, Santa Barbara, CA 93106; email: thekuo@ece.ucsb.edu.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481, or permissions@acm.org.
© 2014 ACM 1550-4859/2014/01-ART26 $15.00
DOI: http://dx.doi.org/10.1145/2530284

ACM Transactions on Sensor Networks, Vol. 10, No. 2, Article 26, Publication date: January 2014.