Real-time Terascale Implementation of Tele-immersion

Nikhil Kelshikar 1, Xenophon Zabulis 1, Jane Mulligan 4, Kostas Daniilidis 1, Vivek Sawant 2, Sudipta Sinha 2, Travis Sparks 2, Scott Larsen 2, Herman Towles 2, Ketan Mayer-Patel 2, Henry Fuchs 2, John Urbanic 3, Kathy Benninger 3, Raghurama Reddy 3, and Gwendolyn Huntoon 3

1 University of Pennsylvania
2 University of North Carolina at Chapel Hill
3 Pittsburgh Supercomputing Center
4 University of Colorado at Boulder

Abstract. Tele-immersion is a new medium that enables a user to share a virtual space with remote participants by creating the illusion that users at geographically dispersed locations reside in the same physical space. A person is immersed in a remote world whose 3D representation is acquired remotely, then transmitted and displayed in the viewer's environment. Tele-immersion is effective only when all three components (computation, transmission, and rendering) operate in real time. In this paper, we describe the real-time implementation of scene reconstruction on the Terascale Computing System at the Pittsburgh Supercomputing Center.

1 Introduction

Tele-immersion enables users at geographically distributed locations to collaborate in a shared space that integrates the environments at these locations. In an archetypal tele-immersion environment, as proposed at the origin of this project [8, 4], a user wears polarized glasses and a tracker capturing the head's pose. On a stereoscopic display, a remote scene is rendered so that it can be viewed from all potential viewpoints in the viewer's space. To achieve this, we propose an architecture that enables real-time, view-independent 3D scene acquisition, transmission, and rendering (see Fig. 1). Most of the computational challenges are posed by the 3D scene acquisition.
This stage associates pixels with the 3D coordinates of the world points they depict, in a view-independent coordinate system. This association can be based on finding pixel correspondences between a pair of images. The derived correspondences form the basis for computing a disparity map, from which the depth of the depicted world points and, in turn, their coordinates can be estimated. The correspondence problem is associated with many challenging open topics, such as establishing correspondences for pixels that reside in textureless image regions, detecting occlusions, and coping with specular illumination effects. It also involves a trade-off between being conservative, which produces many holes in the depth map, and being lenient, which covers everything at the cost of outliers.

Contact person: Nikhil Kelshikar, nikhil@grasp.cis.upenn.edu, University of Pennsylvania, GRASP Laboratory, 3401 Walnut St., Philadelphia, PA 19104-6228. All three sites acknowledge financial support by the NSF grant IIS-0121293.

P.M.A. Sloot et al. (Eds.): ICCS 2003, LNCS 2660, pp. 33-42, 2003. Springer-Verlag Berlin Heidelberg 2003
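As background on the disparity-to-depth step described above, the following sketch shows the standard triangulation relation for a rectified stereo pair, Z = f * B / d. The focal length and baseline values in the example are illustrative assumptions, not parameters of the system described in this paper.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a world point from a rectified stereo pair.

    For rectified cameras, Z = f * B / d, where d is the pixel
    disparity, f the focal length in pixels, and B the baseline
    between the camera centers in meters.
    """
    if disparity_px <= 0:
        # No valid correspondence, e.g. an occluded or textureless
        # pixel; a conservative matcher leaves such holes in the map.
        return None
    return focal_px * baseline_m / disparity_px

# Illustrative values: f = 800 px, B = 0.1 m, disparity = 16 px.
print(depth_from_disparity(16.0, 800.0, 0.1))  # 5.0 m
```

Note how depth resolution degrades with distance: a one-pixel disparity error matters far more for distant points (small d) than for nearby ones, which is one reason outlier rejection is critical in the reconstruction stage.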