Integrating Synthetic Objects Into Real Scenes

Francisco Abad, Emilio Camahort, Roberto Vivó
Sección de Informática Gráfica
Departamento de Sistemas Informáticos y Computación
Universidad Politécnica de Valencia
46020 Valencia, Spain
{fjabad,camahort,rvivo}@dsic.upv.es

Abstract

This paper presents a methodology for integrating synthetic objects into real scenes. We take a set of photographs of the real scene and build a simple image-based model. We use high dynamic range images to build an accurate representation of the lighting in the scene. Then we insert a synthetic object into the model and compute its illumination and shading using the lighting information. Illumination changes produced by the synthetic object are also applied to real-scene objects located nearby. We show how easy it is to achieve photo-realistic results without specialized hardware. Our approach takes advantage of techniques such as automatic camera calibration, high dynamic range image capture and image-based lighting.

Keywords

Digital composition, image-based lighting, special effects, RADIANCE

1. INTRODUCTION

Achieving photo-realistic synthetic images has been one of the main objectives of Computer Graphics since its beginnings. The complexity of the objects that make up the real world is the main obstacle to attaining such realism [Foley93]. A family of techniques capable of creating photo-realistic images uses photographs to build synthetic models. Such models made of photographs allow processing and rendering of new synthetic views. These techniques are known as image-based rendering, and several well-known computer graphics techniques follow this principle: textures, warping, morphing [Beier92], reflection mapping [Debevec01], and light fields [Levoy96]. One of the advantages of image-based modeling and rendering is the possibility of combining synthetic and real-world models.
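The high dynamic range capture step mentioned above can be illustrated with a minimal sketch: several differently exposed photographs of the same scene are merged into a single radiance map by weighting each pixel according to how well exposed it is. This is a simplification of Debevec-style HDR recovery that assumes a linear camera response; the function name `merge_exposures` is illustrative and not part of the paper's pipeline.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge LDR exposures (float arrays in [0, 1]) into an HDR radiance map.

    Uses a 'hat' weight that trusts mid-range pixels and discards pixels that
    are nearly black or saturated. Assumes a linear camera response, so each
    pixel's radiance estimate is simply value / exposure_time.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: 1 at 0.5, 0 at 0 and 1
        num += w * (img / t)               # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)     # weighted average; guard empty pixels


# Simulate three exposures of a scene whose radiance exceeds the sensor range.
radiance = np.array([0.2, 1.5, 3.0])       # true radiance, partly > 1.0
times = [0.25, 1.0, 4.0]                   # exposure times in seconds
images = [np.clip(radiance * t, 0.0, 1.0) for t in times]  # clipped LDR shots
hdr = merge_exposures(images, times)       # recovers values no single shot holds
```

In this toy example no single exposure can record all three radiance values, but the merged map recovers them because each pixel is read from the exposures where it is well inside the sensor's range.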
Such models can be applied to the production of images and animated sequences for cinema, advertising, virtual reality, augmented reality and other applications. Cinema has been the driving force behind this kind of technique: the optical compositing of early films has evolved into blue-screening, now commonly used not only in cinema but also in TV.

Another critical application is augmented reality, a technology used to enhance the sensory perception of human operators. Its application areas include telemedicine, microsurgery, entertainment and remote machine operation, among others [Azuma97]. There are three problems related to the production of augmented reality systems [Sato99]:

- Geometry correspondence: the camera parameters of the virtual objects have to match the camera parameters of the real scene.
- Illumination correspondence: the virtual objects must be lit as if they were in the real scene; they also have to cast shadows.
- Temporal correspondence: the virtual objects must move consistently with their environment.

The technique presented in this work shares most of the requirements of augmented reality. Our main goal is to present a technique for integrating synthetic objects into a real scene; an additional goal is to obtain results that are as realistic as possible. To simplify the problem, however, we focus on the generation of still images from static scenes. That is, we do not address the problem of guaranteeing temporal correspondence.

The main problem with current systems is that they require extensive user interaction. Typical approaches work by trial and error, thus requiring a lot of time and effort. An example is the definition of light-source parameters in the