Thesis Topic: Relighting Unstructured Lumigraphs Using Texture Synthesis

George Drettakis and Sylvain Lefebvre

November 23, 2006

Since the introduction of image-based rendering, one major limitation has been the inability to modify lighting, since it is captured in the initial photographs or video used to create the renderings. Much work has nonetheless been done on relighting. One current trend relies on complex multi-projector/camera setups (see for example [CL05]) or on the various light-stage derivatives [WGT+05]. Another approach requires detailed geometry of the scenes, either scanned [MG97] or recovered as disparity-map depth information [MKL+02].

The overall goal of this thesis is to relight unstructured lumigraphs: i.e., take a small number of images (10-20), calibrate the pictures and create a simple geometric proxy, process the input images so that they can be relit, and relight the scenes interactively using an unstructured lumigraph [BBM+01]. We limit ourselves to outdoor scenes for now. The “killer demo” would be to take a set of photos of an outdoor scene and show that the lighting can be changed from morning to evening interactively (for example, a scene with a small tree and bushes around it).

Recent advances in intrinsic images [TFA03, Wei01], and especially those based on the properties of outdoor lighting which do not require multiple illumination conditions [FHD02, FDL04], could allow us to extract lighting/reflectance image pairs from single images. These techniques give relatively good shadow removal (at least for cast shadows), which could be applied to all the input images. Shadow “mattes” for each image can also be extracted. The quality of these shadow removal techniques, however, is not always optimal. Most of the methods use some error-optimization approach.
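To make the intrinsic-image model concrete, here is a minimal sketch (not the cited methods themselves, and the function names are illustrative): an image I is modeled as the per-pixel product of a reflectance layer R and a shading layer S, so given an estimated shading or shadow matte, reflectance is recovered by division and relit by multiplying with a new shading term.

```python
import numpy as np

def decompose(image, shading, eps=1e-6):
    """Recover reflectance R from the model I = R * S (per-pixel product)."""
    return image / np.maximum(shading, eps)  # guard against division by zero

def relight(reflectance, new_shading):
    """Re-apply a different shading layer to the recovered reflectance."""
    return reflectance * new_shading

# Toy example: a 2x2 grayscale image whose left column lies in shadow.
image = np.array([[0.2, 0.8],
                  [0.2, 0.8]])
shading = np.array([[0.25, 1.0],
                    [0.25, 1.0]])          # shadow matte: left column darkened
reflectance = decompose(image, shading)    # uniform 0.8 albedo recovered
evening = relight(reflectance, np.full((2, 2), 0.5))  # dimmer "evening" light
```

In practice the shading estimate comes from the intrinsic-image techniques above; the sketch only shows why a purely multiplicative term cannot explain noise or mis-detected shadow boundaries, which motivates the synthesis approach that follows.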
It is thus easy to identify the problematic regions and to develop a new approach that combines the advantages of texture synthesis with optimization to improve the results. In particular, regions in shadow may exhibit a low signal-to-noise ratio; this noise cannot be removed by a simple multiplicative illumination term. Some shadows may also not be correctly captured, and visual artifacts will appear, for instance at shadow boundaries. One idea to avoid these issues is to perform a constrained texture synthesis in these areas, using lit areas as an input exemplar [WL00, HJO+01]. Note that, similarly, we can use shadowed areas as an exemplar to synthesize shadows under new lighting conditions. Contrary to in-painting approaches, where the constraint is only defined at the boundaries of the regions to be filled, here we can use the content of the shadowed region as a guide to the synthesis algorithm. We can also hope to exploit information from multiple viewpoints to improve synthesis quality.

One major challenge is to reconstruct the local orientation and scale information necessary for the synthesis process, as well as some position and normal information for relighting. Since we have multiple calibrated images at our disposal, we can apply simple depth-from-stereo techniques to create “clouds of points” and determine some normal information. This data can be maintained in volumetric form, and appropriate coding/access techniques can be applied, such as multi-resolution data structures [BD02] or compact hash tables [LH06]. Evidently, the position and normal information will not be complete; texture-synthesis-type approaches can be used to provide the missing information. While some work has been done on surface hole-filling [SACO04], we will also consider in-volume synthesis for complex data such as foliage.
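A standard way to obtain normal information from such a point cloud (a sketch under simple assumptions, not a committed design for the thesis) is local plane fitting: for each point, take its k nearest neighbours and use the eigenvector of the neighbourhood covariance matrix with the smallest eigenvalue as the normal direction.

```python
import numpy as np

def estimate_normal(points, index, k=8):
    """Estimate the surface normal at points[index] via PCA on its
    k nearest neighbours: the normal is the eigenvector of the local
    covariance matrix associated with the smallest eigenvalue."""
    p = points[index]
    dists = np.linalg.norm(points - p, axis=1)
    neighbours = points[np.argsort(dists)[:k + 1]]  # includes the point itself
    centered = neighbours - neighbours.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # direction of least variance

# Toy example: samples scattered on the plane z = 0.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(50, 3))
pts[:, 2] = 0.0
n = estimate_normal(pts, 0)  # should be close to (0, 0, +/-1)
```

Note the sign of the normal is ambiguous (eigenvectors are defined up to sign); with calibrated cameras the normals can be oriented toward the viewpoint that observed each point. Where neighbourhoods are too sparse for a reliable fit, the normal stays missing, which is exactly the gap the synthesis-based completion above is meant to fill.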