Detail Enhanced Multi-Exposure Image Fusion Based on Edge Preserving Filters

Harbinder Singh
Department of Electronics and Communication Engineering, Jaypee University of Information Technology (JUIT), P.O. Waknaghat, Teh. Kandaghat, Distt. Solan, PIN 173234, India
Correspondence to: harbinder.ece@gmail.com
Advisor: Dr. Sunil Vidya Bhooshan
Date and location of PhD thesis defense: 01 September 2016, JUIT, Waknaghat, Solan, India.
Received 15th Jul 2017; accepted 8th Oct 2017

Abstract

Recent computational photography techniques play a significant role in overcoming the limitations of standard digital cameras when handling the wide dynamic range of real-world scenes that contain both brightly and poorly illuminated areas. In many such techniques [1, 2, 3], it is desirable to fuse details from images captured at different exposure settings. One such technique is High Dynamic Range (HDR) imaging, which recovers radiance maps from photographs taken with conventional imaging equipment. A long-standing challenge in HDR imaging is the limited Dynamic Range (DR) of conventional display devices and printing technology, which prevents them from reproducing the full DR of the captured scene. Although the DR can be compressed with tone-mapping, this comes at an unavoidable trade-off of increased computational cost. It is therefore desirable to maximize the information content of the synthesized scene directly from a set of multi-exposure images, without computing an HDR radiance map or applying tone-mapping. This thesis develops a novel detail-enhanced multi-exposure image fusion approach that exploits the edge-preserving capability of adaptive filters.

Introduction

It is impossible to capture the entire DR of a real-world scene with a single exposure, owing to the limited capabilities of the Charge Coupled Device (CCD) or Complementary Metal Oxide Semiconductor (CMOS) sensor chip. The human eye is sensitive to relative rather than absolute luminance values and can observe indoor and outdoor details simultaneously, whereas a digital camera cannot record indoor and outdoor luminance variations in a single snapshot. This is because the eye adapts locally as we scan different regions of the scene and can accommodate about 10 orders of magnitude of intensity variation [4], while standard digital cameras cannot record the luminance variation of the entire scene. To circumvent this problem, modern digital photography varies the exposure time, which controls the amount of light allowed to fall on the sensor, so that details in very dark or extremely bright regions can be captured. Many current applications rely on such variable-exposure photography to determine which details of the photographed scene are captured optimally; the purpose of the exposure setting is to control the charge capacity of the CCD or CMOS sensor. In exposure fusion, compositing is performed on pixel intensity values rather than irradiance values. The approaches proposed in this thesis therefore do not require the exposure times or the Camera Response Function (CRF), which is needed to recover the HDR radiance map.
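To make the compositing idea concrete, the sketch below performs per-pixel weighted averaging of a multi-exposure stack directly on intensity values. The well-exposedness weight and the simple normalized-average fusion rule are illustrative assumptions in the spirit of exposure fusion, not the detail-enhanced algorithm developed in this thesis.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Illustrative weight: favor pixels whose intensity lies near mid-gray.
    img is a float image in [0, 1] with shape (H, W) or (H, W, 3)."""
    w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
    if w.ndim == 3:               # collapse channel weights into one per-pixel weight
        w = w.prod(axis=2)
    return w

def fuse_exposures(images, eps=1e-12):
    """Composite a multi-exposure stack directly on pixel intensities.
    images: list of float arrays in [0, 1], all with the same shape."""
    weights = np.stack([well_exposedness(im) for im in images])       # (N, H, W)
    weights = weights / (weights.sum(axis=0, keepdims=True) + eps)    # per-pixel normalization
    stack = np.stack(images)                                          # (N, H, W) or (N, H, W, 3)
    if stack.ndim == 4:
        weights = weights[..., np.newaxis]                            # broadcast over color channels
    return (weights * stack).sum(axis=0)
```

Note that such a naive per-pixel average can produce visible seams; practical exposure-fusion methods blend the weighted images across scales (e.g., with pyramids or edge-preserving filters) to avoid this.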
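As a rough illustration of the edge-preserving idea mentioned in the abstract, the following sketch splits an image into a base layer and a detail layer using a bilateral filter and then amplifies the detail layer. The filter choice, its parameters, and the boost factor are assumptions for illustration only and do not reproduce the specific adaptive filters or fusion pipeline proposed in the thesis.

```python
import cv2
import numpy as np

def enhance_details(img, boost=1.5, d=9, sigma_color=75, sigma_space=75):
    """Illustrative base/detail decomposition with an edge-preserving filter.
    img: uint8 BGR image; parameters are arbitrary illustrative choices."""
    img_f = img.astype(np.float32)
    base = cv2.bilateralFilter(img_f, d, sigma_color, sigma_space)  # smooth base layer, edges preserved
    detail = img_f - base                                           # high-frequency detail layer
    enhanced = base + boost * detail                                # amplify fine details
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```

Because the filter preserves strong edges, boosting the detail layer sharpens fine texture without introducing halos around large intensity transitions, which is the property exploited when fusing details from differently exposed images.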