Analysis of Ringing Artifact in Image Fusion Using Directional Wavelet Transforms

Ashish V. Vanmali, Tushar Kataria, Samrudhha G. Kelkar, Vikram M. Gadre
Dept. of Information Technology, Vidyavardhini's C.O.E. & Tech., Vasai, Mumbai, India – 401202
Dept. of Electrical Engineering, Indian Institute of Technology, Bombay, Powai, Mumbai, India – 400076

Abstract—In the field of multi-data analysis and fusion, image fusion plays a vital role in many applications. With the invention of new sensors, the demand for high-quality image fusion algorithms has grown tremendously. Wavelet based fusion is a popular choice for many image fusion algorithms because of its ability to decouple different features of information. However, it suffers from ringing artifacts generated in the output. This paper presents an analysis of ringing artifacts in image fusion using directional wavelets (curvelets, contourlets, non-subsampled contourlets, etc.). We compare the performance of various fusion rules for directional wavelets available in the literature. The experimental results suggest that ringing artifacts are present in all types of wavelets, with the extent of the artifact varying with the type of wavelet, the fusion rule used, and the number of decomposition levels.

Index Terms—Directional Wavelets, Image Fusion, Modified Structural Dissimilarity, Ringing Artifacts

I. INTRODUCTION

Fusion of complementary information from different source images is known as image fusion. In this digital age, there is a huge influx of data captured from multiple camera settings and/or sensors imaging the same object or scene. Each captured image thus exhibits different features of the data, with varying amounts of detail of the objects. Combining these pieces of information from different images becomes imperative, as it helps in defining the big picture.
For example, in medical applications, fusing Computerized Tomography (CT), Magnetic Resonance Imaging (MRI), Functional Magnetic Resonance Imaging (fMRI), Positron Emission Tomography (PET), etc., helps in diagnosing a disease in a reliable, efficient and quick manner. In surveillance, the use of visible and infrared (IR) images is common practice. High dynamic range (HDR) imaging involves fusion of differently exposed low dynamic range (LDR) images.

The objective of image fusion is to find one image which has more information about the scene than any of the source images. The input data for image fusion algorithms is generally of two types:
• Images taken from a single sensor but with different parameters of the imaging apparatus. Examples include multi-focus images, multi-exposure images, multi-temporal images, etc.
• Images taken from multiple sensors. Examples include near infrared (NIR) images, IR images, CT, MRI, PET, fMRI, etc.

We can broadly classify image fusion techniques into four categories:
1) Component substitution based fusion algorithms [1]–[5],
2) Optimization based fusion algorithms [6]–[10],
3) Multi-resolution (wavelets and others) based fusion algorithms [11]–[15], and
4) Neural network based fusion algorithms [16]–[19].

Wavelet based multi-resolution analysis decouples data into low frequency (LF) and high frequency (HF) components at various scales. This allows separate processing of the LF and HF components, and gives more flexibility and freedom in designing better fusion algorithms. Moreover, the computational complexity of wavelet analysis-synthesis filter banks is very low. These advantages make wavelets popular for image fusion applications. Wavelet based image fusion algorithms follow three simple steps:
1) Decompose the source images into LF and HF coefficients to form wavelet pyramids.
2) Fuse the LF and HF coefficients using the prescribed fusion rule to form a fused wavelet pyramid.
3) Take the inverse transform of the fused coefficients to get the fused image.

One of the simplest fusion rules in wavelet based fusion is mean-max fusion. In mean-max fusion, the detail coefficient with the higher magnitude among the two images is chosen as the detail wavelet coefficient of the fused image. This ensures maximum detail preservation in the fused image. The approximate wavelet coefficients are generated by averaging the individual approximate wavelet coefficients. In more sophisticated algorithms, the LF and HF coefficients are weighted based on certain features such as local energy, local entropy, matching degree, and so on. A study of such fusion rules is presented by B. Zhang in [20].

Along with the separable wavelet transform, the use of non-separable wavelet transforms and other variants of the wavelet transform is also common practice in many image fusion applications. Singh and Khare [13] used Daubechies' complex wavelet transform for multi-modal medical image fusion. At

International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, www.ijert.org. NTASU - 2020 Conference Proceedings, Volume 9, Issue 3, Special Issue, 2021.
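The three-step pipeline and the mean-max rule described above can be sketched in Python with PyWavelets and NumPy. This is a minimal illustration, not code from the paper: the function name, the choice of the `db2` wavelet, and the decomposition level are our own assumptions, and the two source images are assumed to be registered, equal-sized grayscale arrays.

```python
# Sketch of mean-max wavelet fusion (illustrative; not the paper's code).
# Assumes two registered, equal-sized grayscale images as float arrays.
import numpy as np
import pywt


def mean_max_fusion(img_a, img_b, wavelet="db2", level=2):
    """Fuse two images: average the approximation (LF) coefficients,
    and at each position keep the detail (HF) coefficient with the
    larger magnitude. Returns the inverse-transformed fused image."""
    # Step 1: decompose both sources into wavelet pyramids.
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    # Step 2a: approximation (LF) band -- element-wise mean.
    fused = [(ca[0] + cb[0]) / 2.0]

    # Step 2b: detail (HF) bands -- max-magnitude selection per
    # coefficient, over (horizontal, vertical, diagonal) sub-bands.
    for details_a, details_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)
            for a, b in zip(details_a, details_b)
        ))

    # Step 3: inverse transform of the fused pyramid.
    return pywt.waverec2(fused, wavelet)
```

Note that fusing an image with itself reproduces that image (up to numerical precision), which is a quick sanity check that the rule preserves both bands; directional transforms (curvelets, contourlets) would replace `wavedec2`/`waverec2` with the corresponding analysis/synthesis pair while the fusion rule stays the same.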