International Journal of Electrical and Computer Engineering (IJECE)
Vol. 15, No. 2, April 2025, pp. 1593~1601
ISSN: 2088-8708, DOI: 10.11591/ijece.v15i2.pp1593-1601
Journal homepage: http://ijece.iaescore.com

Two-scale decomposition and deep learning fusion for visible and infrared images

Ruhan Bevi Azad (1), Hari Unnikrishnan (2), Lokesh Gopinath (1)
(1) Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, India
(2) Department of Electronics and Communication Engineering, Saveetha Engineering College, Chennai, India

Article history: Received Apr 22, 2024; Revised Nov 26, 2024; Accepted Dec 2, 2024

ABSTRACT
This paper focuses on the fusion of visible and infrared images to generate composite images that preserve both the thermal radiation information from the infrared spectrum and the detailed texture from the visible spectrum. The proposed approach combines traditional methods, such as two-scale decomposition, with deep learning techniques, specifically an autoencoder architecture. The source images undergo two-scale decomposition, which extracts high-frequency detail and low-frequency base information. Additionally, an algorithm unravelling technique establishes a principled connection between deep neural networks and traditional signal processing algorithms. The model consists of two encoders for decomposition and a decoder after the unravelling operation. During testing, a fusion layer merges the decomposed feature maps, and the decoder generates the fused image. Evaluation metrics including entropy, average gradient, spatial frequency, and standard deviation are employed to objectively assess fusion quality. The proposed approach demonstrates promise for effectively combining visible and infrared imagery across various applications.
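The two-scale decomposition and the quality metrics named in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the mean-filter size, edge-padding choice, and function names are assumptions, and the paper's actual decomposition is learned by the encoder network.

```python
import numpy as np

def two_scale_decompose(img, size=31):
    """Split an image into a low-frequency base layer (separable mean
    filter) and a high-frequency detail layer (the residual)."""
    img = img.astype(np.float64)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(size) / size
    # separable mean filter: filter rows, then columns
    base = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    base = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, base)
    return base, img - base  # base + detail reconstructs the source

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean magnitude of the combined horizontal/vertical differences."""
    f = img.astype(np.float64)
    dx = np.diff(f, axis=1)[:-1, :]
    dy = np.diff(f, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2)))

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2) from row and column difference energy."""
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

By construction the base and detail layers sum back to the source image, so the decomposition is lossless; the metrics (together with the standard deviation, `np.std`) score a fused result without needing a reference image.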
Keywords: algorithm unravelling; deep learning; near-infrared image; traditional method; two-scale decomposition; visible image

This is an open access article under the CC BY-SA license.

Corresponding Author:
Ruhan Bevi Azad
Department of Electronics and Communication Engineering, SRM Institute of Science and Technology
Kattankulathur, Chengalpattu, 603203, Tamil Nadu, India
Email: ruhanb@srmist.edu.in

1. INTRODUCTION
Image fusion is an emerging topic in image processing research. Adopting common methodologies and strategies enhances the effectiveness, interpretability, and reproducibility of image fusion; evaluating models on benchmark datasets and making code and data openly available will further advance research in remote sensing [1]. Building on the base paper involves leveraging unsupervised learning techniques and advanced loss functions for image fusion. Enhancing model interpretability and evaluating performance on benchmark datasets are crucial steps, and sharing code and data ensures transparency and reproducibility, facilitating further advances in the field [2]. The refinement fusion approach achieves superior performance in terms of image quality, target-region preservation, and efficiency [3]. Evaluating performance and validating it through experiments strengthens the effectiveness and robustness of image fusion, as does examining similarities across methods, refining the approach, and addressing the specific challenges of infrared and visible image fusion [4]. There remain potential areas for improvement and innovation in these methodologies, and leveraging insights from each approach can contribute to the development of more effective and advanced image fusion techniques [5]–[8]. The concept of a unified fusion framework, adaptive information preservation, mitigation of deep learning limitations, and the use of benchmark datasets enhances the effectiveness, versatility, and evaluation of