2230 IEEE SENSORS JOURNAL, VOL. 14, NO. 7, JULY 2014

Adaptive Compressive Fusion for Visible/IR Sensors

Amina Jameel, Abdul Ghafoor, and Muhammad Mohsin Riaz

Abstract—An image fusion scheme is proposed for visible and infrared sensors, which adaptively adjusts the number of compressive measurements depending on the amount of information. Simulation results show that the proposed scheme yields a significant improvement over existing schemes.

Index Terms—Image fusion, compressive sensing, entropy.

I. INTRODUCTION

Image fusion combines the information of different sensors (such as visible and infrared (IR)) at the pixel, feature, and decision levels [1]. Among others, entropy-dependent fusion schemes are particularly useful, since entropy is directly linked with image information [1], [2]. Compressive sensing (CS) based fusion schemes [3]–[6] decompose the images using characteristics such as sparsity and over-completeness. The CS and standard-deviation-based scheme [4] has a limited application area and a non-optimal sparse representation. The CS and wavelet (shearlet) transform based scheme [6] effectively captures smooth contours. However, these schemes [4], [6] operate on the whole image and, as a consequence, sometimes yield unwanted artifacts.

State-of-the-art fusion schemes [3], [7] operate on overlapping patches rather than the whole image. The K-means singular value decomposition based scheme [7] suffers from high computational complexity. Simultaneous orthogonal matching pursuit (OMP) is used in [3] to improve the time complexity. However, the number of compressive measurements (CMs) is the same for each patch in both [3] and [7].

An entropy-dependent CS-based image fusion scheme is proposed for visible and IR sensors. The number of CMs is adjusted adaptively depending on the amount of information. Simulation results show that the proposed scheme yields accurate and efficient fusion.
Manuscript received December 20, 2013; accepted April 16, 2014. Date of publication April 28, 2014; date of current version May 29, 2014. The associate editor coordinating the review of this paper and approving it for publication was Prof. Alexander Fish. A. Jameel and A. Ghafoor are with the Military College of Signals, National University of Sciences and Technology (NUST), Rawalpindi 46000, Pakistan (e-mail: amina.phd@students.mcs.edu.pk; abdulghafoor-mcs@nust.edu.pk). M. M. Riaz is with the Centre for Advanced Studies in Telecommunication, COMSATS, Islamabad 44000, Pakistan (e-mail: mohsin.riaz@comsats.edu.pk). Digital Object Identifier 10.1109/JSEN.2014.2320721

II. PROPOSED IMAGE FUSION

Let $I_F$ be the fused image obtained by combining input images $I_A = [I_{A_1}, I_{A_2}, \ldots, I_{A_N}]$ and $I_B = [I_{B_1}, I_{B_2}, \ldots, I_{B_N}]$ of the same size $M \times N$. The vector $V_A = [I_{A_1}^T, I_{A_2}^T, \ldots, I_{A_N}^T]^T$ (of size $MN \times 1$) is obtained by concatenating the columns of $I_A$ (a similar procedure is adopted for $I_B$). The sparse representation $V_{A_S}$ of $V_A$ (i.e., constructing the signal as a linear combination of atoms $\phi_l$) is [3], [7],

$$V_A = \Phi V_{A_S} = \sum_{l=1}^{L} V_{A_S}(l)\,\phi_l \quad (1)$$

where the dictionary $\Phi = [\phi_1, \phi_2, \ldots, \phi_L]$, with $L > MN$, is over-complete. The constrained minimization solution of the above underdetermined problem is,

$$\hat{V}_{A_S} = \arg\min \|V_{A_S}\|_0 \quad \text{subject to} \quad V_A = \Phi V_{A_S} \quad (2)$$

The above optimization is an NP-hard problem; hence, approximate solutions are considered [3]. The OMP algorithm is used to solve the sparse approximation problem [3].

Note that the existing schemes [3], [7] use a fixed number of CMs. However, the information in some images is concentrated in only a certain part. Instead of using the same number of CMs $\eta$ for every patch, an appropriate solution is to adjust the compression by taking into account the information in a specific patch. To the best of the authors' knowledge, the idea of adjusting the number of CMs for image fusion has not been explored before.
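As a rough illustration of the sparse-coding step in (1)–(2), the following Python sketch implements a minimal greedy OMP loop over a random over-complete dictionary. The dictionary, patch length, and sparsity level here are illustrative assumptions, not the trained dictionary or parameters used in the paper.

```python
import numpy as np

def omp(Phi, v, n_nonzero):
    """Greedy OMP: approximate argmin ||s||_0 subject to v = Phi @ s."""
    residual = v.copy()
    support = []
    s = np.zeros(Phi.shape[1])
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(idx)
        # least-squares fit of v on the atoms selected so far
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], v, rcond=None)
        residual = v - Phi[:, support] @ coeffs
    s[support] = coeffs
    return s

rng = np.random.default_rng(0)
n, L = 64, 256                       # patch length n < L (over-complete)
Phi = rng.standard_normal((n, L))
Phi /= np.linalg.norm(Phi, axis=0)   # unit-norm atoms
s_true = np.zeros(L)
s_true[[3, 40, 100]] = [1.5, -2.0, 1.0]   # a 3-sparse test signal
v = Phi @ s_true
s_hat = omp(Phi, v, n_nonzero=3)
print(np.linalg.norm(v - Phi @ s_hat))    # residual norm of the approximation
```

The simultaneous OMP used in [3] extends this idea by forcing both input patches to share one support; the single-signal loop above only sketches the core greedy selection.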
Different statistical measures (e.g., mean, variance, and entropy) can be used to quantify the information in an image. Entropy is used here because a higher entropy value indicates more information content in the image, and vice versa. The entropy $h_m$ of the $m$th row is,

$$h_m = -\sum_k p_m(k) \log\big(p_m(k)\big) \quad (3)$$

where $p_m(k)$ is the probability of intensity $k$ in the $m$th row. We observed that the histogram of entropy values is concentrated around the mean value (0.4–1.2). Rows below this range contain less information and require fewer CMs. A threshold value $T$ is defined as,

$$T = 0.75\,\bar{h} \quad (4)$$

where $\bar{h} = \frac{1}{M}\sum_{m=1}^{M} h_m$ is the mean entropy. The factor 0.75 was chosen because it makes the threshold approximately equal to the lower limit of the required entropy range. A lower threshold value may slightly improve the results, but at the expense of more CMs. The number of CMs is,

$$\eta = \begin{cases} 64 & \text{if } h_m \geq T \\ 16 & \text{if } h_m < T \end{cases} \quad (5)$$

These values provide a trade-off between accuracy and the number of CMs. The maximum-absolute rule is then applied to obtain the fused measurement $\hat{V}_{F_S}$, i.e.,

$$\hat{V}_{F_S} = \chi\big(|\hat{V}_{A_S}|, |\hat{V}_{B_S}|\big) \quad (6)$$

where $\chi$ is the point-wise maximum operator.

1530-437X © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
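The adaptive-measurement rule in (3)–(6) can be sketched in Python as follows. This is a minimal illustration, assuming 8-bit intensities and a 256-bin histogram per row; the function names, the toy image, and the coefficient vectors are illustrative, not from the paper.

```python
import numpy as np

def row_entropies(img, bins=256):
    """Shannon entropy h_m of each row, Eq. (3): h_m = -sum_k p_m(k) log p_m(k)."""
    h = np.empty(img.shape[0])
    for m, row in enumerate(img):
        counts, _ = np.histogram(row, bins=bins, range=(0, 256))
        p = counts / counts.sum()
        p = p[p > 0]                      # drop empty bins to avoid log(0)
        h[m] = -np.sum(p * np.log(p))
    return h

def measurements_per_row(img, factor=0.75, high=64, low=16):
    """Adaptive CM count, Eqs. (4)-(5): eta = 64 if h_m >= T else 16,
    where T = factor * mean entropy."""
    h = row_entropies(img)
    T = factor * h.mean()
    return np.where(h >= T, high, low)

def fuse_max_abs(sA, sB):
    """Maximum-absolute fusion rule, Eq. (6): keep the coefficient with
    the larger magnitude at each position."""
    return np.where(np.abs(sA) >= np.abs(sB), sA, sB)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 32)).astype(float)   # toy 8-row image
eta = measurements_per_row(img)                          # one eta per row
fused = fuse_max_abs(np.array([1.0, -3.0]), np.array([-2.0, 2.0]))
print(fused)   # prints [-2. -3.]
```

Note that the paper applies $\eta$ per overlapping patch; the per-row version above mirrors the row-entropy definition of Eq. (3) for simplicity.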