Assessment of Different Fusion Methods Applied to Remote Sensing Imagery

A. L. Choodarathnakara #1, Dr. T. Ashok Kumar *2, Dr. Shivaprakash Koliwad *3, Dr. C. G. Patil *4
# Dept. of Electronics & Communication Engineering, Government Engineering College, Kushalnagar-571234, INDIA
2, 3 Dept. of Electronics & Communication Engineering and 4 Master Control Facility (MCF)
2 Vivekananda College of Engg. & Technology, Puttur (DK), 3 MCE and 4 MCF, Hassan, INDIA

Abstract— Image fusion is a formal framework for combining and utilizing data originating from different sources. It aims at producing high-resolution multispectral images from a high-resolution panchromatic (PAN) image and a low-resolution multispectral (MS) image. The fused image should contain more interpretable information than can be gained from the original images alone. Ideally, the fused image should not distort the spectral characteristics of the multispectral data, and it should retain the basic colour content of the original data. Many data fusion techniques are available, including Principal Component Analysis (PCA), the Brovey Transform (BT), the Multiplicative Transform (MT) and the Discrete Wavelet Transform (DWT). One of the major problems associated with any data fusion technique is how to assess the quality of the fused (spatially enhanced) MS image. This paper presents a comprehensive analysis and evaluation of the most commonly used data fusion techniques. The performance of each technique is analyzed both qualitatively and quantitatively, and the methods are then ranked according to the conclusions drawn from the visual analysis and the quantitative experimental results. To support this study, a Graphical User Interface (GUI) for image fusion is developed in MATLAB, making the research outcomes available to end users for commercial and economic activities.
Due to the demand for higher classification accuracy and the need for enhanced positioning precision, there is a constant need to improve the spectral and spatial resolution of remotely sensed imagery. These requirements can be fulfilled by applying image data fusion techniques to classification problems at a significantly lower expense.

Keywords— Image Fusion, Principal Component Analysis, Brovey Transform, Multiplicative Transform, Discrete Wavelet Transform.

I. INTRODUCTION

Image fusion is the process of combining two or more images by a certain algorithm to obtain a new, composite image. Its aim is to integrate different data in order to obtain more information than can be derived from any single sensor alone. Image fusion has been applied to achieve a number of objectives such as image sharpening, improving geometric correction, building a complete data set for improved classification, change detection, substituting missing information and replacing defective data [1], [2], [12], [14]. Data fusion is a formal framework expressed as means and tools for the alliance of data originating from different sources; it aims at obtaining information of "greater quality", where the exact meaning of "greater quality" depends upon the application [2]. Image fusion is mainly used to enhance visual interpretation and to improve image classification, for the following reasons:
1) Image fusion combines data from different satellite sensors. Differences in sensor parameters and acquisition phase between sensors, together with inevitable registration error, can make the classification results of fused data unsatisfactory.
2) Even when the same sensor system provides images at different spatial resolutions, the classification effect can be poor because of the low spatial resolution.
3) An unsuitable fusion or classification method can cause the classification to fail.

A. Image Fusion Principle

Fig. 1 Construction of fused pixels using PAN and XS images

The fused image is seen as a linear combination of the PAN and XS images. To create a new fused pixel, the corresponding pixels in the PAN and XS images are multiplied by weighting factors "a" and "b" respectively, and the sum of the weighted pixels forms the new fused pixel. This can be expressed as

F_k(m, n) = a(m, n) * I_0(m, n) + b(m, n) * I_k(m, n)    (1)

where m and n are the row and column numbers, and k = 1, 2, 3, ..., N (N = number of XS bands); F_k is the fused image, I_0 is the PAN image and I_k is the k-th XS band. The above relationship is only valid within a certain window, i.e., the 'a' and 'b' coefficients must be determined window by window. For simplicity of notation, the derivation uses a 1-D subscript 'i' for window locations, and the band number 'k' is ignored. For simplicity, 3x3 and 5x5 windows are used in Fig. 1. To calculate a(6, 6) and b(6, 6), a 3x3 window is

A. L. Choodarathnakara et al., (IJCSIT) International Journal of Computer Science and Information Technologies, Vol. 3 (6), 2012, 5447-5453. www.ijcsit.com
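The per-pixel weighted combination of Eq. (1) can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the authors' MATLAB implementation: the function name `fuse_linear` and the constant weight maps are assumptions for demonstration, whereas in the paper the 'a' and 'b' coefficients are estimated window by window.

```python
import numpy as np

def fuse_linear(pan, xs, a, b):
    """Fuse a PAN image with one XS band as a per-pixel linear
    combination, F_k = a * I_0 + b * I_k (cf. Eq. 1).

    pan, xs : 2-D arrays of the same shape (PAN image and one
              XS band, already co-registered and resampled)
    a, b    : per-pixel weight maps of the same shape
    """
    pan = np.asarray(pan, dtype=float)
    xs = np.asarray(xs, dtype=float)
    # Element-wise weighted sum forms each new fused pixel.
    return a * pan + b * xs

# Toy example: constant weights a = b = 0.5 over a 4x4 image.
pan = np.full((4, 4), 200.0)   # high-resolution intensity
xs = np.full((4, 4), 100.0)    # low-resolution band (resampled)
a = np.full((4, 4), 0.5)
b = np.full((4, 4), 0.5)
fused = fuse_linear(pan, xs, a, b)
print(fused[0, 0])  # 150.0
```

With spatially varying weight maps, the same call reproduces the windowed behaviour described above: each fused pixel uses the 'a' and 'b' values determined for its own window location.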