Ghassemian, Hassan. International Archives of Photogrammetry and Remote Sensing, Vol. XXXIII, Supplement B2, Amsterdam 2000.

MULTI-SENSOR IMAGE FUSION BY INVERSE SUBBAND CODING

Hassan GHASSEMIAN
Iranian Remote Sensing Center & Department of Electrical Engineering
Tarbiat Modares University, Iran
Ghessemi@modares.ac.ir

KEY WORDS: Data fusion, Multisensor, Feature extraction, Remote sensing, Spatial data, Subband coding.

ABSTRACT

Efficient multi-resolution image fusion aims to exploit simultaneously the high spectral resolution of Landsat TM images and the high spatial resolution of SPOT panchromatic images. This paper presents a multi-resolution data fusion scheme based on subband image decomposition. It is motivated by analytical results obtained from high-resolution multispectral image data: the energy of the spectral features is packed in the lower-frequency subbands, while the spatial features (edges) are concentrated in the higher-frequency subbands. This allows the multispectral images to be spatially enhanced by adding to them the high-resolution spatial features, extracted from the higher subbands of a panchromatic image, in an inverse subband coding procedure. The technique finds application in multispectral image interpretation, as well as in medical imaging, where the same part of the body is imaged by several different modalities. In this paper, the low-resolution Landsat Thematic Mapper images (with 30-m and 75-m pixel sizes) are spatially enhanced to 10-m resolution by fusing them with the 10-m SPOT panchromatic data. The method is compared with the IHS, PCA and Brovey transform methods; results show that it preserves more spectral features with less spatial distortion.

1 INTRODUCTION

The aim of remote sensing is the acquisition and interpretation of spectral measurements made at a distant location, to obtain information about the Earth's surface.
In order to produce a high-accuracy map, the classification process assigns each pixel of the image to a particular class of interest. In remote sensing systems, pixels are observed in different portions of the electromagnetic spectrum, so the remotely sensed images vary in spectral and spatial resolution. To collect enough photons and maintain image SNR, multispectral sensors (with high spectral resolution and narrow spectral bandwidths) have a larger IFOV (i.e. a larger pixel size and lower spatial resolution) than panchromatic sensors, which have a wide spectral bandwidth and a smaller IFOV (higher spatial resolution). With appropriate algorithms it is possible to combine these data and produce imagery with the best characteristics of both, namely high spatial and high spectral resolution. This process is a form of multisensor data fusion, and the fused images may provide increased interpretation capability and more reliable results.

Multisensor image fusion combines two or more geometrically registered images of the same scene into a single image that is more easily interpreted than any of the originals. The technique finds application in the interpretation of remotely sensed multispectral image data, and fusion is performed at three different processing levels, named according to the stage at which the data fusion takes place: pixel level, feature level and decision level (Pohl 1998). At the pixel level, the lowest processing level, the physical parameters measured by the sensors are merged together (Figure 1). At this level, the higher-resolution image is used as the reference to which the lower-resolution image is geometrically registered; the lower-resolution image is therefore upsampled to match the ground sample interval of the higher-resolution image. In addition to the resampling process, the images must have some reasonable degree of similarity; the process thus requires radiometric correlation between the two images.
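The pixel-level preparation step described above can be sketched as follows. This is an illustrative fragment, not the paper's implementation: the 3:1 resolution ratio (a 30-m TM band resampled to a 10-m SPOT grid), the array sizes, and the choice of bilinear interpolation are assumptions, and geometric registration is taken as already done so that the two grids are aligned.

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_to_reference(low_res_band, reference_shape):
    """Resample a low-resolution band onto the reference (pan) grid."""
    factors = (reference_shape[0] / low_res_band.shape[0],
               reference_shape[1] / low_res_band.shape[1])
    # Bilinear interpolation (order=1); the images are assumed to be
    # geometrically registered, so only the sample interval changes.
    return zoom(low_res_band, factors, order=1)

# Synthetic stand-ins for a 30-m TM band and a 10-m SPOT pan image.
tm_band = np.random.rand(100, 100)
pan = np.random.rand(300, 300)
tm_upsampled = upsample_to_reference(tm_band, pan.shape)
print(tm_upsampled.shape)  # (300, 300)
```

After this step the upsampled multispectral band and the panchromatic image share a common ground sample interval, which is the precondition for the subband operations discussed next.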
At the feature level, image fusion requires a robust feature selection scheme for the multisensor images and a sophisticated feature extraction technique (Figure 2). The method proposed in this paper is a feature-level image fusion technique. Finally, decision-level image fusion is a method that uses value-added data, where the input images are processed individually for classification (Figure 3).
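To make the feature-level idea concrete, the following sketch uses a one-level 2-D Haar filter bank as a simplified stand-in for the paper's subband coder: the panchromatic image is decomposed into a low-frequency approximation (LL) and high-frequency detail subbands (LH, HL, HH), and the details are injected into an already-registered, upsampled multispectral band through the inverse (synthesis) transform. The Haar filters and array sizes here are illustrative assumptions, not the author's exact filter bank.

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar analysis: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row averages (low-pass)
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row differences (high-pass)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Inverse 2-D Haar synthesis (perfect reconstruction of the analysis above)."""
    h, w = ll.shape
    a = np.empty((h, 2 * w))
    d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

# Fusion sketch: keep the pan image's high-frequency (spatial) subbands and
# replace its LL subband with that of the multispectral band, so spectral
# content comes from the MS image and spatial detail from the pan image.
pan = np.random.rand(8, 8)     # synthetic pan image
ms_up = np.random.rand(8, 8)   # MS band, already on the pan grid
_, lh, hl, hh = haar_decompose(pan)
ms_ll, _, _, _ = haar_decompose(ms_up)
fused = haar_reconstruct(ms_ll, lh, hl, hh)
```

Because the analysis/synthesis pair is perfectly reconstructing, the fused image carries the low-frequency spectral content of the multispectral band and the high-frequency edge content of the panchromatic image, which is the behavior the subband energy analysis in the abstract motivates.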