3974 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 53, NO. 7, JULY 2015
A Multivariate Empirical Mode Decomposition
Based Approach to Pansharpening
Syed Muhammad Umer Abdullah, Naveed ur Rehman, Muhammad Murtaza Khan, and Danilo P. Mandic
Abstract—We propose a novel class of schemes for the pan-
sharpening of multispectral (MS) images using a multivariate
empirical mode decomposition (MEMD) algorithm. MEMD is
an extension of the empirical mode decomposition (EMD) algo-
rithm, which enables the decomposition of multivariate data into
its intrinsic oscillatory scales. The ability of MEMD to process
multichannel data directly by performing data-driven, local, and
multiscale analysis makes it a perfect match for pansharpen-
ing applications, a task for which standard univariate EMD is
ill-equipped due to the nonuniqueness, mode-mixing, and mode-
misalignment issues. We show that MEMD overcomes the
limitations of standard EMD and yields improved spatial and
spectral performance in the context of pansharpening of MS
images. The potential of the proposed schemes is further demon-
strated through comparative analysis against a number of stan-
dard pansharpening algorithms on both simulated Pleiades and
real-world IKONOS data sets.
Index Terms—Image fusion, multiresolution analysis, multivariate empirical mode decomposition, pansharpening.
I. INTRODUCTION

TYPICAL remote sensing applications, such as the discrimination of land cover types and soil erosion prediction,
make use of multispectral (MS) images because of their rich
spectral content. MS images, however, exhibit poor spatial resolution, which limits their use in identifying textures or in accurately determining the shapes of different objects.
To alleviate this problem, panchromatic (PAN) images, provid-
ing high-resolution spatial data (but poor spectral resolution),
are typically fused with MS images, yielding an improved MS
image with high spatial and spectral resolution. This process
of generating a high-spatial-resolution MS image is referred to
as pansharpening [1]–[3]. A number of techniques have been
developed for pansharpening, which can be broadly classified
Manuscript received July 17, 2013; revised November 26, 2014; accepted
December 28, 2014. This work was supported by a grant from the Higher Education Commission, Government of Pakistan.
S. M. U. Abdullah is with Halliburton Worldwide Limited, Islamabad 44000,
Pakistan (e-mail: umerabdullah30@ee.ceme.edu.pk).
N. ur Rehman is with the Department of Electrical Engineering, COM-
SATS Institute of Information Technology, Islamabad 44000, Pakistan (e-mail:
naveed.rehman@comsats.edu.pk).
M. M. Khan is with the School of Electrical Engineering and Computer
Science, National University of Sciences and Technology, Islamabad 46000,
Pakistan (e-mail: muhammad.murtaza@seecs.edu.pk).
D. P. Mandic is with the Department of Electrical and Electronic Engineer-
ing, Imperial College London, London SW7 2AZ, U.K. (e-mail: d.mandic@
imperial.ac.uk).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TGRS.2015.2388497
into: 1) component substitution (CS) methods; 2) restoration-
based methods; and 3) multiresolution analysis (MRA)
techniques.
CS [4] is a class of computationally inexpensive techniques
that yield pansharpened images that are spatially sharp but may
suffer from spectral distortions. Typically, the CS-based ap-
proaches involve the following steps: upsampling, transforma-
tion, intensity matching, CS, and inverse transformation. The
most popular among this class is the intensity-hue-saturation
(IHS) technique [5] in which the intensity component I gener-
ated from MS images is replaced with a high-spatial-resolution
PAN image. Although quite simple to implement, it generally
causes color (spectral) distortion in the output image as the
local properties of I and the PAN image differ, even when I
is extracted in an adaptive manner [6]. Principal-component-
analysis-based fusion [7] operates by decorrelating the channels
of the input MS image and replacing the resulting channel
exhibiting the highest variance with the PAN image. A slightly
different approach adopts Gram–Schmidt (GS) orthogonaliza-
tion of the MS and I images for fusion purposes [8].
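The component-substitution pipeline outlined above can be illustrated in its popular fast additive IHS form, in which the intensity is taken as the mean of the MS bands and the matched PAN detail is added back to every band. This is a generic sketch, not the exact procedure of [5], and all function and variable names are ours:

```python
import numpy as np

def ihs_pansharpen(ms_up, pan):
    """Fast additive IHS-style component substitution (illustrative sketch).

    ms_up : (H, W, B) float array, MS image already upsampled to PAN size.
    pan   : (H, W) float array, panchromatic image.
    """
    # Intensity component: mean of the (upsampled) MS bands.
    intensity = ms_up.mean(axis=2)
    # Match the PAN histogram to the intensity (mean/variance matching).
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-12) \
        * intensity.std() + intensity.mean()
    # Substitution in additive form: inject the PAN-minus-intensity
    # difference into every MS band.
    detail = pan_matched - intensity
    return ms_up + detail[..., None]
```

By construction, the mean of the fused bands equals the matched PAN image, which is exactly the substitution step; the spectral distortion discussed above arises when the local properties of the intensity and the PAN image differ.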
Restoration-based pansharpening methods have been re-
cently proposed in which a high-resolution MS image is re-
stored by exploiting the linear relationship between the PAN
and the ideal MS bands [9]. More recently, restoration methods
exploiting sparse representation of images have been proposed
to address the problem of pansharpening: Li and Yang first
adopted a compressed sensing technique for this purpose [10].
An improvement over that method was proposed in [11], in
which a joint dictionary of oversampled low-resolution MS
and high-resolution PAN images was constructed, enabling
the proposed method to be used for real-world data. Both the
above methods, however, require a large collection of MS and
PAN images for their operation. To overcome that problem,
SparseFI [12] explores sparse representation of MS image areas in a dictionary trained only from the PAN images at hand, thus allowing its application to a broader class of input signals.
In addition, a two-step sparse-coding method was also proposed
for pansharpening, which uses a patch normalization strategy to
retain spectral information [13].
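The sparse-representation idea underlying these restoration methods, expressing a signal (e.g., a vectorized MS patch) as a combination of a few dictionary atoms, can be sketched with a generic orthogonal matching pursuit. This is not the algorithm of [10]–[13], only a minimal illustration of sparse coding over a dictionary; all names are ours:

```python
import numpy as np

def omp(D, y, n_nonzero=3):
    """Orthogonal matching pursuit: greedy sparse coding of y over D.

    D : (m, n) dictionary with unit-norm columns (atoms).
    y : (m,) signal, e.g., a vectorized image patch.
    Returns an n-vector x with at most n_nonzero nonzeros, y ≈ D @ x.
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Select the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

In a pansharpening setting, the dictionary would be trained from PAN (and possibly MS) patches, and the sparse code of a low-resolution patch would be reused to reconstruct its high-resolution counterpart.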
The MRA methods operate by decomposing the data in terms
of their frequency components, which are then intelligently
combined to obtain the final image via the multiscale fusion
procedure. In the pansharpening application, MRA methods
are typically based on the ARSIS concept [2], assuming that
the missing spatial information in the low-resolution MS image
can be obtained from the corresponding high-resolution PAN
image. Thus, MRA methods operate by separating the high-
frequency components of the PAN image and injecting them
into the MS image. Typical examples are the methods based
0196-2892 © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
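The ARSIS-style injection described above, separating the high-frequency components of the PAN image and adding them to the MS bands, can be sketched as follows. A simple box low-pass filter stands in for a proper multiresolution analysis here; it is our simplification for illustration, not a method from [2]:

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box low-pass filter (stand-in for an MRA analysis step).

    img : (H, W) float array; k : odd window size.
    """
    kernel = np.ones(k) / k
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Filter rows, then columns ("valid" keeps the original size).
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def mra_inject(ms_up, pan, k=5):
    """Inject high-frequency PAN components into every MS band."""
    detail = pan - box_blur(pan, k)   # high-frequency part of PAN
    return ms_up + detail[..., None]  # additive injection into each band
```

A smooth (low-frequency) PAN image contributes no detail, so the MS image passes through unchanged, while edges in the PAN image are transferred into every band, which is the assumption behind the ARSIS concept.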