This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
Deep Feature Alignment Neural Networks for
Domain Adaptation of Hyperspectral Data
Xiong Zhou, Student Member, IEEE, and Saurabh Prasad, Senior Member, IEEE
Abstract—Deep neural networks have been shown to be useful
for the classification of hyperspectral images, particularly when
a large amount of labeled data is available. However, we may not
have enough reference data to train a deep neural network for
many practical geospatial image analysis applications. To address
this issue, in this paper, we propose a deep feature alignment
neural network to carry out domain adaptation, wherein labeled
data from a supplementary data source are utilized to improve
classification performance in a domain where only limited labeled
data are otherwise available. In the proposed model, discriminative
features for the source and target domains are first extracted using
deep convolutional recurrent neural networks and then aligned with
each other layer by layer by mapping the features at each layer
to a transformed common subspace. Experimental results are presented
with two data sets. One represents domain adaptation between
images acquired at different times, while the other poses a
particularly unique and challenging domain adaptation problem,
in which the source and target images are acquired by different
hyperspectral imagers from different viewpoints and platforms
(a ground-based, forward-looking street view of objects acquired
at close range and an aerial hyperspectral image). We demonstrate
that the proposed deep learning framework enables the robust
classification of the target domain data by leveraging information
from the source domain.
Index Terms— Classification, deep neural network, domain
adaptation, hyperspectral, transformation learning.
I. INTRODUCTION
WITH the rapid increase in the amount of data and computing
power, deep learning [1] has achieved remarkable success
in various machine learning tasks, such as image
classification, natural language processing, and speech recognition.
Convolutional neural networks (CNNs) have been used extensively
for image-related applications [2]–[4] because of their ability
to extract localized and informative features. Recurrent neural
networks (RNNs), on the other hand, are capable of learning
temporal patterns and building effective models of sequential
data [5]–[7]. As a combination of CNN
and RNN, the convolutional RNN (CRNN) [8] takes advantage
of CNN for extracting localized and discriminative features
and of RNN for learning contextual information within the data.
Hence, CRNN has gained increasing attention in classification [9]
and recognition [10] tasks.

Manuscript received May 5, 2017; revised October 15, 2017 and
February 4, 2018; accepted February 27, 2018. This work was supported by
the NASA New Investigator (Early Career) Award under Grant NNX14AI47G.
(Corresponding author: Saurabh Prasad.)
The authors are with the Hyperspectral Image Analysis Group, Department
of Electrical and Computer Engineering, University of Houston, Houston,
TX 77004 USA (e-mail: saurabh.prasad@ieee.org).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TGRS.2018.2827308
Recent advances in remote sensing technology have enabled
hyperspectral images to not only cover large areas in unprecedented
detail but also capture subtle differences in the spectral
signatures of various objects [11]. With such rich spatial
and spectral information, hyperspectral image classification
has attracted considerable interest from the remote sensing
community [12]. In recent years, many variations of deep neural
networks have been proposed for hyperspectral classification,
including stacked autoencoder [13], deep belief network [14],
1-D/2-D CNN [15]–[17], and CRNN [18]. However, training
supervised deep neural networks, such as CNN and CRNN,
requires a large amount of labeled data, which becomes one
of the main obstacles in deep learning for hyperspectral image
classification. To address this problem, one option is to reduce
the labeling cost by intelligently selecting samples for labeling;
another is to employ unlabeled data. In addition, domain
adaptation techniques offer an alternative that enables the use
of labeled data from a supplementary data source.
To accomplish that, domain adaptation approaches transfer
knowledge from the source domain to the target domain
through either creating domain-invariant features or adjusting
the classification model.
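As an illustration of the feature-route described above, correlation alignment (CORAL) is one simple way to create domain-invariant features: a linear map matches the second-order statistics of the two domains. The sketch below (in NumPy, on randomly generated stand-in features; this is a generic illustration, not the method proposed in this paper) whitens the source features and then re-colors them with the target covariance.

```python
import numpy as np

def coral(source, target, eps=1e-6):
    """Correlation alignment: whiten the source features, then
    re-color them with the target covariance, so that both domains
    share second-order statistics."""
    # Covariance matrices (regularized for numerical stability)
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    # Inverse square root of the source covariance (whitening)
    vals, vecs = np.linalg.eigh(cs)
    cs_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    # Square root of the target covariance (re-coloring)
    vals, vecs = np.linalg.eigh(ct)
    ct_sqrt = vecs @ np.diag(vals ** 0.5) @ vecs.T
    return source @ cs_inv_sqrt @ ct_sqrt

rng = np.random.default_rng(0)
src = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # source features
tgt = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # target features
aligned = coral(src, tgt)
# The residual covariance mismatch after alignment is near zero.
print(np.linalg.norm(np.cov(aligned, rowvar=False) - np.cov(tgt, rowvar=False)))
```

A classifier trained on the aligned source features then transfers more readily to the target domain, since the domain shift in second-order statistics has been removed.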
Over the past few years, many domain adaptation
approaches have been proposed for hyperspectral image
classification [19]. In [20], a variation of support vector
machine (SVM) was proposed for domain adaptation, where
the classification model was adjusted toward the target domain
by gradually replacing the source training data with the target
training data. In [21], an active learning procedure was used to
select representative samples from the target domain such that
a reliable classifier can be trained for the target data. Unlike the
above-mentioned model-adjusting methods, [22] proposed a
semisupervised transfer component analysis (SSTCA) method
that minimizes the domain differences in a reproducing
kernel Hilbert space. Similarly, [23] introduced a method
that reduces the domain-induced changes by learning class-
dependent transformations. References [24]–[26] achieved
domain adaptation by aligning the manifolds of the source
and target data. However, these existing works focus mainly
on traditional, non-deep-learning methods. The only exception
is [27], where a stacked denoising autoencoder was used to
generate domain-invariant features.
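To give a flavor of the autoencoder route, the sketch below trains a one-layer denoising autoencoder in NumPy on the union of source and target samples, so the learned encoder acts as a shared feature extractor for both domains. The data, layer size, and hyperparameters here are illustrative stand-ins, not those of [27].

```python
import numpy as np

def train_dae(x, hidden=8, noise=0.3, lr=0.1, epochs=200, seed=0):
    """Train a one-layer denoising autoencoder: corrupt the input with
    masking noise, encode with a sigmoid layer, and reconstruct the
    clean input. Weights are tied (decoder = encoder transpose)."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    w = rng.normal(scale=0.1, size=(d, hidden))
    b, c = np.zeros(hidden), np.zeros(d)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        x_noisy = x * (rng.random(x.shape) > noise)  # masking noise
        h = sigmoid(x_noisy @ w + b)                 # encoder
        x_hat = h @ w.T + c                          # tied-weight decoder
        err = x_hat - x                              # reconstruction error
        # Backpropagation for the squared reconstruction loss
        grad_h = err @ w * h * (1 - h)
        w -= lr / n * (x_noisy.T @ grad_h + err.T @ h)
        b -= lr / n * grad_h.sum(axis=0)
        c -= lr / n * err.sum(axis=0)
    return lambda z: sigmoid(z @ w + b)              # shared encoder

rng = np.random.default_rng(1)
source = rng.random((200, 16))  # stand-ins for source/target spectra
target = rng.random((200, 16))
# Training on the union of both domains encourages features that
# reconstruct well in either domain, i.e., domain-invariant features.
encode = train_dae(np.vstack([source, target]))
features = encode(source)
print(features.shape)  # → (200, 8)
```

Because the same encoder is applied to both domains, a classifier trained on the encoded source samples can be applied directly to the encoded target samples.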
0196-2892 © 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.