3626 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 54, NO. 9, SEPTEMBER 2006
Convergence Analysis of a Deterministic Discrete Time
System of Feng’s MCA Learning Algorithm
Dezhong Peng and Zhang Yi
Abstract—The convergence of minor-component analysis (MCA) algorithms is an important issue bearing on the use of these methods in practical applications. This correspondence studies the convergence
of Feng’s MCA learning algorithm via a corresponding deterministic
discrete-time (DDT) system. Some sufficient convergence conditions are
obtained for Feng’s MCA learning algorithm with constant learning rate.
Simulations are carried out to illustrate the theory.
Index Terms—Deterministic discrete-time (DDT) system, eigenvalue,
eigenvector, minor-component analysis (MCA), neural network.
I. INTRODUCTION
The minor component is the direction in which the data has the
smallest variance, contrary to the principal component, which is the
direction in which the data has the largest variance. Minor-component
analysis (MCA) is a statistical method for extracting minor compo-
nents. As an important tool for signal processing and data analysis,
MCA has been applied to total least squares (TLS) [1], [2], moving
target indication [3], clutter cancellation [4], computer vision [5], curve
and surface fitting [6], digital beamforming [7], frequency estimation
[8], [9], and bearing estimation [10], among others.
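As a concrete aside (a minimal sketch with synthetic data, not taken from the correspondence itself), the minor component of a data set can be read off directly from the eigendecomposition of its sample autocorrelation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic zero-mean data: large variance along [1, 0], small along [0, 1].
data = rng.normal(size=(1000, 2)) * np.array([3.0, 0.3])

# Sample autocorrelation (covariance) matrix of the zero-mean data.
R = data.T @ data / len(data)

# eigh returns eigenvalues in ascending order; the minor component is the
# eigenvector associated with the smallest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(R)
minor = eigvecs[:, 0]

print(np.abs(minor))  # close to [0, 1] for this data, up to sign
```

Neural MCA algorithms aim to reach the same direction adaptively, sample by sample, without forming and diagonalizing R explicitly.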
Many neural learning algorithms have been proposed to solve the MCA problem (e.g., see [6], [11]–[13]). All of these MCA learning algorithms are described by stochastic discrete-time (SDT) systems.
Manuscript received January 25, 2005; revised October 15, 2005. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. David J. Miller. This work was supported by the National Science Foundation of China under Grant 60471005. The authors are with the Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China (e-mail: pengdz@uestc.edu.cn; zhangyi@uestc.edu.cn; website: http://cilab.uestc.edu.cn/person/zhangyi/index.html). Digital Object Identifier 10.1109/TSP.2006.877662

It is very important to analyze the convergence of MCA learning algorithms. However, it is difficult to study the convergence of the
SDT system directly. To analyze the convergence of MCA learning algorithms indirectly, a traditional method is to transform an MCA algorithm into a corresponding deterministic continuous-time (DCT) system; the convergence of the MCA algorithm can then be interpreted by studying the convergence of the DCT system. The DCT method is based on a fundamental theorem of stochastic approximation theory [14]. To apply this fundamental theorem, some crucial conditions must be satisfied. One important condition is that the learning rate of the MCA algorithm must approach zero. However, this restrictive condition cannot be satisfied in many practical applications due to roundoff limitations and tracking requirements. Thus, from an application point of view, the DCT method is not well suited to studying the convergence of MCA algorithms.
Recently, the deterministic discrete-time (DDT) method has been
used to study Oja’s stochastic PCA learning algorithm [15], [16]. This
DDT method transforms Oja’s stochastic PCA learning algorithm into
a deterministic discrete-time system. It does not require the learning
rate to approach zero. DDT systems preserve the discrete time nature
of original SDT systems and can shed some light on the convergence
characteristics of SDT systems.
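As an illustration of the DDT idea (a sketch based on the standard form of Oja's rule, not code from [15], [16]): replacing the sample outer product x(k)x^T(k) in Oja's stochastic update by the autocorrelation matrix R = E[x(k)x^T(k)] yields the DDT system w(k+1) = w(k) + eta[Rw(k) - (w^T(k)Rw(k))w(k)], which can be iterated directly with a constant learning rate:

```python
import numpy as np

# Fixed autocorrelation matrix standing in for E[x x^T] (an assumed example).
R = np.array([[3.0, 1.0],
              [1.0, 2.0]])

eta = 0.05                   # constant learning rate, NOT driven to zero
w = np.array([1.0, 0.0])     # initial weight vector

# DDT system of Oja's PCA rule: w <- w + eta * (R w - (w^T R w) w).
for _ in range(2000):
    w = w + eta * (R @ w - (w @ R @ w) * w)

# For a small constant eta, w converges to a unit principal eigenvector of R.
eigvals, eigvecs = np.linalg.eigh(R)
principal = eigvecs[:, -1]
print(abs(w @ principal))    # close to 1
```

The same transformation applied to a stochastic MCA rule produces the DDT system whose convergence is the subject of this correspondence.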
The solution of the TLS problem has wide applications in such areas as economics, signal processing, and automatic control (see, for example, [17]–[19]). The n-dimensional TLS solution can be obtained by computing a singular value decomposition (SVD) [20], [21], generally requiring O(n^3) computational complexity, or by a modified recursive least squares (RLS) algorithm [22], which requires O(n^2) computational complexity. To solve the TLS problem online in adaptive finite-impulse-response (FIR) filtering, Davila [23] proposed a fast recursive total least-squares (RTLS) algorithm that is based on a gradient search for the generalized Rayleigh quotient along the Kalman gain vector and has O(n) computational complexity. Recently, a novel fast RTLS algorithm was proposed in [24] that relies on minimization of the constrained Rayleigh quotient and achieves performance closely consistent with that of Davila's algorithm.
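To make the SVD route concrete, here is a minimal sketch with synthetic data (not code from the cited works): the TLS solution of Ax ≈ b is obtained from the right singular vector of the augmented matrix [A | b] associated with its smallest singular value:

```python
import numpy as np

rng = np.random.default_rng(1)
# Overdetermined system A x ~= b with noise in both A and b (the TLS setting).
x_true = np.array([2.0, -1.0])
A = rng.normal(size=(200, 2))
b = A @ x_true
A_noisy = A + 0.01 * rng.normal(size=A.shape)
b_noisy = b + 0.01 * rng.normal(size=b.shape)

# Right singular vector of [A | b] for the smallest singular value; since
# [A | b] [x; -1] ~= 0, the TLS solution follows by scaling the last entry.
C = np.hstack([A_noisy, b_noisy[:, None]])
_, _, Vt = np.linalg.svd(C)      # singular values in descending order
v = Vt[-1]                       # last row: smallest singular value
x_tls = -v[:-1] / v[-1]

print(x_tls)                     # close to x_true
```

Adaptive MCA algorithms track this same minor singular/eigen direction recursively, avoiding the batch SVD.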
On the other hand, based on the minimum mean-square error, Feng et al. [25] proposed a total least-mean-squares (TLMS) algorithm to solve the TLS problem. This algorithm can be applied to adaptively extract the minor component of the autocorrelation matrix of the input signal. For convenience, we refer to this algorithm as Feng's MCA algorithm. The convergence of Feng's MCA algorithm is proven in [25] via the DCT method. As discussed above, the DCT method requires the learning rate to approach zero, which is not practical in many applications. In this correspondence, we study the convergence of Feng's MCA algorithm with a constant learning rate via the DDT method. Detailed mathematical proofs of the convergence will be given.
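Feng's exact TLMS update is given in [25] and is not reproduced here. Purely as an illustration of MCA-style adaptive extraction, the following sketch uses a simple normalized anti-Hebbian rule with a constant learning rate (an assumption of this sketch, not Feng's rule):

```python
import numpy as np

rng = np.random.default_rng(2)
# Zero-mean input stream whose autocorrelation matrix has its minor
# eigenvector along [0, 1] (standard deviations 2.0 vs 0.2 per axis).
scales = np.array([2.0, 0.2])

eta = 0.01                            # constant learning rate
w = np.array([1.0, 1.0]) / np.sqrt(2.0)

for _ in range(20000):
    x = scales * rng.normal(size=2)   # one input sample x(k)
    y = w @ x                         # neuron output y(k) = w^T x(k)
    w = w - eta * y * x               # anti-Hebbian step toward the minor direction
    w = w / np.linalg.norm(w)         # renormalize to keep the iteration stable

print(np.abs(w))                      # close to [0, 1], up to sign
```

With a constant eta, the weight vector hovers near the minor direction rather than converging exactly, which is precisely why convergence conditions for constant-learning-rate DDT systems are worth establishing.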
This correspondence is organized as follows. Some preliminaries are
presented in Section II. In Section III, an invariant set is derived. The
convergence analysis is given in Section IV. Simulations are carried
out in Section V. Finally, conclusions are drawn in Section VI.
II. PRELIMINARIES
Consider a single linear neuron with the following input-output relation:

y(k) = w^T(k) x(k)    (1)

where y(k) is the neuron output, the input sequence {x(k)} (x(k) ∈ R^n, k = 0, 1, 2, ...) is a zero-mean stationary stochastic process, and w(k) ∈ R^n (k = 0, 1, 2, ...) is the weight vector of the neuron. The target of MCA is to extract the minor component from the input data by
1053-587X/$20.00 © 2006 IEEE