Transient Analysis of the Euclidean Direction
Search (EDS) Algorithm
Zhongkai Zhang, Tamal Bose and Jacob Gunther
Center for High-speed Information Processing (CHIP)
Electrical and Computer Engineering
Utah State University, Logan, UT 84322–4120
Email: zhongkai@cc.usu.edu, tbose@ece.usu.edu, and jake@ece.usu.edu
Abstract—In this paper, a transient analysis is performed for
a least squares based adaptive algorithm, the Euclidean Direction
Search (EDS) algorithm. The transient analysis is characterized by
derivations of the energy conservation relation and the learning
curve equation. The learning curve equation is particularly
important because it describes the learning mechanism of the
algorithm without an explicit recursion for the weight vector.
I. INTRODUCTION
Adaptive filters have been successfully applied to diverse
fields including digital communications, speech recognition,
control systems, radar, sonar, seismology and biomedical en-
gineering. A wide variety of adaptive algorithms have been
developed in the literature. Some popular algorithms and their
properties can be found in [1]– [7]. The Least Mean Square
(LMS) algorithm is still very popular due to its simplicity in
computation and implementation. The computational complexity
of the LMS is O(N) multiplications. However, it is well
known that LMS-type algorithms only minimize the estimation
error on average. A step size parameter may be used to
trade off between the convergence rate and steady-state error.
Recursive Least Squares (RLS) algorithms have a computational
complexity of O(N^2) and a significantly faster convergence
rate than LMS. In addition, the RLS algorithm has zero
excess mean square error (MSE). Due to its fast convergence
rate and zero excess MSE, the RLS algorithm is still used
as a benchmark for other adaptive algorithms. Some other
algorithms based on least squares have also been developed
[15]-[16]. The Conjugate Gradient (CG) algorithm [12]-[14]
is based on updating the tap weights with new directions that
are conjugate to each other. It is useful for some applications
because of its appealing convergence performance, but the
computational complexity is O(N^2). Recently, another least
squares based algorithm called the Euclidean Direction Search
(EDS) algorithm has been derived [8]-[10]. It is an effective
approach that combines the advantages of mean square based
and least squares based algorithms. Its fast version has a
computational complexity of O(N) multiplications and a
convergence rate comparable to that of the RLS.
In recent years, there has been some work on the transient
analysis of adaptive filters [2]-[5]. In [2], a unified energy-
based approach for the transient and steady state analysis
of adaptive filters was developed and applied to the LMS
algorithm and its normalized version for Gaussian regressors.
In [3], a transient analysis of the LMS algorithm is carried out under
more general conditions, where the error nonlinearity function
is not fixed. In [7], a general transient analysis for RLS is also
derived.
In this paper we develop some fundamental results on the
transient analysis of the EDS algorithm. In particular, we
derive the energy conservation relationship and the learning
curve equation. The paper is organized as follows. In Section
II, a brief background and a new update equation are given
for the Euclidean Direction Search algorithm. In Section III,
we use weight estimation errors and weighted norms to derive
the energy conservation relationship and the learning curve
equation for the EDS algorithm. Section IV concludes the paper.
A. Notation
Small boldface letters are used to denote column vectors,
e.g., $\mathbf{w}$. The superscript $T$ denotes transposition. The notation
$\|\mathbf{w}\|^2$ represents the squared norm of a vector $\mathbf{w}$, i.e.,
$\|\mathbf{w}\|^2 = \mathbf{w}^T \mathbf{w}$. The notation $\|\mathbf{w}\|^2_{\Sigma}$
represents the weighted squared norm $\|\mathbf{w}\|^2_{\Sigma} = \mathbf{w}^T \Sigma \mathbf{w}$,
where $\Sigma$ is a symmetric positive semi-definite matrix.
The index $n$ always denotes the iteration time. The abbreviations
l.h.s. and r.h.s. denote the left-hand side and right-hand side
of an equation, respectively. We focus on real-valued data,
but it is straightforward to extend the results to complex-valued
data.
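As a concrete illustration of these norms, the following minimal sketch computes $\|\mathbf{w}\|^2$ and $\|\mathbf{w}\|^2_{\Sigma}$ for arbitrary example values; the matrix $A$ below is not from the paper and is used only to construct a symmetric positive semi-definite $\Sigma$:

```python
import numpy as np

w = np.array([1.0, -2.0, 3.0])

# Construct a symmetric positive semi-definite Sigma as A^T A.
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])
Sigma = A.T @ A

sq_norm = w @ w           # ||w||^2 = w^T w
weighted = w @ Sigma @ w  # ||w||^2_Sigma = w^T Sigma w

print(sq_norm)   # → 14.0
print(weighted)  # → 12.5, which equals ||A w||^2 since Sigma = A^T A
```

Because $\Sigma = A^T A$, the weighted squared norm is always nonnegative, matching the positive semi-definiteness requirement.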
II. ANALYSIS OF EDS ALGORITHM
The EDS (Euclidean Direction Search) algorithm is a rel-
atively new least squares based algorithm. It was originally
derived in order to combine the benefits of fast convergence
of RLS and the low computational complexity of LMS [8],
[9], [10]. The fast version of the EDS algorithm is described
in [11] and is called the fast EDS or FEDS. In this paper,
only the original EDS algorithm is considered. The main
ideas of EDS are briefly summarized for background. The
exponentially weighted least squares cost function is
$$J_n(\mathbf{w}) \triangleq \sum_{i=1}^{n} \lambda^{n-i} e^2(i),$$
where $e(i) = d(i) - \mathbf{w}^T(n)\,\mathbf{x}(i)$. Expanding
out the sum shows that the cost function is quadratic,
$$J_n(\mathbf{w}) = \mathbf{w}^T Q(n)\,\mathbf{w} - 2\,\mathbf{w}^T r(n) + \sigma_d^2(n), \qquad (1)$$
where the explicit dependence of $\mathbf{w}$ on time $n$ has been
dropped and with the following definitions,
1554 0-7803-8622-1/04/$20.00 ©2004 IEEE
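The quadratic form in (1) can be checked numerically. The sketch below assumes the standard exponentially weighted definitions $Q(n) = \sum_{i=1}^{n} \lambda^{n-i} \mathbf{x}(i)\mathbf{x}^T(i)$, $r(n) = \sum_{i=1}^{n} \lambda^{n-i} d(i)\mathbf{x}(i)$, and $\sigma_d^2(n) = \sum_{i=1}^{n} \lambda^{n-i} d^2(i)$; these exact definitions are an assumption here, since the paper's own definitions continue beyond this excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, lam = 4, 50, 0.98

x = rng.standard_normal((n, N))  # regressor vectors x(i), i = 1..n
d = rng.standard_normal(n)       # desired samples d(i)
w = rng.standard_normal(N)       # a fixed weight vector

# Direct cost: J_n(w) = sum_i lam^{n-i} e^2(i), with e(i) = d(i) - w^T x(i)
gains = lam ** (n - np.arange(1, n + 1))
e = d - x @ w
J_direct = np.sum(gains * e**2)

# Quadratic form of (1), using the assumed definitions of Q, r, sigma_d^2
Q = (x * gains[:, None]).T @ x        # Q(n) = sum_i lam^{n-i} x(i) x^T(i)
r = (x * gains[:, None]).T @ d        # r(n) = sum_i lam^{n-i} d(i) x(i)
sigma_d2 = np.sum(gains * d**2)       # sigma_d^2(n) = sum_i lam^{n-i} d^2(i)
J_quad = w @ Q @ w - 2 * (w @ r) + sigma_d2

assert np.isclose(J_direct, J_quad)  # the two forms of J_n(w) agree
```

The agreement of the two evaluations illustrates why minimizing $J_n(\mathbf{w})$ reduces to minimizing a quadratic in $\mathbf{w}$, which is the property the EDS search directions exploit.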