IEEE TRANSACTIONS ON EDUCATION, VOL. 41, NO. 1, FEBRUARY 1998 81

Correspondence

An Easy Demonstration of the Optimum Value of the Adaptation Constant in the LMS Algorithm

Emilio Soria-Olivas, Javier Calpe-Maravilla, Juan F. Guerrero-Martinez, Marcelino Martinez-Sober, and José Espí-López

Abstract—Since the introduction of the LMS algorithm, many variants have been proposed to improve its performance. Doubtless, the most popular is the Normalized LMS, which uses a value of the adaptation constant that assures the fastest convergence. This correspondence shows a new demonstration of this property based on a mathematical approach simpler than the one usually proposed.

Index Terms—Error minimization, filters, LMS algorithm.

I. INTRODUCTION

The LMS is the most widely used algorithm among those proposed to adapt the coefficients of an FIR filter in order to minimize the mean-square error (MSE) between its output and the desired signal. This popularity is due to two reasons: its numerical robustness and its simple implementation in real-time systems. The equations that define the adaptation of the filter coefficients are [1]

e(n) = d(n) - \mathbf{w}^T(n)\,\mathbf{x}(n) \qquad (1)

\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\, e(n)\,\mathbf{x}(n) \qquad (2)

where e(n) is the error signal, d(n) the desired signal, \mathbf{x}(n) the input signal vector, and \mathbf{w}(n) the filter coefficients, defined as

\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-N+1)]^T, \qquad \mathbf{w}(n) = [w_0(n), w_1(n), \ldots, w_{N-1}(n)]^T

where T stands for transposition. The parameter \mu in (2) is called the adaptation constant. Many papers have been written about the effect of this constant on the stability, convergence, and coefficient degradation of the adaptive filter, and many modifications based on setting a particular value for \mu have been proposed, trying to improve some of the characteristics of the LMS performance, e.g., convergence, computational speed, or immunity to noisy environments [2].
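As an illustrative aside (not part of the original correspondence), update equations (1) and (2) can be sketched directly in code. The filter length, step size, and toy identification setup below are arbitrary choices for demonstration only:

```python
import numpy as np

def lms_step(w, x, d, mu):
    """One LMS iteration: error (1) and coefficient update (2)."""
    e = d - w @ x          # e(n) = d(n) - w^T(n) x(n)
    w = w + mu * e * x     # w(n+1) = w(n) + mu e(n) x(n)
    return w, e

# Toy system identification: adapt toward an unknown 4-tap filter.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.2, 0.1])   # "unknown" system
w = np.zeros(4)                            # initial coefficients
mu = 0.05                                  # fixed adaptation constant
for n in range(2000):
    x = rng.standard_normal(4)             # input vector x(n)
    d = w_true @ x                         # desired signal d(n)
    w, e = lms_step(w, x, d, mu)

print(np.round(w, 3))                      # w(n) has converged toward w_true
```

With a small fixed \mu the coefficients converge, but the speed depends on the input energy, which is what motivates the normalized variant discussed next.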
The Normalized LMS (NLMS) algorithm is a particularly interesting variation which fixes the adaptation constant as proportional to the inverse of the energy of the input signal, i.e.,

\mu(n) = \frac{\alpha}{\mathbf{x}^T(n)\,\mathbf{x}(n)}.

When \alpha is equal to 1, the NLMS presents the fastest velocity of convergence among all the values of the adaptation constant. Demonstrations of this property [1] are based on using Lagrange multipliers and, as a result, are quite complex and tedious for students, so a new approach for educational purposes is proposed.

Manuscript received August 1994; revised November 1997. The authors are with Grupo de Procesado Digital de Señales (G.P.D.S.), Facultad de Física, C/Doctor Moliner 50, E-46100 Burjassot (Valencia), Spain. Publisher Item Identifier S 0018-9359(98)01682-3.

II. THEORETICAL DEVELOPMENT

By performing an analysis similar to that of Mikhael and Wu in [3], we write e(n+1) as a Taylor expansion of e(n)

e(n+1) = e(n) + \sum_{k=0}^{N-1} \frac{\partial e(n)}{\partial w_k(n)}\, \Delta w_k(n) \qquad (3)

where only the first-order term is to be considered, as the higher order derivatives vanish due to the linearity of the error function. From (1) we obtain

\frac{\partial e(n)}{\partial w_k(n)} = -x(n-k). \qquad (4)

Moreover, from (2) we write

\Delta w_k(n) = w_k(n+1) - w_k(n) = \mu\, e(n)\, x(n-k) \qquad (5)

thus

e(n+1) = e(n) - \mu\, e(n) \sum_{k=0}^{N-1} x^2(n-k) = e(n)\left[1 - \mu\,\mathbf{x}^T(n)\,\mathbf{x}(n)\right]. \qquad (6)

As we want to minimize the square error, we consider

e^2(n+1) = e^2(n)\left[1 - \mu\,\mathbf{x}^T(n)\,\mathbf{x}(n)\right]^2 \qquad (7)

then we differentiate it with respect to \mu

\frac{\partial e^2(n+1)}{\partial \mu} = -2\, e^2(n)\,\mathbf{x}^T(n)\,\mathbf{x}(n)\left[1 - \mu\,\mathbf{x}^T(n)\,\mathbf{x}(n)\right] \qquad (8)

and set it equal to zero, obtaining

\mu = \frac{1}{\mathbf{x}^T(n)\,\mathbf{x}(n)}. \qquad (9)

So we have demonstrated that the optimum constant for the LMS, in the sense that it minimizes the MSE at the instant n+1, is the inverse of the energy of the input signal.

III. CONCLUSIONS

We have developed an easy procedure to show that the NLMS is an optimization of the basic LMS, as it minimizes the MSE of the adaptive system. The demonstration is based on a simple Taylor expansion and the application of the MSE criterion, thus avoiding more complex mathematical tools such as Lagrange multipliers.

REFERENCES

[1] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall, 1991.

[2] P. M. Clarkson, Optimal and Adaptive Signal Processing. Boca Raton, FL: CRC, 1993.

[3] W. B. Mikhael and H.
Wu, "Fast algorithms for block FIR adaptive digital filtering," IEEE Trans. Circuits Syst., vol. CAS-34, pp. 1152–1160, Oct. 1987.
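As a closing numerical aside (not part of the original correspondence), the single-step optimality derived in Section II can be checked directly: holding the input vector and desired sample fixed, (6) says the error after one update is e(n)[1 - \mu\,\mathbf{x}^T\mathbf{x}], so the choice \mu = 1/(\mathbf{x}^T\mathbf{x}) from (9) cancels it exactly, while any other \mu leaves a residual. The dimensions and random signals below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
x = rng.standard_normal(N)        # input vector x(n)
w = rng.standard_normal(N)        # current coefficients w(n)
d = rng.standard_normal()         # desired sample d(n)

e = d - w @ x                     # e(n), Eq. (1)
energy = x @ x                    # x^T(n) x(n), the input energy

def next_error(mu):
    """Error after one LMS update with step mu, holding x and d fixed."""
    w_next = w + mu * e * x       # Eq. (2)
    return d - w_next @ x         # equals e * (1 - mu * energy), Eq. (6)

mu_opt = 1.0 / energy             # Eq. (9)
print(next_error(mu_opt))         # ~0: the error is cancelled in one step
print(next_error(0.5 * mu_opt))   # nonzero residual for any other step size
```

This is the a posteriori error of the update; it confirms that, for the instantaneous squared error, no step size can do better than the inverse input energy.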