IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 45, NO. 8, AUGUST 1997

Fast LMS/Newton Algorithms Based on Autoregressive Modeling and Their Application to Acoustic Echo Cancellation

B. Farhang-Boroujeny, Senior Member, IEEE

Abstract—In this paper, we propose two new implementations of the LMS/Newton algorithm for efficient realization of long adaptive filters. We assume that the input sequence to the adaptive filter can be modeled as an autoregressive (AR) process whose order may be kept much lower than the adaptive filter length. The two algorithms differ in their structural complexity. The first algorithm, which is an exact implementation of the LMS/Newton algorithm when the AR modeling assumption is accurate, is structurally complicated and fits best into a digital signal processing (DSP)-based implementation. The second algorithm, on the other hand, is structurally simple and is tailored more toward very large-scale integration (VLSI) custom chip design. Analyses of the proposed algorithms are given. It is found that for long filters, both algorithms perform about the same; for short filters, however, a noticeable difference between the two may be observed. Simulation results that confirm our theoretical findings are given. Moreover, experiments with speech signals for modeling the acoustics of an office room show the superior convergence of the proposed algorithms when compared with the normalized LMS algorithm.

I. INTRODUCTION

THE least mean square (LMS) algorithm and the least squares (LS) scheme are two different methods for the implementation of adaptive filters [1]–[3]. The conventional LMS algorithm has the distinct advantages of simplicity and robustness to numerical error. However, its convergence performance degrades significantly when the input process to the adaptive filter is highly colored.
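The sensitivity of the conventional LMS algorithm to input coloration can be seen in a short numerical sketch. The following is an illustration only, not taken from the paper: the plant, filter length, step size, and AR(1) coloring coefficient are all hypothetical choices, and the system-modeling setup of Fig. 1 is simulated noise-free for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def lms(x, d, num_taps, mu):
    """Conventional LMS: w <- w + 2*mu*e(n)*x(n)."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1 : n + 1][::-1]  # tap-input vector [x(n), ..., x(n-N+1)]
        e = d[n] - w @ xn                       # error at the filter output
        w += 2 * mu * e * xn                    # stochastic-gradient update
    return w

# Unknown plant (hypothetical 8-tap FIR), identified as in Fig. 1, noise-free.
plant = rng.standard_normal(8)
N = 5000
white = rng.standard_normal(N)

# Colored input: the same white noise passed through a one-pole AR(1) filter.
colored = np.zeros(N)
for n in range(1, N):
    colored[n] = 0.95 * colored[n - 1] + white[n]

misalignment = {}
for name, x in [("white", white), ("colored", colored)]:
    d = np.convolve(x, plant)[:N]               # desired signal = plant output
    w = lms(x, d, num_taps=8, mu=0.001)
    misalignment[name] = np.linalg.norm(w - plant)
    print(name, "misalignment:", misalignment[name])
```

With the same step size and data length, the highly colored input leaves a visibly larger residual misalignment, reflecting the eigenvalue spread of its correlation matrix.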
On the other hand, the LS-based algorithms exhibit much better convergence but are complex to implement and are very sensitive to the accumulation of numerical errors. To improve the convergence of the LMS algorithm, several variants of it have been proposed [3]–[6]. The LMS/Newton algorithm is one of these variants; for real-valued data, it is implemented according to the recursive equation

    $\mathbf{w}(n+1) = \mathbf{w}(n) + 2\mu\,\hat{\mathbf{R}}^{-1}\mathbf{x}(n)\,e(n)$    (1)

where $\mathbf{w}(n)$ is the filter tap-weight vector, the superscript $T$ denotes matrix or vector transpose, $\mathbf{x}(n)$ is the filter input vector, $\hat{\mathbf{R}}$ is an estimate of the input correlation matrix $\mathbf{R} = E[\mathbf{x}(n)\mathbf{x}^T(n)]$, $E[\cdot]$ denotes statistical expectation, $\mu$ is the algorithm step size, $e(n) = d(n) - y(n)$ is the measured error at the filter output, $d(n)$ is the desired output, and $y(n) = \mathbf{w}^T(n)\mathbf{x}(n)$ is the filter output.

Fig. 1. System modeling application of adaptive filters.

Fig. 1 depicts an adaptive filter used to estimate the model of a plant. Note that the plant output is contaminated by additive noise. This is the model we use for the acoustic echo cancellation problem, which will be addressed later as a potential application of the algorithms proposed in this paper.

The ideal LMS/Newton algorithm is an artificial version of (1) in which $\mathbf{R}$ is assumed to be known. Although impractical, it is a useful algorithm: it can be analyzed, and the result of such analysis gives a good prediction of the expected performance of the LMS/Newton algorithm and its quasi versions [9], [10]. In this paper, we propose two new algorithms for the effective implementation of the LMS/Newton algorithm for long adaptive filters.

Manuscript received February 23, 1996; revised February 4, 1997. The associate editor coordinating the review of this paper and approving it for publication was Dr. Stephen M. McLaughlin. The author is with the Department of Electrical Engineering, National University of Singapore, Singapore (e-mail: elefarhg@leonis.nus.sg). Publisher Item Identifier S 1053-587X(97)05782-6.
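The ideal LMS/Newton recursion (1) can be sketched directly. The following is an illustrative simulation, not the paper's proposed algorithm: it assumes an AR(1) input whose exact correlation matrix $\mathbf{R}$ is known in closed form (the impractical "ideal" case), and the plant, filter length, and step size are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(1)
num_taps = 8
mu = 0.001
N = 5000

# AR(1) input x(n) = a*x(n-1) + v(n) with unit-variance white driving noise v(n).
a = 0.95
x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + rng.standard_normal()

# Exact correlation matrix of this AR(1) process: r(k) = a^|k| / (1 - a^2).
# The "ideal" LMS/Newton algorithm is allowed to know R (and hence R^{-1}).
k = np.arange(num_taps)
R = (a ** np.abs(k[:, None] - k[None, :])) / (1 - a * a)
Rinv = np.linalg.inv(R)

# Unknown plant (hypothetical 8-tap FIR), system-modeling setup of Fig. 1, noise-free.
plant = rng.standard_normal(num_taps)
d = np.convolve(x, plant)[:N]

w = np.zeros(num_taps)
for n in range(num_taps - 1, N):
    xn = x[n - num_taps + 1 : n + 1][::-1]   # tap-input vector
    e = d[n] - w @ xn                        # output error e(n)
    w += 2 * mu * e * (Rinv @ xn)            # Newton direction: R^{-1} x(n), per (1)

print("LMS/Newton misalignment:", np.linalg.norm(w - plant))
```

Premultiplying the gradient by $\mathbf{R}^{-1}$ equalizes the convergence modes, so the colored input no longer slows adaptation the way it does for plain LMS.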
An important application of the proposed algorithms is acoustic echo cancellation, where adaptive filters with over 1000 taps are usually needed. In the proposed algorithms, to deal with the computationally demanding term $\hat{\mathbf{R}}^{-1}\mathbf{x}(n)$ in (1), the input sequence is modeled as an autoregressive (AR) process whose order is much smaller than the filter length. As a result, the computational complexity of the proposed algorithms remains comparable to that of the conventional LMS algorithm (i.e., on the order of the filter length in multiplications and additions per iteration), plus a negligible overhead for updating the vector $\hat{\mathbf{R}}^{-1}\mathbf{x}(n)$. A predecessor to the present work, which motivated our study, is the work of Moustakides and Theodoridis [7], where