ACCURACY EVALUATION OF FIXED-POINT APA ALGORITHM

Romuald ROCHER, Daniel MENARD, Olivier SENTIEYS, Pascal SCALART
ENSSAT/IRISA, University of Rennes I
6 rue de Kérampont, BP447, 22300 Lannion
name@enssat.fr

ABSTRACT

The implementation of adaptive filters with fixed-point arithmetic requires evaluating the computation quality. The accuracy can be determined by computing the global quantization noise power at the system output. In this paper, a new model for analytically evaluating the global noise power in the APA algorithm is developed. The model is presented and applied to the NLMS-OCF algorithm. The accuracy of our model is analyzed through experiments.

1. INTRODUCTION

The aim of adaptive filters is to estimate a sequence of scalars from an observation sequence filtered by a system whose coefficients vary. These coefficients converge towards the optimum coefficients, which minimize the mean square error (MSE) between the filtered observation signal and the desired sequence. This type of filter is used in different fields such as noise cancellation, equalization, linear prediction and channel estimation. Adaptive filtering algorithms are mainly classified into two types: Recursive Least Squares (RLS) and Least Mean Squares (LMS). The LMS algorithm is the most commonly used in embedded real-time applications because its implementation is simpler than that of the RLS algorithm. However, the Affine Projection Algorithms (APA) have been developed recently [3] to achieve faster convergence than the LMS and lower complexity than the RLS. The convergence behavior of this algorithm has been studied in [4] and [5], but no study of its fixed-point implementation is available. For embedded systems, fixed-point arithmetic is required because it is less expensive in terms of cost and power consumption than floating-point arithmetic. However, fixed-point processing introduces an error called quantization noise.
These different quantization noise sources propagate through the system and lead to an output quantization noise. The power of this quantization noise is determined in order to compute the signal-to-quantization-noise ratio (SQNR). Knowing the analytical expression of the SQNR allows the fixed-point specification of the system to be determined for a given minimal SQNR value. Several models have been proposed for the LMS algorithm, as in [6], but no model has been proposed for the APA algorithm.

Thus, the aim of this paper is to find an analytical expression of the noise power in the APA algorithm for all types of quantization (rounding, convergent rounding and truncation). In convergent rounding, the mean of a noise source is equal to zero, which is not valid for quantization by rounding or truncation [1]. In Section 2, the fixed-point APA algorithm is described, and its output is analytically determined in Section 3. The developed model is applied to the NLMS with Orthogonal Correction Factors algorithm (NLMS-OCF) in Section 4. Finally, in Section 5, the accuracy of the model is evaluated by simulations.

2. FIXED-POINT IMPLEMENTATION

The infinite-precision APA algorithm can be described as follows:

e_n = y_n - X_n^t w_n \quad (1)
w_{n+1} = w_n + \mu X_n [X_n^t X_n + \delta I_K]^{-1} e_n \quad (2)

where x_n represents the N-size input data vector [x(n), x(n-1), \ldots, x(n-N+1)]^t. Let X_n denote the matrix of the K last observation vectors, X_n = [x_n, x_{n-1}, \ldots, x_{n-K+1}]. Thus X_n is an N x K matrix. y_n and e_n are K-tap vectors, \delta is a constant used to regularize the matrix X_n^t X_n, and I_K is the K-size identity matrix. The fixed-point model of the APA algorithm is represented in Figure 1. The noise terms must be introduced. The regularization term \delta is assumed to be a sum of powers of 2. The equations of the APA algorithm become:

e'_n = y'_n - X_n'^t w'_n - \eta_n \quad (3)
w'_{n+1} = w'_n + \mu X'_n [X_n'^t X'_n + \delta I_K]^{-1} e'_n + \gamma_n \quad (4)

where the prime refers to quantized data.
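As a concrete sketch of the infinite-precision recursion (1)-(2), a minimal NumPy implementation is given below. The dimensions N and K, the step size µ and the regularization value δ are illustrative assumptions for a synthetic system-identification run, not values taken from the paper.

```python
import numpy as np

def apa_step(w, X, y, mu=0.5, delta=1e-4):
    """One infinite-precision APA iteration, eqs. (1)-(2).

    w : (N,)   current filter coefficients w_n
    X : (N, K) matrix of the K last observation vectors X_n
    y : (K,)   desired-signal vector y_n
    """
    K = X.shape[1]
    e = y - X.T @ w                       # eq. (1): a priori error vector
    # eq. (2): update along the regularized projection direction
    w_next = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)
    return w_next, e

# Usage sketch: identify a random system from noiseless data.
rng = np.random.default_rng(0)
N, K = 8, 3                               # illustrative sizes
w_true = rng.standard_normal(N)           # unknown system to identify
w = np.zeros(N)
for _ in range(200):
    X = rng.standard_normal((N, K))       # K last observation vectors
    y = X.T @ w_true                      # noiseless desired signal
    w, e = apa_step(w, X, y)
# After enough iterations, w has converged to w_true.
```

On noiseless data the coefficients converge to the optimum, which is the floating-point baseline against which the quantization noise model of the following sections is measured.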
\gamma_n is an N-size white-noise vector due to the computation of X'_n [X_n'^t X'_n + \delta I_K]^{-1} e'_n, and is the sum of K multiplication noises. The fixed-point APA is described by the following set of equations:

X'_n = X_n + \alpha_n \quad (5)
y'_n = y_n + \beta_n \quad (6)
[X_n'^t X'_n + \delta I_K]^{-1} = [X_n^t X_n + \delta I_K]^{-1} + \nu_n \quad (7)
w'_n = w_n + \rho_n \quad (8)

with \alpha_n an N x K matrix, \beta_n a K-size vector and \rho_n an N-size vector. Moreover, \nu_n is a K x K matrix corresponding to the difference between [X_n'^t X'_n + \delta I_K]^{-1} and [X_n^t X_n + \delta I_K]^{-1}. As demonstrated in [2], \nu_n is equal to

0-7803-8874-7/05/$20.00 ©2005 IEEE — ICASSP 2005
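The statistical distinction drawn in the introduction between the quantization modes (truncation and rounding produce a biased noise, convergent rounding a zero-mean noise) can be checked numerically with a small sketch. The fractional word lengths and the grid of test values below are illustrative assumptions.

```python
import numpy as np

def quantize(x, frac_bits, mode):
    """Uniform quantizer with step q = 2**-frac_bits.

    mode: 'trunc'      -> truncation (floor): noise mean close to -q/2
          'round'      -> round half up: small positive noise mean
          'convergent' -> round half to even: zero-mean noise
    """
    q = 2.0 ** -frac_bits
    s = x / q
    if mode == "trunc":
        return np.floor(s) * q
    if mode == "round":
        return np.floor(s + 0.5) * q   # ties rounded up
    if mode == "convergent":
        return np.round(s) * q         # NumPy rounds ties to even
    raise ValueError(mode)

# Quantize an 8-fractional-bit grid down to 4 fractional bits and
# measure the empirical mean of the quantization noise per mode.
x = np.arange(0.0, 1.0, 2.0 ** -8)
mean_trunc = float(np.mean(quantize(x, 4, "trunc") - x))
mean_round = float(np.mean(quantize(x, 4, "round") - x))
mean_conv = float(np.mean(quantize(x, 4, "convergent") - x))
```

On this grid the truncation noise mean is close to -q/2, the round-half-up mean is a small positive value (the tie cases all round upward), and the convergent-rounding mean is zero, consistent with the assumption used for the noise model.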