Least Squares Based Modification for Adaptive Control

Girish Chowdhary and Eric Johnson

Abstract— A least squares modification is presented for adaptive control problems where the uncertainty can be linearly parameterized. The modified weight training law uses an estimate of the ideal weights formed online by solving a least squares problem using recorded and current data concurrently. The modified adaptive law guarantees exponential convergence of the adaptive weights to their ideal values subject to a verifiable condition on the linear independence of the recorded data. This condition is found to be less restrictive and easier to monitor than a condition on persistency of excitation of the reference signal.

I. INTRODUCTION

In plants where the uncertainty can be linearly parameterized, the standard approach to the design of Model Reference Adaptive Controllers (MRAC) is to use a weight training law that attempts to estimate the ideal weights in order to cancel the uncertainty. If the weights converge to their ideal values, the linear part of the MRAC tracking error dynamics dominates, greatly improving performance and possibly allowing the use of linear metrics to characterize the adaptive controller. Most adaptive control methods are designed using a Lyapunov based approach and can be thought of as attempting to minimize a quadratic cost function using a gradient descent type method. Gradient descent type methods, however, are susceptible to local adaptation and weight drift. Furthermore, it can be shown that adaptive laws formulated using the gradient descent methodology are always at most rank 1 [5]. Boyd and Sastry have shown that, in order to guarantee weight convergence for these adaptive laws, the exogenous reference input must have at least as many spectral lines as there are unknown parameters, a condition relating to Persistency of Excitation (PE) in the reference input [1].
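To make the rank-1 observation concrete, the following is a minimal numerical sketch of a standard Lyapunov-based (gradient type) MRAC weight update of the form $\dot{W} = -\Gamma \phi(x) e^T P B$. The dimensions, gain matrices, and signal values below are assumptions chosen for illustration, not values from this paper.

```python
import numpy as np

# Assumed dimensions: n states, m basis functions (illustrative only)
n, m = 2, 3
Gamma = 0.5 * np.eye(m)       # adaptation gain (assumed)
P = np.eye(n)                 # Lyapunov-equation solution (assumed known here)
B = np.array([[0.0], [1.0]])  # input matrix, B = [0, ..., 1]^T

def gradient_update(phi, e):
    """Gradient-type MRAC weight update W_dot = -Gamma * phi * (e^T P B).

    phi: (m, 1) regressor vector, e: (n, 1) tracking error.
    Returns the (m, 1) instantaneous weight-update direction.
    """
    return -Gamma @ phi @ (e.T @ P @ B)

# Example signals (assumed values)
phi = np.array([[1.0], [0.3], [-0.2]])
e = np.array([[0.1], [0.05]])
Wd = gradient_update(phi, e)
```

Because the update is the regressor $\phi(x)$ scaled by the single scalar $e^T P B$ (an outer product in the matrix-weight case), each instantaneous update adjusts the weights along only one direction, which is the rank-1 limitation noted above.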
PE of the reference input is also required to guarantee parameter convergence for many classic and recent adaptive control laws (e.g. σ-mod [8], e-mod [11], Q-mod [16], and L1 adaptive control [2]). However, it is hard to monitor whether a signal is PE, and using PE inputs solely for weight convergence may waste control effort and cause undue stress.

Previously, we have suggested Concurrent Learning adaptive control, which uses past and current data concurrently for adaptation, to ensure parameter convergence without requiring PE [5], [4]. In this paper we retain the idea of using past and current data concurrently for adaptation; however, we use an optimal least squares based approach rather than gradient descent for adaptation based on recorded data. Least squares, which offers the best linear fit for a set of data, has been widely studied for real time parameter estimation.

G. Chowdhary and Assoc. Prof. E. N. Johnson are with the Daniel Guggenheim School of Aerospace Engineering at the Georgia Institute of Technology, Atlanta, GA. Girish.Chowdhary@gatech.edu, Eric.Johnson@ae.gatech.edu

The main contribution of this paper is the development of a modification term that brings the desirable properties of least squares to any gradient based MRAC adaptive law. The modified adaptive law ensures that the adaptive weights converge smoothly to an optimal unbiased estimate of the ideal weights. Furthermore, exponential tracking error convergence and exponential weight convergence can be guaranteed if the recorded data meet a verifiable condition on linear independence, which is found to be less restrictive than PE. We use a simulation study of wing rock dynamics to demonstrate the effectiveness of the modified adaptive law.

II. MODEL REFERENCE ADAPTIVE CONTROL

This section discusses the formulation of model reference adaptive control (see e.g. [8] and [14]).
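Before developing the formulation, the least squares idea underlying the proposed modification can be previewed with a small sketch: given recorded pairs of regressor vectors and uncertainty values, the ideal weights of a linearly parameterized uncertainty are recovered as the least squares solution, which is unique exactly when the recorded regressors have full column rank (linear independence of the recorded data). The basis functions, data, and noise-free measurements below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
W_star = np.array([2.0, -1.0, 0.5])  # assumed "ideal" weights (illustrative)

def phi(x):
    # Hypothetical basis functions for a linearly parameterized uncertainty
    return np.array([x, abs(x) * x, x ** 3])

# Recorded data: p points; Delta_i = phi(x_i)^T W* (noise-free for clarity)
xs = rng.uniform(-1.0, 1.0, size=10)
Phi = np.array([phi(x) for x in xs])  # (p, m) matrix of recorded regressors
Delta = Phi @ W_star                  # recorded uncertainty values

# Least squares estimate of the ideal weights; the solution is unique when
# Phi has full column rank, i.e. the recorded regressors are linearly
# independent -- the verifiable condition discussed above.
W_hat, *_ = np.linalg.lstsq(Phi, Delta, rcond=None)
```

Unlike a PE condition on the reference signal, the rank of the recorded regressor matrix `Phi` can be checked directly online, which is the sense in which the condition is easier to monitor.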
Let $x(t) \in \mathbb{R}^n$ be the known state vector, let $u \in \mathbb{R}$ denote the control input, and consider the following system in which the uncertainty can be linearly parameterized:

$$\dot{x} = A x(t) + B\bigl(u(t) + \Delta(x(t))\bigr), \qquad (1)$$

where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^n$ with $B = [0, 0, \ldots, 1]^T$, and $\Delta : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function representing the scalar uncertainty. We assume that the system in (1) is controllable. A reference model can be designed that characterizes the desired response of the system:

$$\dot{x}_{rm} = A_{rm} x_{rm}(t) + B_{rm} r(t), \qquad (2)$$

where $A_{rm} \in \mathbb{R}^{n \times n}$ is a Hurwitz matrix and $r(t)$ denotes a bounded reference signal. A tracking control law consisting of a linear feedback part $u_{pd} = K(x_{rm}(t) - x(t))$, a linear feedforward part $u_{crm} = K_r [x_{rm}^T, r(t)]^T$, and an adaptive part $u_{ad}(x)$ is proposed in the following form:

$$u = u_{crm} + u_{pd} - u_{ad}. \qquad (3)$$

Define the tracking error $e$ as $e(t) = x_{rm}(t) - x(t)$. With an appropriate choice of $u_{crm}$ satisfying the matching condition $B u_{crm} = (A_{rm} - A) x_{rm} + B_{rm} r(t)$, the tracking error dynamics reduce to

$$\dot{e} = A_m e + B\bigl(u_{ad}(x) - \Delta(x)\bigr), \qquad (4)$$

where the feedback gain $K$ of the baseline controller is assumed to be designed such that $A_m = A - BK$ is a Hurwitz matrix. Hence, for any positive definite matrix $Q \in \mathbb{R}^{n \times n}$, a positive definite solution $P \in \mathbb{R}^{n \times n}$ exists to the Lyapunov equation

$$A_m^T P + P A_m + Q = 0. \qquad (5)$$
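The Lyapunov equation above can be solved numerically with a standard continuous-time Lyapunov solver. The sketch below uses assumed plant matrices and an assumed stabilizing gain $K$ (not values from the paper) and verifies that the resulting $P$ is symmetric positive definite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed 2-state plant in the B = [0, ..., 1]^T form of (1)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[2.0, 3.0]])  # assumed stabilizing feedback gain
A_m = A - B @ K             # closed-loop matrix; Hurwitz for this K

Q = np.eye(2)  # any positive definite Q

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so setting
# a = A_m^T and q = -Q yields A_m^T P + P A_m + Q = 0 as in (5).
P = solve_continuous_lyapunov(A_m.T, -Q)
```

Since $A_m$ is Hurwitz and $Q$ is positive definite, the computed $P$ is symmetric positive definite, which is the property used in the Lyapunov analysis of the tracking error dynamics (4).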