IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 10, NO. 6, NOVEMBER 1999, p. 1402

Nonlinear Adaptive Trajectory Tracking Using Dynamic Neural Networks

Alexander S. Poznyak, Member, IEEE, Wen Yu, Member, IEEE, Edgar N. Sanchez, Senior Member, IEEE, and Jose P. Perez

Abstract— In this paper adaptive nonlinear identification and trajectory tracking are discussed via dynamic neural networks. By means of a Lyapunov-like analysis we determine stability conditions for the identification error. We then analyze the trajectory tracking error obtained with a local optimal controller. An algebraic Riccati equation and a differential one are used for the identification and the tracking error analyses, respectively. As our main original contributions, we establish two theorems: the first gives a bound for the identification error and the second establishes a bound for the tracking error. We illustrate the effectiveness of these results with two examples: a second-order relay system with multiple isolated equilibrium points and the chaotic system given by the Duffing equation.

Index Terms— Adaptive control, dynamic neural networks.

I. INTRODUCTION

CONTROL problems arising in a wide variety of engineering fields are characterized by essentially uncertain environments and nonlinearities. Recent results ([4], [13], [12], [16], and [24]) show that the neural-network (NN) technique is a very effective tool for controlling a wide class of complex nonlinear systems when we have no complete model information or even consider the controlled plant as "a black box." A comprehensive survey on neurocontrol may be found in [6]. NN's can be qualified as static (feedforward) or dynamic (recurrent) nets. Most publications deal with static neural nets, which are implemented for the approximation of a nonlinear function in the right-hand side of dynamic model equations.
For example, in [24] a compensator which approximates piecewise continuous functions is constructed for actuator nonlinearities. The chief drawback of these static NN's is that the weight updates do not utilize information on the local NN structure and the function approximation is sensitive to the training data. In reality, when we deal with a complex system, such as a distillation column or a multiarmed robot, this requirement is practically unrealizable because not all state components can be measured. Dynamic neural nets can successfully overcome this disadvantage and demonstrate workable behavior in the presence of unmodeled dynamics because their structure incorporates feedback. They have powerful representation capabilities. The first dynamic neural nets were introduced by Hopfield [5] and then studied in [19], [17], and [16]. There are two general concepts of recurrent structure training. Fixed-point learning aims at making the NN reach prescribed equilibria, to perform steady-state matching. Trajectory learning trains the network to follow a desired trajectory in time. In this paper we follow the second approach: we construct a dynamic NN identifier and then, based on it, we derive an adaptive tracking controller. As mentioned above, the nonlinear system identification process turns out to be one of the central parts in constructing a successful tracking controller. We will treat it as the approximation of the system behavior by dynamic NN's.

Manuscript received November 13, 1996; revised July 9, 1998 and May 1, 1999. A. S. Poznyak and W. Yu are with the Departamento de Control Automatico, CINVESTAV-IPN, Mexico D.F., 07360, Mexico. E. N. Sanchez is with CINVESTAV, Unidad Guadalajara, Guadalajara, Jalisco, C.P. 45091, Mexico. J. P. Perez is with the School of Mathematics and Physics, Univ. Aut. de Nuevo Leon (UANL), N.L., C.P. 66450, Mexico. Publisher Item Identifier S 1045-9227(99)09117-1.
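The trajectory-learning idea above can be illustrated with a minimal numerical sketch. This is not the paper's algorithm: the scalar plant, the Hopfield-type identifier form dx̂/dt = a·x̂ + w·tanh(x̂) + u, and the gradient-like weight update dw/dt = -γ·e·tanh(x̂) with identification error e = x̂ - x are all assumptions chosen for illustration.

```python
# Illustrative sketch of dynamic-NN identification (assumed forms, not the
# paper's exact scheme). Plant: dx/dt = -2x + sin(x) + u(t). Identifier:
# dxh/dt = a*xh + w*tanh(xh) + u(t). Weight law: dw/dt = -gamma*e*tanh(xh).
import math

def simulate(T=20.0, dt=1e-3, gamma=5.0, a=-2.0):
    x, xh, w = 1.0, 0.0, 0.0           # plant state, identifier state, weight
    e0 = abs(xh - x)                   # initial identification error
    for k in range(int(T / dt)):
        t = k * dt
        u = math.sin(t)                # persistently exciting input
        e = xh - x                     # identification error
        # forward-Euler integration of plant, identifier, and weight law
        x  += dt * (-2.0 * x + math.sin(x) + u)
        xh += dt * (a * xh + w * math.tanh(xh) + u)
        w  += dt * (-gamma * e * math.tanh(xh))
    return x, xh, w, e0

x, xh, w, e0 = simulate()
print(abs(xh - x), e0)  # final error should be well below the initial one
```

The weight law is the standard Lyapunov-motivated gradient update for this identifier structure: with V = e²/2 + (w - w*)²/(2γ), the choice dw/dt = -γ·e·tanh(x̂) cancels the weight-error term in V̇, leaving the error dynamics stable up to the structural mismatch between w·tanh(x̂) and sin(x).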
In this direction there exist two kinds of results. The first, as a natural extension, is based on the function approximation properties of static NN's [23], [3]. The second uses an operator representation of the system to derive conditions for the validity of its approximation by a dynamic NN; it was analyzed by Sandberg, both for continuous and discrete time ([21] and references therein). The structure proposed there is constituted by a parallel connection of neurons, with no interaction between them; it is required that the nonlinear system fulfill the approximately-finite-memory condition. In [1], a dynamic NN was proposed for nonlinear system identification using an operator representation; the approximation property was stated as a conjecture. Using the fading-memory condition, this conjecture was partially proved in [20]. Both conditions, approximately finite memory and fading memory, require the nonlinear system to be stable. The above results only give conditions for the existence of an approximating dynamic NN. They do not determine the number of neurons and/or the values of their weights needed to effectively attain the minimum error. A recent result, obtained in [10], solves the problem of the neuron number by means of recursive high-order NN's. This number is selected to be equal to the dimension of the nonlinear system state, which is assumed to be completely measurable. This measurability condition is relaxed in [18] for singularly perturbed systems. All these results deal only with finite-horizon performance indexes. There are not many stability analyses in neurocontrol, in spite of reported successful neural control applications, including neural information storage problems where energy-function studies are used to prove convergence to a desired final value [11]. To the best of our knowledge, there are only