Computers Elect. Engng Vol. 19, No. 4, pp. 333-341, 1993 0045-7906/93 $6.00 + 0.00
Printed in Great Britain. All rights reserved Copyright © 1993 Pergamon Press Ltd
THE RECURSIVE NEURAL NETWORK AND ITS
APPLICATIONS IN CONTROL THEORY
DON HUSH, CHAOUKI ABDALLAH and BILL HORNE
Department of Electrical Engineering and Computer Engineering, University of New Mexico,
Albuquerque, NM 87131, U.S.A.
(Received 1 September 1990; accepted in final revised form 20 November 1991)
Abstract--This paper introduces a new dynamical neural network and a corresponding learning algorithm
based on a gradient search. A model-following controller using the network is presented and is shown
to be useful in the identification and control of discrete nonlinear systems. A discussion of the advantages
and limitations of the new network is also included.
1. INTRODUCTION
In general, the control of nonlinear systems is a difficult task in which intuition and experience are
the guiding approaches. Even when a controller is found, its implementation is far from simple.
Nonlinear controllers are basically nonlinear dynamical mappings which may not have a simple
analytical form. On the other hand, some neural networks have been shown to be able to implement
arbitrarily complex static mappings [1]. In order to use neural networks in control one must make
them dynamic and describe the control objective in a neural network language.
This paper introduces a new dynamical neural network and a corresponding learning algorithm
based on a gradient search. The network is shown in Fig. 1 and may be used for modeling or
controlling discrete-time nonlinear systems. It is a single-input single-output nonlinear dynamical
system. The output of the network is the sum of the outputs of two subnets, a nonrecursive and
a recursive subnet, both of which are multilayer perceptrons (MLPs). The inputs to the
nonrecursive subnet are current and delayed (previous) versions of the input sequence, and the
inputs to the recursive subnet are delayed versions of the network output. Because of its
resemblance to a recursive digital filter we refer to it as the recursive neural net (RNN). In fact,
if the MLPs are replaced by ADALINEs [2], the RNN reduces to a linear recursive digital filter (IIR
filter) and the learning algorithm reduces to the well-known recursive LMS (RLMS) algorithm [2].
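To make the linear reduction concrete, the following sketch (our own illustrative code, with weight values and variable names chosen for illustration only) shows that when the two subnets are linear, each subnet collapses to a dot product and the RNN computes exactly the difference equation of an IIR filter:

```python
import numpy as np

def linear_rnn_step(u_taps, y_taps, a, b):
    # With linear (ADALINE-like) subnets the two MLPs reduce to dot
    # products, so the RNN output is the IIR difference equation
    #   y(k) = sum_i a[i] u(k-i) + sum_j b[j] y(k-1-j)
    return float(a @ u_taps + b @ y_taps)

a = np.array([0.5, 0.25])   # nonrecursive (feedforward) weights
b = np.array([0.9])         # recursive (feedback) weight

u_taps = np.zeros(2)        # u(k), u(k-1)
y_taps = np.zeros(1)        # y(k-1)
ys = []
for uk in [1.0, 0.0, 0.0, 0.0]:         # impulse input
    u_taps = np.concatenate(([uk], u_taps[:-1]))
    yk = linear_rnn_step(u_taps, y_taps, a, b)
    y_taps = np.concatenate(([yk], y_taps[:-1]))
    ys.append(yk)
# ys is the impulse response of H(z) = (0.5 + 0.25 z^-1)/(1 - 0.9 z^-1)
```

Training the weights `a` and `b` by gradient descent on the output error in this linear setting is precisely the recursive LMS adaptation mentioned above.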
The work in this paper is closely related to the work of Narendra in Ref. [3]. Narendra describes,
in diagrammatic fashion, a variety of nonlinear dynamical models which make use of the multilayer
perceptron without providing a detailed description of the corresponding update algorithms. This
paper presents such a description along with some properties of the resulting network. The RNN
lends itself to a variety of uses such as nonlinear prediction, forward or inverse modeling for a
nonlinear system, or as a controller for a nonlinear plant. A few of these applications are illustrated
in Fig. 2.
The remainder of this paper is organized as follows. Section 2 describes the operation of the RNN
and introduces the notation that will be used. Section 3 derives a learning algorithm for the RNN
based on a gradient search. Section 4 presents an application of the RNN in modeling a nonlinear
plant and Section 5 contains a summary and discussion.
2. THE RECURSIVE NEURAL NET
The output of the recursive neural network at time k, as shown in Fig. 1, is the sum of the outputs
of two subnets, the nonrecursive and the recursive subnet:

y(k) = y^n(k) + y^r(k).    (1)
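As an illustrative realization of this sum (our own sketch, not the paper's notation: we assume one-hidden-layer subnets with tanh units and small random weights), y(k) is computed from the current and delayed inputs through the nonrecursive subnet and from the delayed outputs through the recursive subnet:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    # one-hidden-layer perceptron: tanh hidden units, linear output
    return float(W2 @ np.tanh(W1 @ x + b1) + b2)

def make_mlp(n_in, n_hidden):
    # small random weights; an untrained subnet for illustration
    return (0.1 * rng.standard_normal((n_hidden, n_in)), np.zeros(n_hidden),
            0.1 * rng.standard_normal((1, n_hidden)), np.zeros(1))

N, M_TAPS, H = 2, 2, 4            # input taps, output taps, hidden units
nonrec = make_mlp(N + 1, H)       # subnet fed with u(k), ..., u(k-N)
rec = make_mlp(M_TAPS, H)         # subnet fed with y(k-1), ..., y(k-M)

u_taps = np.zeros(N + 1)
y_taps = np.zeros(M_TAPS)
ys = []
for uk in [1.0, 0.5, -0.2, 0.0]:
    u_taps = np.concatenate(([uk], u_taps[:-1]))
    yk = mlp(u_taps, *nonrec) + mlp(y_taps, *rec)   # y(k) = y^n(k) + y^r(k)
    y_taps = np.concatenate(([yk], y_taps[:-1]))
    ys.append(yk)
```

The feedback of past outputs into the recursive subnet is what makes the network dynamical rather than a static mapping.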
Each of these networks is a multilayer perceptron. The superscripts n and r are used to distinguish
between parameters in the nonrecursive and recursive subnets respectively. Let L and M represent
the number of layers in the nonrecursive and recursive subnets, respectively. If we let ¢t(k) represent