Stable Dynamic Backpropagation Learning in Recurrent Neural Networks

Liang Jin and Madan M. Gupta, Fellow, IEEE

Manuscript received September 15, 1998; revised May 28, 1999. L. Jin is with the Microelectronics Group, Lucent Technologies Inc., Allentown, PA 18103 USA. M. M. Gupta is with the Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, Sask., Canada S7N 5A9.

Abstract— The conventional dynamic backpropagation (DBP) algorithm proposed by Pineda does not guarantee the stability of the dynamic neural model in the sense of Lyapunov during the weight learning process. A difficulty with DBP learning is therefore that the stability of the equilibrium points must be checked after the learning has been completed, either by simulating the set of dynamic equations or by verifying the stability conditions. To avoid unstable behavior during the learning process, two new learning schemes, called the multiplier and constrained learning rate algorithms, are proposed in this paper to provide stable adaptive updating processes for both the synaptic and somatic parameters of the network. In the multiplier method, explicit stability conditions are introduced into the iterative error index, so that the new updating formulations contain a set of inequality constraints. In the constrained learning rate algorithm, the learning rate is updated at each iterative instant by an equation derived from the stability conditions. With these stable DBP algorithms, any analog target pattern may be implemented as a steady output vector which is a nonlinear vector function of the stable equilibrium point. The applicability of the proposed approaches is illustrated through both analog and binary pattern storage examples.

Index Terms— Adaptive algorithm, dynamic backpropagation algorithm, dynamic neural networks, Lyapunov stability, nonlinear dynamics.

I. INTRODUCTION

DYNAMIC neural networks (DNN's), which contain both feedforward and feedback connections between the neural layers, play an important role in visual processing, pattern recognition, neural computing, and control [36], [37]. In neural associative memory, DNN's that deal with a static target pattern can be divided into two classes according to how the pattern is expressed in the network [6]–[8], [21]: 1) the target pattern (input pattern) is given as an initial state of the network, or 2) the target pattern is given as a constant input to the network. In both cases, the DNN must be designed so that the state of the network converges ultimately to a locally or globally stable equilibrium point which depends only on the target pattern [10]–[12]. In an earlier paper on neural associative memory, Hopfield [3], [4] proposed a well-known DNN for binary vector patterns. In this model, every memory vector is an equilibrium point of the dynamic network, and the stability of the equilibrium point is guaranteed by the stable learning process. Many alternative techniques for storing binary vectors using both continuous- and discrete-time dynamic networks have appeared since then [9], [13], [14], [17], [20], [21].
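For intuition, the short Python sketch below simulates this convergence requirement on a toy discrete-time network of the form x(k+1) = tanh(Wx(k) + b); the network size, the random weights, and the contraction-based scaling are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def settle(W, b, x0, tol=1e-9, max_iter=10_000):
    # Iterate x <- tanh(W x + b) until the state stops moving (a fixed point).
    x = x0.copy()
    for _ in range(max_iter):
        x_next = np.tanh(W @ x + b)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))
W *= 0.9 / np.linalg.norm(W, 2)   # spectral norm < 1: since tanh is 1-Lipschitz,
                                  # the map is a contraction and has a unique,
                                  # globally stable equilibrium
b = rng.standard_normal(3)
x_eq = settle(W, b, x0=np.zeros(3))
print(x_eq)                        # the stored "pattern" depends only on (W, b)
```

The contraction scaling here is only a convenient sufficient condition for illustration; the design problem addressed in this paper is harder, since learning must place prescribed patterns at stable equilibria rather than accept whatever fixed point the dynamics happen to produce.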
For the analog vector storage problem, Sudharsanan and Sundareshan [13] developed a systematic synthesis procedure for constructing a continuous-time dynamic neural network in which a given set of analog vectors can be stored as stable equilibrium points. Marcus et al. [35] discussed an associative memory in a so-called analog iterated-map neural network using both the Hebb rule and the pseudoinverse rule. Atiya and Abu-Mostafa [16] recently proposed a new method using the Hopfield continuous-time network, and a set of static weight learning formulations was developed in their paper. An excellent survey of earlier work on the design of associative memories using the Hopfield continuous-time model was given by Michel and Farrell [22].

A dynamic learning algorithm for the first class of DNN's, where the analog target pattern is stored directly at an equilibrium point of the network, was first proposed by Pineda [1] for a class of continuous-time networks; a similar algorithm was described by Almeida [5] at about the same time. To improve the capability of storing multiple patterns in such an associative memory, a modified dynamic learning algorithm was later developed by Pineda [2]. Two dynamic phenomena in the learning process were isolated into primitive architectural components which perform the operations of continuous nonlinear transformation and autoassociative recall, and the dynamic learning techniques for programming these architectural components were presented in a formalism appropriate for a collective nonlinear dynamic neural system [2]. This dynamic learning process was named dynamic backpropagation (DBP) by Narendra [38], [39] because of its use of the gradient descent method. More recently, Tawel [15] applied this method to nonlinear functional approximation with a dynamic network, using a dynamic algorithm for both the synaptic and somatic parameters. Some control applications of the DBP learning algorithm in recurrent neural networks may be found in the survey papers [40], [41]. However, the DBP learning algorithm for discrete-time dynamic neural networks has received little attention in the literature.

In the DBP method, the dynamic network is designed through a dynamic learning process so that each given target vector becomes an equilibrium state of the network. The stability is easily ensured for a standard continuous-time Hopfield model.
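To make the idea concrete, the following Python sketch mimics fixed-point DBP learning on the same toy discrete-time network x(k+1) = tanh(Wx(k) + b): gradient descent drives a target vector t toward being a fixed point of the map, and the learning rate is cut back whenever a simple sufficient stability condition (spectral radius of the fixed-point Jacobian below one) would be violated. The backtracking rule is a stand-in assumption for illustration only; the constrained learning rate algorithm developed in this paper instead updates the learning rate by an explicit equation derived from the stability conditions.

```python
import numpy as np

def dbp_store(t, iters=2000, lr=0.5):
    # Learn (W, b) so that the target pattern t becomes a fixed point of
    # x <- tanh(W x + b), while keeping that fixed point stable.
    n = t.size
    rng = np.random.default_rng(1)
    W = 0.1 * rng.standard_normal((n, n))
    b = np.zeros(n)
    for _ in range(iters):
        y = np.tanh(W @ t + b)
        g = (y - t) * (1.0 - y**2)            # error gradient w.r.t. pre-activation
        dW, db = np.outer(g, t), g
        eta = lr
        while True:                           # shrink eta until stability holds
            W_new, b_new = W - eta * dW, b - eta * db
            y_new = np.tanh(W_new @ t + b_new)
            J = (1.0 - y_new**2)[:, None] * W_new   # Jacobian at the (approx.) fixed point
            if np.max(np.abs(np.linalg.eigvals(J))) < 1.0 or eta < 1e-8:
                break
            eta *= 0.5
        W, b = W_new, b_new
    return W, b

t = np.array([0.6, -0.4, 0.8])                # hypothetical analog target pattern
W, b = dbp_store(t)
print(np.tanh(W @ t + b) - t)                 # ~0: t is (nearly) an equilibrium
```

An unconstrained version of this loop (dropping the inner while) corresponds to plain DBP: the residual still vanishes, but nothing prevents the learned Jacobian from acquiring an unstable spectral radius, which is precisely the difficulty the two proposed algorithms address.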