International Scholarly Research Network
ISRN Applied Mathematics
Volume 2011, Article ID 145801, 12 pages
doi:10.5402/2011/145801

Research Article

Lyapunov Stability Analysis of Gradient Descent-Learning Algorithm in Network Training

Ahmad Banakar

Mechanical Agriculture Department, Tarbiat Modares University, Tehran, P.O. Box 14115-336, Iran

Correspondence should be addressed to Ahmad Banakar, ah banakar@modares.ac.ir

Received 17 March 2011; Accepted 13 May 2011

Academic Editors: J.-J. Ruckmann and L. Simoni

Copyright © 2011 Ahmad Banakar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Lyapunov stability theorem is applied to guarantee the convergence and stability of the learning algorithm for several networks. The gradient descent learning algorithm and its variants are among the most widely used algorithms for training such networks. To guarantee the stability and convergence of the learning process, the upper bound of the learning rate must be investigated. Here, the Lyapunov stability theorem is developed and applied to several networks in order to guarantee the stability of the learning algorithm.

1. Introduction

Science has evolved from an attempt to understand and predict the behavior of the universe and the systems within it. Much of this progress owes to the development of suitable models that agree with observations. These models are either in the symbolic form that humans use or in a mathematical form derived from physical laws. Most systems are causal and can be categorized as either static, where the output depends only on the current inputs, or dynamic, where the output depends not only on the current inputs but also on past inputs and outputs.
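The central claim above, that gradient-descent learning remains stable only when the learning rate stays below an upper bound derived from a Lyapunov argument, can be illustrated with a minimal sketch. The example below is not from the paper: it uses a 1-D quadratic loss f(w) = 0.5·a·w², for which the Lyapunov candidate V(k) = w(k)² decreases at every step exactly when the learning rate η < 2/a.

```python
# Minimal sketch (illustrative, not the paper's derivation):
# Lyapunov-style stability check for gradient descent on the
# quadratic loss f(w) = 0.5 * a * w**2, whose gradient is a * w.
# The candidate Lyapunov function V(k) = w_k**2 must decrease at
# every step; for this loss that holds precisely when eta < 2 / a.

def gradient_descent(a, eta, w0=1.0, steps=50):
    """Run gradient descent; report whether V(k) = w_k^2 ever increases."""
    w = w0
    v_prev = w * w            # Lyapunov candidate V(0)
    stable = True
    for _ in range(steps):
        w = w - eta * (a * w)  # gradient step: grad f(w) = a * w
        v = w * w
        if v > v_prev:         # Delta V >= 0 -> Lyapunov condition violated
            stable = False
        v_prev = v
    return stable, w

a = 4.0  # curvature; the stability bound is eta < 2 / a = 0.5
ok, w_conv = gradient_descent(a, eta=0.4)   # below the bound: contracts
bad, w_div = gradient_descent(a, eta=0.6)   # above the bound: diverges
print(ok, abs(w_conv) < 1e-6)
print(bad, abs(w_div) > 1.0)
```

With η = 0.4 each step multiplies w by (1 − ηa) = −0.6, so |w| contracts and V decreases monotonically; with η = 0.6 the factor is −1.4 and V grows, which is exactly the violation the paper's learning-rate bounds are designed to rule out.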
Many systems also possess unobservable inputs that cannot be measured but nevertheless affect the system's output; such systems are called time-series systems. These inputs are known as disturbances and complicate the modeling process. To cope with the complexity of dynamic systems, there have been significant developments in the field of artificial neural networks (ANNs) over the last three decades, and these have been applied to system identification and modeling [1–5]. One major motivation for proposing these different types of networks is to predict the dynamic behavior of the many complex systems existing in nature. The ANN is a powerful method for approximating a nonlinear system and mapping between input and output data [1]. Recently, wavelet neural networks (WNNs) have been introduced [6–10]. Such networks employ wavelets as the activation function in a hidden layer. Because of the ability of the localized analysis