Feedback Analysis of Radial Basis Function Neural Networks via Small Gain Theorem ⋆

Ali, S. Saad Azhar ∗  Muhammad Shafiq ∗∗  Jamil M. Bakhashwain ∗∗∗  Fouad M. AL-Sunni ∗∗∗∗

∗ Electrical Engineering Department, Air University, E-9 Islamabad, Pakistan (email: saadali@mail.au.edu.pk).
∗∗ Department of Electronics Engineering, Ghulam Ishaq Khan Institute, Topi, Pakistan (email: mshafiq@giki.edu.pk).
∗∗∗ Electrical Engineering Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia (email: jamilb@kfupm.edu.sa).
∗∗∗∗ Systems Engineering Department, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia (email: alsunni@kfupm.edu.sa).

⋆ This work is sponsored by King Fahd University of Petroleum & Minerals and SABIC under project SABIC 2006-11.
Abstract: Radial basis function neural networks are used in a variety of applications such as pattern recognition, nonlinear identification, control, and time series prediction. In this paper, a feedback analysis of the learning algorithm of radial basis function neural networks is presented. It studies the robustness of the learning algorithm in the presence of uncertainties that might be due to noisy perturbations at the input or to modeling mismatch. The learning scheme is first associated with a feedback structure, and the stability of that feedback structure is then analyzed via the small gain theorem. The analysis suggests bounds on the learning rate that guarantee that the learning algorithm behaves as a robust nonlinear filter, as well as optimal choices for faster convergence.
1. INTRODUCTION
Neural networks have recently been used widely in a variety of areas such as pattern recognition, system identification, filtering, control, and time series prediction. Radial basis function neural networks (RBFNN) are single-layered feedforward networks with universal approximation capabilities, in addition to more efficient learning than the well-known multi-layered feedforward neural networks (MFNN) Haykin [1999], Jun-Dong et al. [1998], Finan et al. [1996], Fortuna et al. [2001].
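For reference, a single-output RBFNN with N Gaussian units computes a weighted sum of radial basis responses. A minimal sketch, where c_j denotes the centers, σ_j the widths, and w_j the output-layer weights (this notation is assumed here for illustration; the paper's own symbols are introduced later):

```latex
\[
  y(u) \;=\; \sum_{j=1}^{N} w_j
  \exp\!\left( -\,\frac{\lVert u - c_j \rVert^2}{2\sigma_j^2} \right)
  \;=\; \phi(u)^{T} w ,
\]
% where phi(u) stacks the N Gaussian responses into a regressor
% vector and w collects the output-layer weights.
```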
RBFNN are generally trained using supervised learning. During training, a recursive update procedure is used to estimate the weights of the RBFNN that best fit the given data Haykin [1999]. The recursive procedure requires the selection of a suitable adaptation gain called the learning rate. The learning rate should lie within an optimum range: it should be neither so large that it drives the algorithm unstable, nor so small that it slows down the training. In general practice, trial and error is used to select a suitable learning rate for the training phase.
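A minimal sketch of such a recursive update, assuming the common gradient-descent (LMS-type) form with desired output d(i), regressor φ_i = φ(u_i), and learning rate μ (the exact update used in the paper may differ):

```latex
\[
  e(i) \;=\; d(i) - \phi_i^{T} w_{i-1} ,
  \qquad
  w_i \;=\; w_{i-1} + \mu\, \phi_i\, e(i) ,
\]
% e(i) is the a priori estimation error; the adaptation gain mu
% is the learning rate whose admissible range is analyzed below.
```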
The general and simpler practice has been to choose a small learning rate, which obviously results in slower convergence. Especially with multivariable systems having many weights and large data sets, a small learning rate may require a substantial amount of time and machine power. Therefore, an analysis is needed to find an optimal learning rate that speeds up the convergence while keeping the algorithm stable. In the robustness analysis of adaptive schemes in Sayed et al. [1996] and Rupp et al. [1995], the authors address methods of selecting the learning rate 1) to guarantee robust behavior in the presence of noise and modeling uncertainties and 2) to guarantee faster convergence.
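As a toy illustration of where the learning rate enters the training loop, the following Python sketch adapts the output weights of a small RBFNN with a fixed μ; all parameter values, the target function, and the variable names are our own assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10                               # number of Gaussian basis functions
centers = np.linspace(-1.0, 1.0, N)  # assumed fixed centers
sigma = 0.3                          # assumed common width
mu = 0.05                            # learning rate under study

def phi(u):
    """Gaussian regressor vector phi(u) for a scalar input u."""
    return np.exp(-((u - centers) ** 2) / (2.0 * sigma ** 2))

w = np.zeros(N)                      # output-layer weights
for i in range(2000):
    u = rng.uniform(-1.0, 1.0)       # training input
    d = np.sin(np.pi * u)            # desired output (toy target)
    ph = phi(u)
    e = d - ph @ w                   # a priori output error
    w = w + mu * ph * e              # recursive (LMS-type) update
# Too large a mu makes this loop diverge; too small a mu makes the
# error decay impractically slowly -- the trade-off analyzed here.
```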
The formulation in Sayed et al. [1996] and Rupp et al. [1995] emphasizes an intrinsic feedback structure for most adaptive algorithms, and it relies on tools from system theory, control, and signal processing such as state-space descriptions, feedback analysis, the small gain theorem, H∞ design, and lossless systems. The feedback configuration is motivated via energy arguments and is shown to consist of two major blocks: a time-variant lossless (i.e., energy preserving) feedforward path and a time-variant feedback path.
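To make the mechanism concrete, the small gain theorem states that a feedback interconnection of two maps with induced l2-gains γ1 and γ2 is l2-stable whenever γ1 γ2 < 1. With a lossless feedforward path (γ1 = 1), stability therefore reduces to keeping the feedback-path gain below one. For an LMS-type update with regressor φ_i, bounds of the following form on the learning rate typically result (a sketch only; the exact condition for the RBFNN case is derived in the paper):

```latex
\[
  \max_i \left| 1 - \mu\, \lVert \phi_i \rVert^2 \right| < 1
  \quad \Longrightarrow \quad
  0 < \mu < \frac{2}{\max_i \lVert \phi_i \rVert^{2}} .
\]
```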
We make use of this feedback structure to analyze the robustness of RBFNN and to find optimal choices for the learning rate. In this paper, we present the learning algorithm for the RBFNN, which involves a nonlinear functional in the update equation due to the presence of the basis function (usually a Gaussian function), and associate it with the feedback structure of Sayed et al. [1996] and Rupp et al. [1995] in order to handle the presence of the nonlinearity. As an