Proceedings of the 5th International Symposium on Mechatronics and its Applications (ISMA08), Amman, Jordan, May 27-29, 2008
IMPROVEMENT OF LEARNING RATE FOR RBF NEURAL NETWORKS
IN A HELICOPTER SOUND IDENTIFICATION SYSTEM INTRODUCING
TWO-PHASE OSD LEARNING METHOD
Gh. A. Montazer
Tarbiat Modares University
School of Engineering
P.O. Box: 14115-179 - Tehran, Iran
montazer@modares.ac.ir
Reza Sabzevari
Islamic Azad University of Qazvin
School of Engineering
Member of Young Researchers' Club (YRC)
sabzevari@gmail.com
Fatemeh Ghorbani
Tarbiat Modares University
School of Basic Sciences
f_ghorbani2005@yahoo.com
ABSTRACT
This paper presents a novel approach to the learning algorithms
commonly used for training radial basis function (RBF) neural
networks. The approach is suited to applications that require
real-time retraining of RBF neural networks. The proposed method is
a two-phase learning algorithm that improves on the Optimum Steepest
Descent (OSD) learning method: it attains better performance faster
by first computing the centres and widths of the RBF units. The
method has been tested in an audio-processing application, a system
that identifies helicopters by the sound of their rotors. A
comparison of the results obtained with the different learning
strategies is reported in this paper.
1. INTRODUCTION
Radial basis function (RBF) networks were introduced into the
neural network literature by Broomhead and Lowe in 1988 [1].
These networks have been extensively used for interpolation,
regression, and classification due to their universal approximation
properties and simple parameter estimation [2]. The theoretical
basis of the RBF approach lies in the field of interpolation of
multivariate functions. From this viewpoint, the learning
process can be viewed as finding a surface in a multidimensional
space fitted to the training data. The criterion for finding the
"best fitted surface" relies on statistical computations.
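The interpolation viewpoint can be made concrete with a minimal sketch. The following is an illustrative example, not the paper's own implementation: the data, the Gaussian basis function, and the common width are all assumed here. Exact RBF interpolation places one unit on each training point and solves a linear system so the fitted surface passes through all the data.

```python
import numpy as np

def gaussian(r, width):
    """Gaussian radial basis function of the distance r (assumed form)."""
    return np.exp(-(r / width) ** 2)

# Hypothetical 1-D training data: samples of a sine curve.
x_train = np.linspace(0.0, 2.0 * np.pi, 10)
y_train = np.sin(x_train)

width = 1.0  # assumed common width for all units
# Interpolation matrix: Phi[i, j] = phi(|x_i - x_j|)
Phi = gaussian(np.abs(x_train[:, None] - x_train[None, :]), width)
# Solving Phi @ w = y makes the surface pass through every training point.
weights = np.linalg.solve(Phi, y_train)

# The fitted surface at a new input is a weighted sum of basis functions.
x_new = np.pi / 3.0
y_new = gaussian(np.abs(x_new - x_train), width) @ weights
```

Between the training points the surface is determined by the chosen basis function and width, which is why their selection matters so much in practice.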
Reflecting the different applications of RBF neural networks, a
wide variety of learning strategies has been proposed in the
literature for adjusting the parameters of an RBF network. These
strategies fall into two main categories. The first category
contains strategies in which the centers and variances of the
network are changed, including:
1) Fixed centers selected at random [3]
2) Self-organized selection of centers, containing [4]:
a) K-means clustering procedure,
b) The self-organizing feature map clustering procedure,
3) Supervised selection of centers [3],
4) Supervised selection of centers and variances [5].
The second category includes strategies in which the weights of
the network are changed, containing:
1) The pseudo-inverse (minimum-norm) method [6]
2) The Least-Mean-Square (LMS) method [7]
3) The Steepest Descent (SD) method [8]
4) The Quick Propagation (QP) method [9]
5) Optimized versions of the previous methods [9], including
General Optimum Steepest Descent (GOSD), Optimum Steepest
Descent (OSD), and Optimum Quick Propagation (OQP).
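For the second category, a steepest-descent update of the output weights can be sketched as follows. This is a generic illustration, not the published OSD algorithm: the hidden-unit outputs and targets are random placeholders, and the step size is chosen by an exact line search along the negative gradient, which is one natural reading of an "optimum" steepest-descent step for the quadratic output-layer error.

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 8))   # hypothetical hidden-unit outputs
y = rng.normal(size=50)          # hypothetical targets
w = np.zeros(8)

for epoch in range(200):
    residual = Phi @ w - y
    g = Phi.T @ residual          # gradient of 0.5 * ||Phi w - y||^2
    if g @ g < 1e-30:             # converged; avoid a 0/0 step size
        break
    Pg = Phi @ g
    eta = (g @ g) / (Pg @ Pg)     # step size minimizing the error along -g
    w -= eta * g

# With the optimal step the iterates approach the least-squares solution.
w_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

A fixed learning rate, as in plain SD or LMS, would instead require manual tuning and can converge far more slowly, which is the motivation for the optimized variants listed above.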
In our previous work [9], we presented a set of modified learning
methods that improve on the classical ones. In this paper, we
introduce a two-phase learning strategy for RBF networks that
benefits from one of those modified methods, Optimum Steepest
Descent (OSD). In other words, the method is a hybrid of the OSD
learning method and the classical two-phase learning of the RBF
network.
The organization of this paper is as follows: Section two describes
how the OSD method is employed in two-phase learning. Section three
presents the application of the proposed method to several
benchmark data sets and discusses its performance in comparison
with previous methods. Finally, section four presents the
conclusions of this work.
2. TWO-PHASE LEARNING FOR RBF NETWORKS
In this approach the learning process is divided into two
consecutive steps. The first step determines the centers of the
hidden units and their widths. The second step is supervised
learning, in which the only parameters to be set are the weights
between the hidden and output layers, representing the coefficients
of the linear combinations of the RBF unit outputs. The objective
is to minimize the overall error function with respect to these
weights.
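The two steps above can be sketched in code. This is a minimal illustration with assumed details, not the paper's exact procedure: phase one places the centres with a simple k-means (one of the self-organized options listed in section one) and sets each width to the mean distance to the other centres, and phase two fits the hidden-to-output weights by least squares.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: an assumed, simple choice for phase one."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest centre, then recompute centres.
        labels = np.argmin(np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def train_rbf(X, y, k):
    # Phase 1: unsupervised placement of centres and widths.
    centres = kmeans(X, k)
    d = np.linalg.norm(centres[:, None] - centres[None], axis=2)
    widths = d.sum(axis=1) / (k - 1)   # heuristic: mean inter-centre distance
    # Phase 2: supervised fit of the output weights only (linear problem).
    Phi = np.exp(-(np.linalg.norm(X[:, None] - centres[None], axis=2) / widths) ** 2)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centres, widths, w

def predict(X, centres, widths, w):
    Phi = np.exp(-(np.linalg.norm(X[:, None] - centres[None], axis=2) / widths) ** 2)
    return Phi @ w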
Assume Φ_ij = φ_j(x_i) as the outcome of the j-th radial basis
function with the i-th element of X as the input vector, and y_ij as the j-th