OMBP: Optic Modified BackPropagation training algorithm for fast
convergence of Feedforward Neural Network
Omar Charif 1,2,+, Hichem Omrani 1, and Philippe Trigano 2
1 CEPS/INSTEAD, Differdange, Luxembourg
2 University of Technology of Compiegne, UTC, France
Abstract: In this paper, we propose an algorithm for fast training and accurate prediction with Feedforward Neural Networks (FNN). In this algorithm, OMBP, we combine the Optic Backpropagation algorithm (OBP) with the Modified Backpropagation algorithm (MBP). The weights are initialized using the Yam and Chow algorithm to ensure the stability of OMBP and to reduce its sensitivity to initial settings. The proposed algorithm has shown an advantage over three different algorithms in terms of the number of iterations, the time needed to reach convergence, and the accuracy of the prediction. We have tested the proposed algorithm on several benchmark datasets and compared its results with those obtained by applying the standard Backpropagation algorithm (SBP), the Least Mean Fourth algorithm (LMF), and Optic Backpropagation (OBP). The criteria used in the comparison are: number of iterations, time needed for convergence, prediction error, and percentage of trials that failed to converge.
Keywords: Artificial Neural Network (ANN), Pattern Recognition, Algorithms and Techniques.
1. Introduction:
For decades, researchers have devoted considerable effort to understanding and imitating the functionality of the human brain. This work led to the creation of the well-known artificial intelligence technique, the Artificial Neural Network (ANN). Researchers from various scientific domains (e.g. networking, biology, medicine, social sciences, and statistics) have successfully implemented ANN models to compute a wide range of functions.
Neural networks are nonlinear adaptive systems which are able, after learning, to generalize. A well-built neural network model is considered a universal approximator, capable of performing any linear or nonlinear computation. The multilayer perceptron has been applied to various problems. Researchers have used several approaches to train multilayer networks, some of which make use of artificial intelligence techniques (e.g. genetic algorithms [Montana&Davis, 1989], simulated annealing [Sexton et al., 1999], and particle swarm optimization [Zhang et al., 2007]). Backpropagation is the most widely used algorithm for training a multilayer network. Despite its popularity, this method has significant drawbacks: it is slow to converge, and it does not guarantee convergence to a global minimum. Recently, researchers have invested effort in improving the Backpropagation algorithm, focusing on the following areas of improvement:
The development of new advanced algorithms: Various modifications to the SBP algorithm have been proposed, such as Modified Backpropagation [Abid et al., 2001], LMF [Abid and Fnaiech, 2008], and OBP [Otaire and Salameh, 2004]. Researchers have also developed new methods for multilayer network training. These methods, among many others, have achieved significant improvements in terms of the number of epochs (i.e. neural network iterations) and the time needed to reach convergence.
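To make the baseline concrete, the following is a minimal sketch of the standard Backpropagation (SBP) weight update — the gradient-descent rule that the modifications above improve upon. The architecture (a single sigmoid neuron), the toy dataset (logical AND), the learning rate, and the epoch count are illustrative assumptions, not taken from the paper.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # weights (assumed init range)
b = 0.0                                            # bias
lr = 0.5                                           # learning rate (assumed value)

# Toy dataset: logical AND, learnable by a single sigmoid unit
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

def mse():
    # mean squared error over the dataset
    return sum((t - sigmoid(w[0] * x[0] + w[1] * x[1] + b)) ** 2
               for x, t in data) / len(data)

err_before = mse()
for epoch in range(2000):
    for x, t in data:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # SBP local gradient: delta = (target - output) * f'(net),
        # where f'(net) = y * (1 - y) for the sigmoid activation
        delta = (t - y) * y * (1 - y)
        for i in range(2):
            w[i] += lr * delta * x[i]  # gradient-descent step on each weight
        b += lr * delta                # bias updated the same way
err_after = mse()
print(err_before, err_after)
```

The `y * (1 - y)` factor illustrates why SBP converges slowly: when the output saturates near 0 or 1, the derivative, and hence the update, becomes vanishingly small. OBP-style methods address precisely this effect.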
+ Corresponding author. Tel.: +352 58 58 55 315; fax: +352 58.55.60. E-mail address: omar.charif@ceps.lu.
2011 International Conference on Telecommunication Technology and Applications
Proc .of CSIT vol.5 (2011) © (2011) IACSIT Press, Singapore