Proceedings of the 2005 Informing Science and IT Education Joint Conference, Flagstaff, Arizona, USA, June 16-19

Speeding Up Back-Propagation Neural Networks

Mohammed A. Otair, Jordan University of Science and Technology, Irbed, Jordan — otair@just.edu.jo
Walid A. Salameh, Princess Summaya University for Science and Technology, Amman, Jordan — walid@psut.edu.jo

Abstract

There are many successful applications of backpropagation (BP) for training multilayer neural networks. However, it has many shortcomings: learning often takes a long time to converge, and it may fall into local minima. One possible remedy for escaping local minima is to use a very small learning rate, which slows down the learning process. The algorithm proposed in this study is used for training multilayer neural networks that would otherwise require a very small learning rate, especially when the training set size is large. It can be applied in a generic manner to any network size that uses a backpropagation algorithm, through an optical time (seen time). The paper describes the proposed algorithm and how it can improve the performance of backpropagation (BP). The feasibility of the proposed algorithm is demonstrated through a number of experiments on different network architectures.

Keywords: Neural Networks, Backpropagation, Modified Backpropagation, Non-Linear Function, Optical Algorithm.

Introduction

The Backpropagation (BP) algorithm (Rumelhart, Hinton, & Williams, 1986; Rumelhart, Durbin, Golden, & Chauvin, 1992) is perhaps the most widely used supervised training algorithm for multi-layered feedforward neural networks. However, in some cases, standard backpropagation takes an unendurably long time to adapt the weights between the units in the network so as to minimize the mean squared error between the desired outputs and the actual network outputs (Callan, 1999; Carling, 1992; Freeman & Skapura, 1992; Haykin, 1999; Maureen, 1993).
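To make the setting concrete, the following is a minimal sketch of the standard BP weight-update loop the paper refers to: a one-hidden-layer feedforward network with sigmoid units, trained by gradient descent on the mean squared error with a small learning rate. The network sizes, initialization scale, and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, T, hidden=4, lr=0.01, epochs=500, seed=0):
    """Standard backpropagation for a 1-hidden-layer sigmoid network.

    X: (n_samples, n_inputs) inputs, T: (n_samples, n_outputs) targets.
    Returns the trained weight matrices and the final mean squared error.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, T.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)              # hidden-layer activations
        Y = sigmoid(H @ W2)              # actual network outputs
        err = T - Y                      # desired minus actual output
        d2 = err * Y * (1 - Y)           # delta at the output layer
        d1 = (d2 @ W2.T) * H * (1 - H)   # delta back-propagated to hidden layer
        W2 += lr * H.T @ d2              # gradient-descent weight updates
        W1 += lr * X.T @ d1
    return W1, W2, float(np.mean(err ** 2))
```

Running this on a small problem such as XOR with a very small learning rate illustrates the slow convergence the paper sets out to address: the error decreases only gradually over many epochs.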
There has been much research aimed at improving this algorithm; some of it is based on adaptive learning parameters, e.g. Quickprop (Fahlman, 1988), RPROP (Riedmiller & Braun, 1993), the delta-bar-delta rule (Jacobs, 1988), and the extended delta-bar-delta rule (Minai, 1990). Combinations of different techniques can often lead to improvements in global optimization methods (Hagan, 1996; Lee, 1991).

This paper presents an optical backpropagation (OBP) algorithm, with an analysis of its benefits. The OBP algorithm is designed to overcome some of the problems associated with standard BP training by using a non-linear function, which is applied to the output units. One of the important aspects of the proposed algorithm is its ability to escape from local minima with a high speed of convergence during the training period. In order to

Material published as part of these proceedings, either on-line or in print, is copyrighted by Informing Science. Permission to make digital or paper copy of part or all of these works for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage AND that copies 1) bear this notice in full and 2) give the full citation on the first page. It is permissible to abstract these works so long as credit is given. To copy in all other cases or to republish or to post on a server or to redistribute to lists requires specific permission from the publisher at Publisher@InformingScience.org
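The paper defines its exact non-linear "optical" error function in a later section. As a hedged illustration of the general idea — transforming the output-layer error non-linearly so that small errors still drive effective weight updates — one could amplify the error with a sign-preserving exponential. The specific function below is an assumption for illustration only, not the paper's formula.

```python
import numpy as np

def amplify_error(err):
    """Sign-preserving non-linear amplification of the output error.

    An assumed illustrative form: |err'| = exp(|err|) - 1, which grows
    faster than |err| itself, so the amplified error would replace the
    raw (desired - actual) term when computing the output-layer delta.
    """
    return np.sign(err) * (np.exp(np.abs(err)) - 1.0)
```

Because exp(|e|) - 1 > |e| for any non-zero e, the transformed error always has a larger magnitude than the raw error while keeping its sign, which is one way such a scheme could speed up weight adaptation without changing the direction of the update.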