An Energy Backpropagation Algorithm

Ahmad Hashim Hussein Aal-Yhia, and Ahmad Sharieh, Member, IAENG

Abstract—This paper presents an energy backpropagation algorithm (EBP). In the EBP, the learning and convergence processes of the standard backpropagation algorithm (SBP) are based on an energy function. The energy function is used in the convergence process to extract the stored image nearest to an unknown tested image. The EBP algorithm shows considerably better performance than the SBP algorithm in terms of learning time, convergence time, and the size of the input image.

Index Terms—Artificial neural networks, backpropagation algorithm, energy function, pattern recognition.

I. INTRODUCTION

Artificial neural networks have been successfully applied to problems in pattern classification, function approximation, optimization, pattern matching, and associative memories [12]. One of the most popular neural networks is the layered feedforward neural network with a backpropagation (BP) least-mean-square learning algorithm [13]. Multilayer feedforward networks are trained using the backpropagation learning algorithm [14]. The network edges connect the processing units, called neurons. Each neuron input has an associated weight representing its relative importance in the set of the neuron's inputs. The input values to each neuron are accumulated through the net function to yield the net value: the net value is a weighted linear combination of the neuron's input values, net_j = sum_i (w_ij * x_i) [15].

A backpropagation net can be used to solve problems in many areas [5]. However, the backpropagation algorithm suffers from slow convergence [17] and lengthy training cycles [8]. To overcome these drawbacks of the standard backpropagation (SBP) algorithm, the energy backpropagation (EBP) algorithm is proposed in this research. The EBP algorithm adopts the following principles: (1) performing the learning and convergence processes on parts of the image rather than the whole image, (2) using a small network, which reduces the size of the weight matrices produced by the learning process, and (3) using an energy function based on the Hopfield neural network, which helps the network converge to the correct image with high efficiency. Thus, the EBP algorithm is efficient and accurate.

Manuscript received April 2, 2007. Ahmad Hashim Hussein Aal-Yhia is with the Post-Graduate Institute for Accounting and Financial Studies, University of Baghdad, Baghdad, Iraq (corresponding author; phone: 00964-1-7780170; fax: 00964-1-7780306; e-mail: fingerprint192003@yahoo.com). Dr. Ahmad Sharieh is dean of the King Abdullah II School for Information Technology, University of Jordan, Amman, Jordan (e-mail: sharieh@ju.edu.jo).

Fig. 1. Backpropagation neural network with one hidden layer [5].

A. Standard backpropagation algorithm

The feedforward backpropagation (FFBP) network is a very popular model in neural networks. It does not have feedback connections, but errors are backpropagated during training, and the least mean squared (LMS) error is used. Many applications can be formulated in terms of the FFBP network, and the methodology has served as a model for most multilayer neural networks. Errors in the output determine measures of hidden-layer output errors, which are used as a basis for adjusting the connection weights between the input and hidden layers. Adjusting the two sets of weights between the pairs of layers and recalculating the outputs is an iterative process that is carried on until the errors fall below a tolerance level. Learning rate parameters scale the adjustments to the weights, and a momentum parameter can also be used to scale the adjustments from a previous iteration and add them to the adjustments in the current iteration [16].
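As a concrete illustration of the iterative adjustment just described, the following Python sketch performs one SBP training step on a tiny two-layer network. It is a minimal sketch under stated assumptions, not the authors' implementation: the layer sizes, the training pattern, and the values of the learning rate eta and momentum alpha are illustrative.

    # One SBP training iteration with learning rate and momentum
    # (illustrative sketch; shapes and parameter values are assumptions).
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 4, 3, 2          # assumed layer sizes
    eta, alpha = 0.5, 0.9                    # learning rate, momentum

    # Weights (with a bias row each) start as small random values.
    V = rng.uniform(-0.5, 0.5, (n_in + 1, n_hidden))   # input -> hidden
    W = rng.uniform(-0.5, 0.5, (n_hidden + 1, n_out))  # hidden -> output
    dV_prev = np.zeros_like(V)
    dW_prev = np.zeros_like(W)

    x = rng.random(n_in)                     # one training pattern (assumed)
    t = np.array([0.0, 1.0])                 # its desired output (assumed)

    # Feedforward phase: the bias unit's output is always 1.
    z = sigmoid(np.append(x, 1.0) @ V)       # hidden activations
    y = sigmoid(np.append(z, 1.0) @ W)       # output activations

    # Backpropagation phase: output error terms, then hidden error terms.
    delta_out = (t - y) * y * (1.0 - y)
    delta_hid = (W[:-1] @ delta_out) * z * (1.0 - z)

    # Adjustments: learning-rate-scaled gradient plus a momentum term
    # that reuses the previous iteration's adjustment.
    dW = eta * np.outer(np.append(z, 1.0), delta_out) + alpha * dW_prev
    dV = eta * np.outer(np.append(x, 1.0), delta_hid) + alpha * dV_prev
    W += dW
    V += dV
    dW_prev, dV_prev = dW, dV

Repeating this step over all training patterns until the mean square error falls below a tolerance level yields the full training loop described above.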
B. Architecture

A multilayer neural network with one layer of hidden units (the Z units) is shown in Fig. 1. The output units (the Y units) and the hidden units may also have biases, as shown in Fig. 1. The bias on a typical output unit Y_k is denoted by w_0k; the bias on a typical hidden unit Z_j is denoted by v_0j. These bias terms act like weights on connections from units whose output is always 1. Only the direction of information flow for the feedforward phase of operation is shown; during the backpropagation phase of learning, signals are sent in the reverse direction [5].

C. Training Algorithm

The backpropagation training algorithm is an iterative gradient algorithm designed to minimize the mean square error (MSE) between the actual output of a multilayer feedforward perceptron and the desired output. It requires continuous, differentiable nonlinearities. The following assumes a sigmoid logistic nonlinearity [10].

Step 1. Initialize weights and offsets. Set all weights and node offsets to small random values.
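A minimal Python sketch of Step 1 follows, using the bias notation of Section B (v_0j, w_0k). The interval [-0.5, 0.5] for the small random values, the layer sizes, and the helper names are assumptions for illustration only.

    # Step 1 sketch: small random weights and offsets, plus the sigmoid
    # logistic nonlinearity and the MSE the algorithm minimizes
    # (illustrative; the +/-0.5 range and layer sizes are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 4, 3, 2

    V  = rng.uniform(-0.5, 0.5, (n_in, n_hidden))   # input -> hidden weights
    v0 = rng.uniform(-0.5, 0.5, n_hidden)           # hidden offsets v_0j
    W  = rng.uniform(-0.5, 0.5, (n_hidden, n_out))  # hidden -> output weights
    w0 = rng.uniform(-0.5, 0.5, n_out)              # output offsets w_0k

    def sigmoid(net):
        # Continuous, differentiable logistic nonlinearity.
        return 1.0 / (1.0 + np.exp(-net))

    def mse(actual, desired):
        # Mean square error between actual and desired outputs.
        return np.mean((desired - actual) ** 2)

    # Feedforward pass with these weights: each net value is the weighted
    # linear combination of the inputs plus the node's offset.
    x = rng.random(n_in)
    z = sigmoid(x @ V + v0)
    y = sigmoid(z @ W + w0)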