Abstract—In this paper, a modification of the Levenberg-Marquardt algorithm for MLP neural network learning is proposed. The proposed algorithm has good convergence and reduces the amount of oscillation in the learning procedure. An example is given to show the usefulness of the method, and a simulation verifies its results.

Keywords—Levenberg-Marquardt, modification, neural network, variable learning rate.

I. INTRODUCTION

The Error Back Propagation (EBP) algorithm [1]–[4] has been a significant improvement in neural network research, but it has a weak convergence rate. Many efforts have been made to speed up the EBP algorithm [5]–[9], yet all of these methods yield only modestly acceptable results. The Levenberg-Marquardt (LM) algorithm [4], [10]–[13] ensued from the development of EBP-based methods. It gives a good exchange between the speed of the Newton algorithm and the stability of the steepest descent method [11], which are the two foundations of the LM algorithm. An attempt has been made to speed up the LM algorithm with a modified performance index and gradient computation [14], although it is unable to reduce error oscillation. Another effort used a variable decay rate to reduce error oscillation [15], but the resulting algorithm was slow compared with the standard LM algorithm. In this paper, a modification of the learning parameter is made that decreases both the number of learning iterations and the oscillation: varying the learning parameter speeds up the LM algorithm while also damping the error oscillation. Section II reviews the LM algorithm. In Section III, the proposed modification of the learning parameter is introduced. In Section IV, a simulation is discussed.

II. THE LEVENBERG-MARQUARDT METHOD REVIEW

Amir Abolfazl Suratgar is an Assistant Professor in the Electrical Engineering Department, University of Arak, Arak, Iran
(phone: +98-861-22 25 946; fax: +98-861-22 25 946; e-mail: a-surtagar@araku.ac.ir). Mohammad Bagher Tavakoli is an M.Sc. student in the Electrical Engineering Department, Azad University, Arak, Iran (e-mail: m-tavakoli@iau-arak.ac.ir). Abbas Hoseinabadi is an M.Sc. student in the Electrical Engineering Department, Azad University, Arak, Iran (e-mail: a-hoseinabadi@iau-arak.ac.ir).

In the EBP algorithm, the performance index F(w) to be minimized is defined as the sum of squared errors between the target outputs and the network's simulated outputs, namely:

F(w) = e^T e    (1)

where w = [w_1, w_2, ..., w_N] consists of all weights of the network and e is the error vector comprising the errors for all the training examples. When training with the LM method, the increment of the weights, Δw, can be obtained as follows:

Δw = [J^T J + μI]^(-1) J^T e    (2)

where J is the Jacobian matrix and μ is the learning rate, which is updated using the decay rate β depending on the outcome of each step. In particular, μ is multiplied by β (0 < β < 1) whenever F(w) decreases, whereas μ is divided by β whenever F(w) increases in a new step. The standard LM training process can be illustrated by the following pseudo-code:

1. Initialize the weights and the parameter μ (μ = 0.01 is appropriate).
2. Compute the sum of squared errors over all inputs, F(w).
3. Solve (2) to obtain the weight increment Δw.
4. Recompute the sum of squared errors F(w) using w + Δw as the trial w, and judge
   IF trial F(w) < F(w) of step 2 THEN
      w = w + Δw
      μ = μ·β (β = 0.1)
      go back to step 2
   ELSE
      μ = μ/β
      go back to step 3 (re-solve (2) with the larger μ)
   END IF
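The standard LM loop described by the pseudo-code above can be sketched in Python. This is a minimal, generic implementation, not the authors' MLP training code: for brevity it fits a two-parameter curve rather than a network, uses a finite-difference Jacobian, and all function names and the toy data are illustrative assumptions. With the residual defined as target minus model output, the accepted step is w − Δw, which absorbs the sign convention of (2).

```python
import numpy as np

def numerical_jacobian(residual, w, eps=1e-6):
    """Finite-difference Jacobian J[i, j] = d e_i / d w_j."""
    e0 = residual(w)
    J = np.zeros((e0.size, w.size))
    for j in range(w.size):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (residual(wp) - e0) / eps
    return J

def levenberg_marquardt(residual, w0, mu=0.01, beta=0.1,
                        max_iter=200, tol=1e-12, mu_max=1e12):
    """The loop from the pseudo-code: solve (J^T J + mu*I) dw = J^T e
    and keep the step only if the squared error F(w) = e^T e drops."""
    w = w0.astype(float).copy()
    for _ in range(max_iter):
        e = residual(w)
        F = e @ e
        if F < tol:
            break
        J = numerical_jacobian(residual, w)
        # inner loop: grow mu until a step is accepted (or give up)
        while mu < mu_max:
            dw = np.linalg.solve(J.T @ J + mu * np.eye(w.size), J.T @ e)
            # with e = target - model, the descent step is w - dw
            e_trial = residual(w - dw)
            if e_trial @ e_trial < F:
                w = w - dw
                mu *= beta        # success: relax toward Gauss-Newton
                break
            mu /= beta            # failure: move toward gradient descent
        else:
            break                 # mu exploded without improvement: stop
    return w

# toy usage (illustrative data): fit y = a * exp(b * x)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
residual = lambda w: y - w[0] * np.exp(w[1] * x)
w_fit = levenberg_marquardt(residual, np.array([1.0, 0.0]))
```

The inner while loop is what "go back and re-solve (2) with the larger μ" means in practice: the Jacobian is reused, only the damping changes until a downhill step is found.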
[Amir Abolfazl Suratgar, Mohammad Bagher Tavakoli, and Abbas Hoseinabadi, "Modified Levenberg-Marquardt Method for Neural Networks Training," World Academy of Science, Engineering and Technology, International Journal of Computer and Information Engineering, Vol. 1, No. 6, 2007.]

III. MODIFICATION OF THE LM METHOD

Considering the performance index F(w) = e^T e and using the Newton method, we have:
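The Newton step that Section III starts from can be written out explicitly. The sketch below is a standard reconstruction (not the paper's own continuation, which is truncated here) of how the Newton update relates to the LM update (2) for the performance index (1); the sign convention for e (target minus output) absorbs the minus sign of the Newton step.

```latex
% Newton update for minimizing F(w) = e^T e
\Delta w = \left[\nabla^2 F(w)\right]^{-1} \nabla F(w),
\qquad
\nabla F(w) = 2 J^T e,
\qquad
\nabla^2 F(w) = 2 J^T J + 2 \sum_i e_i \, \nabla^2 e_i .
% Dropping the second-order term gives the Gauss-Newton update;
% adding the damping term \mu I then yields the LM update (2):
\Delta w = \left[J^T J + \mu I\right]^{-1} J^T e .
```

For small μ this behaves like Gauss-Newton (fast near a minimum); for large μ it approaches a small steepest-descent step, which is the speed/stability exchange mentioned in the introduction.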