DOI 10.1007/s11063-005-3094-9
Neural Processing Letters (2005) 22:163–169
© Springer 2005

A Method of Accelerating Neural Network Learning

SOTIR SOTIROV
Department of Computer Technologies, University "Prof. D-R Asen Zlatarov", bul "Yakimov" 1, Bourgas 8010, Bulgaria. e-mail: ssotirov@btu.bg

Abstract. The article presents a method of accelerating neural network learning with the Back Propagation algorithm and one of its fastest modifications, the Levenberg–Marquardt method. Learning is accelerated by introducing a 'single-direction' coefficient for the change of x used in calculating its new values (the number of iterations is decreased by approximately 30%). Simulation results of training neural networks both by the classic method and by the accelerated procedure are presented.

Key words. Back Propagation, Levenberg–Marquardt, method of learning, neural networks

1. Introduction

There are many variants of supervised learning methods for neural networks [1, 2]. The majority of them employ a feed-forward neural network, and the learning method used is Back Propagation [3, 4] with a finite number of cycles. The algorithm adjusts the network parameters so as to produce the least mean square error [5, 6]. The performance index for the algorithm is

F(x) = E[e^2] = E[(t - a)^2], (1)

where t is the target and a is the value of the output of the neural network. The expectation of the square error is replaced by the square error at iteration k:

\hat{F}(x) = (t(k) - a(k))^T (t(k) - a(k)) = e^T(k) \, e(k). (2)

The weight coefficients and biases are calculated by the formula

x_{k+1} = x_k - \alpha \frac{\partial \hat{F}}{\partial x}, (3)

where the parameter vector is

x^T = [x_1 \; x_2 \; \ldots \; x_n] = [w^1_{1,1} \; w^1_{1,2} \; \ldots \; w^1_{S^1,R} \; b^1_1 \; \ldots \; b^1_{S^1} \; w^2_{1,1} \; \ldots \; b^M_{S^M}],

W holds the weight coefficients and b the biases, the neural network has M layers with S^m neurons in layer m and R inputs, and \alpha is the learning-rate coefficient.
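The iteration square error of equation (2) can be sketched directly in code. This is a minimal illustration, not taken from the paper; the function name and the example target/output vectors are assumptions.

```python
# Sketch of the iteration square error F_hat(x) = e(k)^T e(k) from
# equation (2); variable names are illustrative, not from the paper.
import numpy as np

def iteration_square_error(t, a):
    """Square error at iteration k, with error vector e(k) = t(k) - a(k)."""
    e = t - a              # e(k) = t(k) - a(k)
    return float(e.T @ e)  # scalar F_hat(x) = e^T e

# Usage: a two-output network with target t and actual output a.
t = np.array([1.0, 0.0])
a = np.array([0.8, 0.3])
print(iteration_square_error(t, a))  # 0.2^2 + (-0.3)^2 = 0.13
```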
When

\Delta x = \alpha \frac{\partial \hat{F}}{\partial x},

equation (3) can be written as

x_{k+1} = x_k - \Delta x. (4)
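The update rule of equations (3)-(4) amounts to one steepest-descent step per iteration. Below is a minimal sketch for a single linear neuron a = w^T p + b; the neuron model, the gradient expression, and all names are illustrative assumptions, not the paper's accelerated method.

```python
# Sketch of the update x_{k+1} = x_k - delta_x, with
# delta_x = alpha * dF_hat/dx (equations (3)-(4)).
# Assumes a single linear neuron a = w^T p + b; names are illustrative.
import numpy as np

def descent_step(x, grad, alpha):
    """One iteration of equation (4): subtract delta_x = alpha * grad."""
    delta_x = alpha * grad
    return x - delta_x

p = np.array([1.0, 2.0])           # input vector
t = 1.0                            # target
x = np.zeros(3)                    # parameters [w1, w2, b]
a = x[:2] @ p + x[2]               # network output
e = t - a                          # error e(k)
grad = -2.0 * e * np.append(p, 1)  # dF_hat/dx for F_hat = e^2
x = descent_step(x, grad, alpha=0.1)
a_new = x[:2] @ p + x[2]
print((t - a_new) ** 2 < e ** 2)   # True: the square error shrinks
```

With alpha = 0.1 the square error drops from 1.0 to 0.04 in a single step; a larger alpha can overshoot, which motivates the step-size considerations behind the acceleration the paper proposes.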