Training neural networks by stochastic optimisation

A. Verikas*, A. Gelzinis

Centre for Imaging Science and Technologies, Halmstad University, Box 823, S 301 18 Halmstad, Sweden
Department of Applied Electronics, Kaunas University of Technology, Studentu 50, 3031 Kaunas, Lithuania

* Corresponding author. Centre for Imaging Science and Technologies, Halmstad University, Box 823, S 301 18 Halmstad, Sweden. Tel.: +46-35-167-140; fax: +46-35-216-724. E-mail address: av@cist.hh.se (A. Verikas)

Neurocomputing 30 (2000) 153-172

Received 13 October 1997; accepted 22 March 1999

Abstract

We present a stochastic learning algorithm for neural networks. The algorithm makes no assumptions about the transfer functions of individual neurons and does not depend on the functional form of the performance measure. It adapts the weights with a random step of varying size, where the average step size decreases during learning. Large steps enable the algorithm to jump over local maxima/minima, while small ones ensure convergence in a local area. We investigate the convergence properties of the proposed algorithm and test it on four supervised and unsupervised learning problems. The algorithm outperformed several known algorithms when tested on both generated and real data. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Stochastic optimisation; Neural networks; Simulated annealing

1. Introduction

Artificial neural networks have proved very useful in various applications, because they can represent complex classification or mapping functions and discover the representations using powerful learning algorithms. An optimal set of weights for defining the functions is learned by minimising an error

0925-2312/00/$ - see front matter © 2000 Elsevier Science B.V. All rights reserved.
PII: S0925-2312(99)00123-X
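The idea stated in the abstract — perturb the weights by random steps whose average size shrinks during learning — can be sketched as follows. This is a minimal, greedy illustrative variant, not the authors' exact algorithm: the function name `stochastic_search`, the Gaussian perturbation, the improvement-only acceptance rule, and the geometric decay schedule are all assumptions made for the sake of a runnable example.

```python
import random

def stochastic_search(loss, weights, n_steps=5000, step0=1.0, decay=0.999, seed=0):
    """Minimise `loss` by random weight perturbations whose average size
    decreases during learning (illustrative sketch, not the paper's method)."""
    rng = random.Random(seed)
    best = list(weights)
    best_loss = loss(best)
    step = step0
    for _ in range(n_steps):
        # Perturb every weight by a random amount at the current scale.
        # Early on, `step` is large, so the search can jump far from the
        # current point; later, small steps refine the solution locally.
        cand = [w + rng.gauss(0.0, step) for w in best]
        cand_loss = loss(cand)
        if cand_loss < best_loss:   # keep only improving moves (greedy variant)
            best, best_loss = cand, cand_loss
        step *= decay               # average step size decreases over time
    return best, best_loss

# Example: fit a single linear neuron y = a*x + b to data generated by a=2, b=1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

def mse(w):
    a, b = w
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w, e = stochastic_search(mse, [0.0, 0.0])
```

Note that the paper's algorithm also relies on large early steps to escape local minima; a purely greedy acceptance rule like the one above achieves this only insofar as a large jump happens to land in a better basin, so the actual algorithm may use a different acceptance criterion.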