ARTICLE IN PRESS — UNCORRECTED PROOF
Information Processing Letters (2015), http://dx.doi.org/10.1016/j.ipl.2015.08.001
Contents lists available at ScienceDirect: www.elsevier.com/locate/ipl

A solution representation of genetic algorithm for neural network weights and structure

Najmeh Sadat Jaddi, Salwani Abdullah, Abdul Razak Hamdan
Data Mining and Optimization Research Group (DMO), Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, National University of Malaysia, Malaysia

Article info
Article history: Received 20 July 2014; Received in revised form 17 July 2015; Accepted 6 August 2015; Available online xxxx. Communicated by S.M. Yiu.
Keywords: Artificial neural network training; Optimization of weights and structure; Genetic algorithm; Time series prediction

Abstract
This paper presents a new solution representation for a genetic algorithm to optimize neural network models. During the optimization process, the weights, biases and structure of the neural network are subject to alteration. The quality of the model is evaluated by a cost function that accounts for both minimization of error and the complexity of the neural network model. The performance of the proposed method is investigated by applying it to two time series prediction problems.
The results are promising when compared with other methods in the literature. © 2015 Published by Elsevier B.V.

E-mail address: najmehjaddi@gmail.com (N.S. Jaddi).

1. Introduction

Over the past years, the demand for Artificial Neural Network (ANN) training in many areas has been growing [1,4,5,14]. The main reason is the nonlinearity of the artificial neural network. Selecting the proper weights, the number of layers and the number of nodes in each layer is the most challenging issue in ANN models. The number of layers and nodes affects the complexity of the ANN model and therefore increases the difficulty of the training process. An economical ANN is thus required: a very small network may be unable to characterize the real state due to its limited capacity, while a very large network, besides being complex to process, may fit noise in the training data and therefore fail to exhibit its superior capability [7].

Sexton et al. [12] applied tabu search to ANN training. Later, Sexton et al. [13] used simulated annealing and a genetic algorithm (GA) for the same problem. A hill climbing algorithm was used for training the neural network in [2]. A hybrid Taguchi-genetic algorithm was employed in [6]. The most common learning algorithm is back-propagation; however, it leads to noisy fitness evaluation, which is the main disadvantage of this technique [1]. In recent years, researchers have been interested in merging ANNs with other search algorithms to obtain superior-performing neural networks [6,8–10]. Ludermir et al. [7] applied a hybridization of simulated annealing and tabu search to optimize the weights and connections of the neural network. Subsequently, they extended their studies by applying a hybridization of simulated annealing, tabu search and a genetic algorithm [15].
In this paper, a genetic algorithm based dynamic neural network (GADNN) is proposed to control both the performance and the complexity of the neural network during the training process. This method provides the opportunity to explore different weights, biases, numbers of hidden layers, numbers of nodes and selected inputs during the search process. It therefore has the chance to find an effective model with lower prediction error and lower complexity.
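To make the idea concrete, the following is a minimal sketch of such a solution representation: one GA individual encoding the structure genes (hidden layer sizes and an input-selection mask) together with a weight vector, evaluated by a cost function that trades off prediction error against model complexity. The bounds `MAX_HIDDEN_LAYERS`, `MAX_NODES`, `N_INPUTS` and the penalty weight `ALPHA` are illustrative assumptions, not values taken from the paper.

```python
import random

# Illustrative bounds on the search space (assumed, not from the paper).
MAX_HIDDEN_LAYERS = 3   # upper bound on the number of hidden layers
MAX_NODES = 8           # upper bound on nodes per hidden layer
N_INPUTS = 4            # number of candidate input features
ALPHA = 0.01            # assumed trade-off weight between error and complexity


def random_individual(rng):
    """One GA individual: structure genes plus a weight vector sized for
    the maximal architecture, so structure mutations can reuse weights."""
    n_layers = rng.randint(1, MAX_HIDDEN_LAYERS)
    layers = [rng.randint(1, MAX_NODES) for _ in range(n_layers)]
    mask = [rng.random() < 0.5 for _ in range(N_INPUTS)]
    if not any(mask):
        mask[0] = True  # keep at least one input selected
    # Weight count for the largest possible network, input -> hidden -> 1 output;
    # the +1 per layer accounts for a bias term.
    sizes = [N_INPUTS] + [MAX_NODES] * MAX_HIDDEN_LAYERS + [1]
    n_weights = sum((a + 1) * b for a, b in zip(sizes, sizes[1:]))
    weights = [rng.uniform(-1.0, 1.0) for _ in range(n_weights)]
    return {"layers": layers, "mask": mask, "weights": weights}


def complexity(ind):
    """Model complexity: total active hidden nodes plus selected inputs."""
    return sum(ind["layers"]) + sum(ind["mask"])


def cost(ind, prediction_error):
    """Cost balancing error minimization against network complexity."""
    return prediction_error + ALPHA * complexity(ind)


def mutate_structure(ind, rng):
    """A structure mutation: resize one hidden layer within the bounds."""
    i = rng.randrange(len(ind["layers"]))
    ind["layers"][i] = rng.randint(1, MAX_NODES)
    return ind
```

In this sketch, allocating weights for the maximal architecture means that enlarging or shrinking a layer does not invalidate the weight genes; a full implementation would also need crossover and a forward pass to obtain the prediction error.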