Hybrid Heuristics for Optimal Design of Artificial Neural Networks

Ajith Abraham & Baikunth Nath
Gippsland School of Computing & Information Technology
Monash University, Churchill 3842, Australia
Email: {Ajith.Abraham, Baikunth.Nath}@infotech.monash.edu.au

ABSTRACT: Designing the architecture and selecting the correct parameters for the learning algorithm is a tedious task when modeling an optimal Artificial Neural Network (ANN): one that is smaller, faster, and has better generalization performance. In this paper we explain how a hybrid algorithm integrating a Genetic Algorithm (GA), Simulated Annealing (SA), and other heuristic procedures can be applied to the optimal design of an ANN. The paper is chiefly concerned with current theoretical developments in Evolutionary Artificial Neural Networks (EANNs) using GAs, and with how the proposed hybrid heuristic procedures can be combined to produce an optimal ANN. The proposed meta-heuristic can be regarded as a general framework for adaptive systems, that is, systems that can change their connection weights, architectures, and learning rules according to different environments without human intervention.

Keywords: Artificial neural networks, simulated annealing, genetic algorithm, evolutionary artificial neural network, genetic annealing.

1. Introduction

Conventional design of ANNs requires the user to specify the number of neurons, their distribution over several layers, and the interconnections between them. Several methods have been proposed to construct ANNs automatically and thereby reduce network complexity, that is, to determine the appropriate number of hidden units, layers, and so on. Topological optimization algorithms such as Extentron [7], Upstart [3], Pruning [18], and Cascade Correlation [8] have their own limitations.
Interest in evolutionary search procedures for designing ANN architectures has been growing in recent years, as such procedures can evolve towards an optimal architecture without outside interference, eliminating the tedious trial-and-error work of finding an optimal network manually [1]. GA and SA, two of the most general-purpose optimization procedures, are increasingly being applied independently across a diverse spectrum of problem areas. Theoretical investigators of SA and GA have long focused on developing a hybrid algorithm that combines the good properties and performance of both [2, 5]. In certain situations GA outperforms SA, and vice versa. GAs are not designed to be ergodic or to cover the search space in a maximally efficient way; their prime benefit is their capacity for parallelization. In contrast, SA is largely sequential in moving from one optimal value to the next. States must be sampled sequentially, for acceptability
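The sequential nature of SA noted above can be illustrated with a minimal sketch. The function below is a generic, illustrative implementation, not the hybrid algorithm proposed in this paper: the cost function, neighbour generator, geometric cooling schedule, and all parameter values are assumptions chosen for demonstration. Each candidate state is tested one at a time with the standard Metropolis acceptance criterion.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.95, steps=1000):
    """Minimise `cost` by sequentially sampling neighbouring states.

    Candidate states are tested for acceptability one after another:
    an improvement is always accepted, while a worse state is accepted
    with probability exp(-delta / temperature) (Metropolis criterion).
    """
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        candidate = neighbour(x)
        delta = cost(candidate) - cost(x)
        # Accept improvements outright; accept uphill moves stochastically.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling  # geometric cooling schedule (an assumed choice)
    return best, best_cost

# Toy usage: minimise (x - 3)^2 starting from x = 0.
sol, val = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    x0=0.0,
)
```

Because each acceptance test depends on the current state and temperature, the loop cannot be trivially parallelized; this is precisely the contrast with a GA, which evaluates a whole population of candidate networks independently in each generation.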