Sleep Learning and Max-Min Aggregation of Evolving Connectionist Systems

Michael J. Watts
Information Technology Programme, Auckland Institute of Studies, Auckland, New Zealand
Email: mjwatts@ieee.org

Abstract—This paper describes two new algorithms for optimising the structure of trained Evolving Connectionist System (ECoS) artificial neural networks (ANN), and presents the results of preliminary empirical evaluations of the algorithms. While ECoS are fast and efficient constructive ANN algorithms, they can lose efficiency if they are allowed to grow too large. The algorithms presented in this paper reduce the size of a trained ECoS while retaining the knowledge that the ECoS has learned; that is, they remove redundant elements of the ECoS structure in such a way that the performance of the network is not reduced. The experimental evaluations showed that each algorithm is capable of achieving this to a different degree over different data sets. Optimising the parameters of one of the algorithms using an evolutionary algorithm yielded better results. While the work reported in this paper is preliminary, the results are promising and the algorithms have the potential to enhance the usefulness of ECoS ANN.

I. INTRODUCTION

Evolving Connectionist Systems (ECoS) are a family of constructive neural networks [1]. They are based on the following principles, as stated in [2]:

1) ECoS learn fast from a large amount of data through one-pass training;
2) ECoS adapt in an on-line mode, where new data is incrementally accommodated;
3) ECoS memorise data exemplars for further refinement, or for information retrieval;
4) ECoS learn and improve through active interaction with other systems and with the environment in a multi-modular, hierarchical fashion.

The simplest of the ECoS algorithms is the Simple Evolving Connectionist System (SECoS) [3], [4]. This consists of three layers of neurons: the input layer, the evolving layer, and the output layer.
Neurons are added to the evolving layer during learning, and the activation of each evolving layer neuron is based on the distance between the current input vector and that neuron's incoming weight vector. SECoS have been applied to a variety of problems, including phoneme recognition [5], [6], computer network security [7] and predicting outbreaks of insect pests [8], [9]. While SECoS are likely not capable of solving the kind of complex problems that deep-learning convolutional neural networks are, they do have two key advantages:

1) they are fast to train;
2) they are computationally efficient in recall.

These make SECoS useful for applications in resource-constrained environments. However, the advantage of computational efficiency is lost if the evolving layer is allowed to grow too large. This can happen when a large number of training examples have been presented to the SECoS. Due to the local learning employed by the training algorithm, it is possible that many of the evolving layer neurons are actually redundant [10]; that is, the information represented by a neuron is, or could be, represented by another neuron.

Previous work has described different approaches to dealing with this problem. One way is to optimise the training parameters via an evolutionary algorithm (EA) [11]–[14]. This has the disadvantages of firstly requiring training and testing data sets with which to gauge the performance of the trained ECoS, and secondly of being computationally intensive due to the number of evaluations required by an EA. Another approach is neuron aggregation [4], which involves finding groups of evolving layer neurons that are close together, then combining each group into a single neuron. This has the advantages of not requiring any external data sets and of being computationally efficient.
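The distance-based activation of the evolving layer described above can be sketched as follows. This is a minimal illustration, assuming a normalised Manhattan distance and the activation function A = 1 − d; these are assumptions for exposition, not necessarily the exact formulation used in [3], [4].

```python
import numpy as np

def evolving_layer_activation(input_vec, weights):
    """Activation of each evolving-layer neuron, based on the distance
    between the input vector and the neuron's incoming weight vector.
    Assumes inputs normalised to [0, 1], normalised Manhattan distance,
    and activation = 1 - distance (illustrative choices)."""
    # weights: (n_neurons, n_inputs), one row per evolving-layer neuron
    dist = np.abs(weights - input_vec).sum(axis=1) / weights.shape[1]
    return 1.0 - dist

# Two evolving-layer neurons; the input lies close to the first one
w = np.array([[0.0, 0.0],
              [1.0, 1.0]])
x = np.array([0.1, 0.0])
acts = evolving_layer_activation(x, w)
# The neuron whose incoming weight vector is closest to x activates most strongly
```

Under this scheme an identical input and weight vector gives an activation of 1, and maximally distant vectors give 0, which is what makes redundancy detectable: two neurons with near-identical incoming weight vectors will respond almost identically to every input.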
This paper introduces two alternative methods for optimising trained SECoS networks: one based on the principle of sleep learning, and the other on data clustering of neurons. Both methods are computationally efficient and require no external data sets.

II. SECOS LEARNING

The ECoS learning algorithm accommodates new training examples within the evolving layer, either by modifying the weight values of the connections attached to the evolving layer neurons, or by adding a new neuron to that layer. The algorithm employed is described in Figure 1. When a neuron is added, its incoming connection weight vector is set to the input vector I, and its outgoing weight vector is set to the desired output vector O_d.
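The accommodation step just described can be sketched as follows. The sensitivity and error thresholds, the winner-take-all choice of neuron, and the linear weight-update rule are illustrative assumptions standing in for the details of Figure 1; only the core behaviour (update the closest neuron, or add a new neuron whose incoming weights are I and outgoing weights are O_d) is taken from the text.

```python
import numpy as np

class SimpleSECoS:
    """Minimal sketch of the SECoS accommodation step. Threshold values
    and the update rule are illustrative assumptions, not the exact
    parameters of the published algorithm."""

    def __init__(self, sensitivity=0.5, error_threshold=0.1, lr=0.5):
        self.sensitivity = sensitivity          # minimum activation to reuse a neuron
        self.error_threshold = error_threshold  # maximum tolerated output error
        self.lr = lr                            # learning rate for weight updates
        self.W_in = None                        # incoming weights, one row per neuron
        self.W_out = None                       # outgoing weights, one row per neuron

    def _activations(self, x):
        # Distance-based activation (normalised Manhattan distance, assumed)
        dist = np.abs(self.W_in - x).sum(axis=1) / self.W_in.shape[1]
        return 1.0 - dist

    def accommodate(self, x, desired):
        if self.W_in is None:
            # First example: add a neuron with incoming weights I, outgoing O_d
            self.W_in = x[np.newaxis, :].copy()
            self.W_out = desired[np.newaxis, :].copy()
            return
        acts = self._activations(x)
        winner = int(np.argmax(acts))
        error = np.abs(desired - self.W_out[winner]).max()
        if acts[winner] < self.sensitivity or error > self.error_threshold:
            # No neuron accommodates this example well enough: add one
            self.W_in = np.vstack([self.W_in, x])
            self.W_out = np.vstack([self.W_out, desired])
        else:
            # Otherwise pull the winning neuron's weights towards the example
            self.W_in[winner] += self.lr * (x - self.W_in[winner])
            self.W_out[winner] += self.lr * (desired - self.W_out[winner])

m = SimpleSECoS()
m.accommodate(np.array([0.0, 0.0]), np.array([1.0, 0.0]))   # first example: neuron added
m.accommodate(np.array([1.0, 1.0]), np.array([0.0, 1.0]))   # distant example: neuron added
m.accommodate(np.array([0.05, 0.0]), np.array([1.0, 0.0]))  # close example: weights updated
```

This sketch also makes the redundancy problem concrete: every sufficiently novel example adds a neuron, so a long training stream with many near-duplicate but not identical examples can inflate the evolving layer well beyond what is needed.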