A Growth Algorithm for Neural Network Decision Trees

Mostefa Golea and Mario Marchand*
Department of Physics, University of Ottawa, 34 G. Glinski, Ottawa, Canada K1N-6N5

PACS. 02.70 - Computational techniques.
PACS. 87.10 - General, theoretical, and mathematical biophysics (inc. logic of biosystems, cybernetics and bionics).

Accepted by Europhysics Lett.; March 12, 1990

Abstract

This paper explores the application of neural-network principles to the construction of decision trees from examples. We consider the problem of constructing a tree of perceptrons able to execute a given but arbitrary Boolean function defined on N_i input bits. We apply a learning procedure that is sequential (from one tree level to the next) and parallel (for neurons in the same level), adding hidden units until the task at hand is performed. At each step, we use a perceptron-type algorithm over a suitably defined input space to minimise a classification error. The internal representations obtained in this way are linearly separable. Preliminary results of this algorithm are presented.

1 Introduction

Feed-forward layered neural networks [1, 2] are in principle able to learn any arbitrary mapping, provided that enough hidden units are present [3]. One way to improve the performance of a neural network is to match its topology as closely as possible to the specific task. However, the determination of the optimal number of hidden units and of the optimal net topology is still an open question. Moreover, it has been shown recently [4, 5] that the problem of deciding whether or not a given mapping can be performed by a given architecture is NP-complete. Contrary to the standard procedures like back-propagation [1],

* E-mail: MMMSJ@UOTTAWA.BITNET