A Constructive Technique Based on Linear Programming for Training Switching Neural Networks

Enrico Ferrari and Marco Muselli

Institute of Electronics, Computer and Telecommunication Engineering
Italian National Research Council
via De Marini, 6 - 16149 Genoa, Italy
{ferrari,muselli}@ieiit.cnr.it

Abstract. A general constructive approach for training neural networks in classification problems is presented. This approach is used to construct a particular connectionist model, named Switching Neural Network (SNN), based on the conversion of the original problem into a Boolean lattice domain. The training of an SNN can be performed through a constructive algorithm, called Switch Programming (SP), based on the solution of a proper linear programming problem. Simulation results obtained on the StatLog benchmark show the good quality of the SNNs trained with SP.

Keywords: Switching Neural Network, constructive technique, positive Boolean function, Switch Programming.

1 Introduction

Backpropagation algorithms [1] have achieved excellent results in the training of neural networks for classification. In general, any technique for the solution of classification problems consists of two steps: first, a class Γ of functions is selected (model definition); then, the best classifier g ∈ Γ is retrieved (training phase). The choice of Γ must take two considerations into account. If the set Γ is too large, it is likely to incur the problem of overfitting: the optimal classifier g ∈ Γ behaves well on the examples of the training set, but scores a high number of misclassifications on the other points of the input domain. On the contrary, the choice of a small set Γ prevents the retrieval of a function with a sufficient level of accuracy on the training set. In a multilayer perceptron the complexity of the set Γ depends on some topological properties of the network, such as the number of hidden layers and neurons.
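The two-step scheme above can be sketched on a toy problem. In this hedged illustration (not taken from the paper), the class Γ is a family of one-dimensional threshold classifiers, and the training phase simply selects the member of Γ with the fewest misclassifications on the training set; all function and variable names are hypothetical.

```python
# Illustrative sketch of the generic two-step classification scheme:
# (1) model definition: fix a class Gamma of candidate functions;
# (2) training phase: retrieve the g in Gamma that best fits the examples.
# The threshold-classifier family used here is a toy choice, not the SNN model.

def make_gamma(thresholds):
    """Model definition: Gamma is a small set of threshold classifiers."""
    return [lambda x, t=t: 1 if x >= t else 0 for t in thresholds]

def train(gamma, examples):
    """Training phase: pick the g in Gamma with fewest training errors."""
    def errors(g):
        return sum(1 for x, y in examples if g(x) != y)
    return min(gamma, key=errors)

training_set = [(0.1, 0), (0.3, 0), (0.6, 1), (0.9, 1)]
gamma = make_gamma([0.0, 0.25, 0.5, 0.75, 1.0])
g = train(gamma, training_set)
print(g(0.2), g(0.8))  # a well-chosen g separates the two classes: 0 1
```

Enlarging the list of thresholds enlarges Γ and lowers the achievable training error, mirroring the overfitting trade-off discussed above: a richer class fits the training set better but need not generalize better.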
The drawback of this approach is that the architecture of the neural network must be chosen by the user before the training phase, often without any prior information. In order to avoid the drawbacks related to backpropagation algorithms, two different approaches can be introduced: pruning methods and constructive techniques [2]. Pruning methods consider an initial trained neural network with a

V. Kurkova, R. Neruda, and J. Koutnik (Eds.): ICANN 2008, Part II, LNCS 5164, pp. 744–753, 2008.
© Springer-Verlag Berlin Heidelberg 2008