Symbolic Integration of a Neural Classifier

G. Tascini, G. Vallesi, A. Montesanto and A.F. Dragoni
DEIT - Università Politecnica delle Marche
g.tascini@univpm.it

Abstract. This article describes a method for integrating sub-symbolic classification, performed by neural networks, with symbolic rules. The aim of this work is to extract the implicit knowledge embedded in neural networks by building a hybrid system whose symbolic rules are obtained by clustering synaptic weights. This allows us to develop a diagnostic system that combines the plasticity of neural networks with the comprehensibility of symbolic information. The methodology used in this article is based on the ability of the Towell and Shavlik method to extract symbolic rules from a multilayer perceptron.

1 Introduction

Many attempts have been made to extract symbolic IF-THEN rules from connectionist systems. Gallant's connectionist expert systems [1] and the Matrix Controlled Inference Engine (MACIE) [2] are two early models in which expert system rules are extracted from a neural network. Many other rule extraction techniques followed, mostly applied to extracting rules from MLPs ([3]; [4]; [5]), with a smaller number applied to Kohonen networks, recurrent networks and radial basis function networks. The approaches to rule extraction can be categorised as either decompositional or pedagogical. Decompositional approaches analyse weights and links to extract rules; some require specialised weight modification algorithms [6] or network architectures such as an extra hidden layer of units with staircase activation functions [7]. Pedagogical approaches treat the network as a black box and extract rules by observing the relationship between its inputs and outputs. Some of these methods lead to the extraction of rules that involve the calculation of fuzzy membership grades.
A typical approach to rule extraction [8] uses an algorithm based on exhaustive search to extract conjunctive rules from MLPs. To find rules, the learner first searches for all the combinations of positive conditions that can lead to a conclusion; it then searches for the negative conditions that must be added to guarantee the conclusion. In the case of three-layered networks, the learner can extract two separate sets of rules (one for each layer) and then integrate them by substitution. An alternative form of rule is the MofN rule: IF M of the following N conditions, a_1, a_2, …, a_N, are true, THEN the conclusion b is true. It is argued [9] that some concepts are better expressed in this form, and such rules also help avoid the combinatorial explosion in tree size found with IF-THEN rules. To extract such rules, a three-step procedure is used: first grouping similarly weighted links, then eliminating insignificant groups, and finally forming rules from the remaining groups through an exhaustive search. The following steps are performed:

1. For each output node, form groups of similarly-weighted links;
2. Set the link weights of all group members to the average of the group;
3. Eliminate any group that does not significantly affect the output value;
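The grouping steps enumerated so far can be sketched in Python as follows. This is a minimal illustration rather than the original implementation: the clustering tolerance `tol` and the significance cutoff `cutoff` are assumed values chosen for the example, not parameters prescribed by [9].

```python
# Sketch of the weight-grouping steps, applied to the incoming weight
# vector of a single output node.  The tolerance and cutoff values are
# illustrative assumptions, not values from the original method.

def group_similar_weights(weights, tol=0.25):
    """Step 1: group links whose weights lie within `tol` of each other."""
    order = sorted(range(len(weights)), key=lambda i: weights[i])
    groups, current = [], [order[0]]
    for i in order[1:]:
        if abs(weights[i] - weights[current[-1]]) < tol:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

def average_groups(weights, groups):
    """Step 2: set each member's weight to its group's average."""
    averaged = list(weights)
    for g in groups:
        mean = sum(weights[i] for i in g) / len(g)
        for i in g:
            averaged[i] = mean
    return averaged

def prune_groups(weights, groups, cutoff=0.5):
    """Step 3: drop groups whose total absolute weight is too small
    to significantly affect the output activation."""
    return [g for g in groups if abs(sum(weights[i] for i in g)) >= cutoff]

# Example: six links into one output node.
weights = [0.9, 1.0, 0.95, -0.1, 0.05, 1.8]
groups = group_similar_weights(weights)   # [[3, 4], [0, 2, 1], [5]]
averaged = average_groups(weights, groups)
kept = prune_groups(averaged, groups)     # the near-zero group is dropped
```

The example groups the three links near 0.95 together, averages them, and discards the two links whose combined weight is too small to move the output.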