ELSEVIER Physica A 242 (1997) 27-37

A multi-interacting perceptron model with continuous outputs

R.M.C. de Almeida*, E. Botelho
Instituto de Fisica, Universidade Federal do Rio Grande do Sul, Caixa Postal 15051, 91501-970 Porto Alegre, RS, Brazil

Received 28 January 1997

Abstract

We consider learning and generalization of real functions by a multi-interacting feed-forward network model with continuous outputs and invertible transfer functions. The expansion in different multi-interacting orders provides a classification for the functions to be learnt and suggests the learning rules, which reduce to the Hebb learning rule only for the second-order, linear perceptron. The over-sophistication problem is straightforwardly overcome by a natural cutoff in the multi-interacting synapses: the student is able to learn the architecture of the target rule, that is, the simpler a rule is, the faster the multi-interacting perceptron may learn. Simulation results are in excellent agreement with analytical calculations.

PACS: 87.10+e; 05.90; 02.70

Neural networks are an especially rich research field, both for theoretical reasons and for technological applications. In particular, a feed-forward network consists of an output unit S_0 and an input layer made of N binary units whose states may be represented by S = (S_1, S_2, ..., S_N). The connections between S_0 and S may vary considerably, depending on the architecture of the network. For example, in the simple perceptron model there is a direct coupling J_i between S_0 and each input unit S_i, with i = 1, 2, ..., N; in multilayer networks there may be intermediary layers where hidden units connect the preceding layer to the subsequent one. In any case, when the net is fed with some data, represented by the state S of the input layer, the dynamics intrinsic to each net leads the output unit to a state S_0. Hence, each feed-forward network emulates some rule or function f(S) = S_0.
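The mapping f(S) = S_0 described above can be illustrated, for the simple perceptron case, with a minimal sketch. The coupling values, system size, and the use of a sign transfer function are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def perceptron_output(J, S):
    """Simple perceptron rule: S_0 = sign(sum_i J_i * S_i).

    J : array of direct couplings J_i between input and output unit
    S : array of N binary input states S_i in {-1, +1}
    """
    h = np.dot(J, S)            # post-synaptic field acting on S_0
    return 1 if h >= 0 else -1  # binary transfer function (illustrative)

# Illustrative example with N = 5 input units and random couplings
rng = np.random.default_rng(0)
N = 5
J = rng.normal(size=N)            # hypothetical couplings
S = rng.choice([-1, 1], size=N)   # one input state S = (S_1, ..., S_N)
S0 = perceptron_output(J, S)      # the rule f(S) = S_0
```

The network considered in the paper generalizes this picture to continuous outputs with invertible transfer functions and higher-order (multi-interacting) couplings; the sketch above only shows the basic input-to-output architecture.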
Feed-forward networks may be devised to learn a teacher rule from a set of examples, without its explicit formulation by the programmer. This is implemented by using a training set of P examples of a teacher rule g, that is, the set {sigma_0^mu, S^mu} for mu = 1, 2, ..., P.

* Corresponding author. E-mail: rita@if.ufrgs.br.

0378-4371/97/$17.00 Copyright (c) 1997 Elsevier Science B.V. All rights reserved. PII S0378-4371(97)00188-X
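Learning from such a training set can be sketched with the Hebb rule, which the abstract identifies as the special case recovered at second order. The sketch below assumes a binary-output perceptron teacher with random couplings B; the teacher form, sizes, and the normalization of J are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 20, 100

# Hypothetical teacher rule g: a perceptron with couplings B
B = rng.normal(size=N)

# Training set {sigma_0^mu, S^mu}, mu = 1, ..., P:
# P random binary input patterns and the teacher's outputs on them
S = rng.choice([-1, 1], size=(P, N))
sigma0 = np.sign(S @ B)

# Hebb learning rule: J_i proportional to sum_mu sigma_0^mu S_i^mu
J = (sigma0[:, None] * S).sum(axis=0) / N

# Alignment between student and teacher couplings
# (typically positive and growing with P for this setup)
overlap = np.dot(J, B) / (np.linalg.norm(J) * np.linalg.norm(B))
```

The multi-interacting model of the paper extends this to higher-order synapses acting on products of input units; only the second-order, linear case reduces to the plain Hebb accumulation shown here.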