Substance Use & Misuse, 33(2), 495–501, 1998
Copyright © Semeion 1998
Self-Recurrent Neural Network
Massimo Buscema, Dr.
Semeion Research Center of Sciences of Communication, Viale di Val Fiorita 88,
Rome, Italy
Feed Forward Artificial Neural Networks of the Back Propagation family
have both a weakness and a strength in their makeup: their layer of hidden
units encodes input vectors in a manner that tends to be distributed.
This type of encoding is a strong point of these ANNs, since it is
computationally very efficient; it is also plausible as a memorisation
system from a neuro-biological viewpoint.
But precisely because of its power, this type of input-vector encoding is
practically uncontrollable: there are many ways in which the hidden units
can encode the input vectors. Which of them is the most efficient, given
the relationships that each input variable has with every other input
variable?
The ideal answer to this question is to let the hidden layer also encode
its own codification of the input vector:
(1) $y = f(x)$ — generic ANN function with input vector $x$ and
output vector $y$
(2) $y = f(g(x))$ — $g(x)$ is the output vector of the ANN's
hidden layer
(3) $x' = x \cup g(x)$ — $x$ is the ANN's input vector and $x'$ is the
new input vector (extended input)
(4) $y = f(g(x \cup g(x)))$ — Final Recurrent Equation
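To make the composition in (1)–(4) concrete, the following is a minimal
NumPy sketch of one self-recurrent forward pass. The layer sizes, random
weights, sigmoid activations, and the zero-padding of the recurrent slots
on the first pass are all illustrative assumptions, not details taken
from the paper.

import numpy as np

# Minimal sketch of the self-recurrent forward pass in equations (1)-(4).
# All sizes, weights, and activation choices below are assumptions.

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 3, 2      # assumed layer sizes

# g: extended input -> hidden layer. Because the extended input
# x' = x U g(x) contains both x and the hidden output, the hidden
# weight matrix spans n_in + n_hid columns.
W_hid = rng.normal(size=(n_hid, n_in + n_hid))
# f: hidden layer -> output layer.
W_out = rng.normal(size=(n_out, n_hid))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def g(x_ext):
    """Hidden-layer encoding g(.) of the extended input vector."""
    return sigmoid(W_hid @ x_ext)

def f(h):
    """Output-layer mapping f(.) over the hidden activations."""
    return sigmoid(W_out @ h)

def self_recurrent_forward(x):
    # Equation (2): hidden encoding of the plain input; the recurrent
    # slots are zero-padded on this first pass (an assumption).
    h = g(np.concatenate([x, np.zeros(n_hid)]))
    # Equation (3): extended input x' = x U g(x).
    x_ext = np.concatenate([x, h])
    # Equation (4): y = f(g(x U g(x))).
    return f(g(x_ext))

x = rng.normal(size=n_in)
print(self_recurrent_forward(x))

Note that g(.) is applied twice with the same weights: once to produce the
hidden encoding that is appended to the input, and once over the resulting
extended vector; zero-padding the recurrent slots on the first application
is one simple way to reconcile the two input dimensions.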
Equation (4) makes provision for the ANN’s layer of hidden units to