J. Phys. A: Math. Gen. 19 (1986) L617-L620. Printed in Great Britain

LETTER TO THE EDITOR

A memory which forgets

Giorgio Parisi
Dipartimento di Fisica, II Università di Roma 'Tor Vergata', Via Orazio Raimondo, Roma 00173, Italy and INFN, Sezione di Roma, Italy

Received 7 March 1986

Abstract. The model of Hopfield for a neural network with associative memory is modified by the introduction of a maximum value for the synaptic strength; in this way old patterns are automatically forgotten and the memory recalls only the most recent ones. If the parameters are correctly chosen, the memory never goes into the state of total confusion characteristic of the Hopfield model.

In recent years the mechanism by which neural networks behave as associative memories (more precisely, as content-addressable memories) has been extensively studied. Considerable progress in this field was made by the introduction of very stylised models which are far from realistic; the advantage of these models is that they allow simple computer simulations and analytic studies, and in this way we hope to clarify the basic issues of the theory of networks with associative memory.

A very interesting model has been proposed by Hopfield [1]: each neuron may stay in one of two states, firing or quiescent (the ith neuron is represented by a spin variable σ_i which may take the values ±1); the synaptic strength is assumed to be symmetric, i.e. the influence J_ik of the ith neuron on the kth neuron is the same when i and k are exchanged (J_ik = J_ki); and the input patterns are stored using the generalised Hebb rule for modifying the synaptic strengths. An 'energy' function E[σ] can be associated with each configuration {σ} of the network, and the time evolution of the neural network is such that the asymptotic stable states at large times are the minima of E[σ] with respect to {σ}. For simplicity, let us say that the network remembers a given input pattern {σ} if the asymptotic state is {σ}, or very near to {σ}, when the initial state of the network is equal to {σ} (different and more restrictive definitions can be used); in other words, E[σ] must have a minimum near each of the input patterns which are remembered.

The Hopfield model is also very interesting because it has many points in common with spin glasses, for which a very sophisticated and rich theory has recently been constructed [2].

Under the strong assumption that the input patterns are uncorrelated, both numerical simulations [1, 3] and analytic computations [4] show that the storage capacity of such a network is proportional to N. If the number M of input patterns becomes larger than a critical value M_c (M_c ≈ 0.14N), the network goes into a state of total confusion and a negligible number of patterns is remembered; in contrast, if M is smaller than M_c, practically all input patterns are remembered.
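To make the construction above concrete, the following minimal Python sketch (not part of the letter) stores random uncorrelated patterns with the generalised Hebb rule, relaxes the network under zero-temperature asynchronous dynamics towards a minimum of E[σ], and checks retrieval from a corrupted initial state; the last function sketches the clipping modification announced in the abstract. The network size, flip fraction, update schedule and clipping constant A are illustrative assumptions, not values taken from the letter.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200   # number of neurons (spins sigma_i = +/-1); illustrative choice
M = 10    # number of stored patterns, well below M_c ~ 0.14 N

# Random uncorrelated input patterns, matching the letter's assumption.
patterns = rng.choice([-1, 1], size=(M, N))

# Generalised Hebb rule: J_ik = (1/N) sum_mu xi_i^mu xi_k^mu.
# J is symmetric by construction; self-couplings are set to zero.
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

def energy(sigma, J):
    """'Energy' E[sigma] = -(1/2) sum_{i,k} J_ik sigma_i sigma_k."""
    return -0.5 * sigma @ J @ sigma

def relax(sigma, J, max_sweeps=50):
    """Zero-temperature asynchronous dynamics: each neuron aligns with its
    local field, so E never increases; stops at a minimum of E."""
    sigma = sigma.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(sigma)):
            new = 1 if J[i] @ sigma >= 0 else -1
            if new != sigma[i]:
                sigma[i] = new
                changed = True
        if not changed:          # stable state reached: a minimum of E
            break
    return sigma

# Retrieval test: start near pattern 0 (10% of spins flipped) and check that
# the dynamics falls back into the nearby minimum of E.
xi = patterns[0]
probe = xi.copy()
probe[rng.choice(N, size=N // 10, replace=False)] *= -1
recalled = relax(probe, J)
print("overlap with stored pattern:", recalled @ xi / N)   # close to +1

# Sketch of the modification announced in the abstract (details assumed, not
# taken from the letter): patterns are learned one at a time and every synapse
# is clipped to a maximum strength A, so old patterns are gradually erased.
def learn_clipped(patterns, A=0.35):
    J = np.zeros((N, N))
    for xi in patterns:          # presented in temporal order, oldest first
        J += np.outer(xi, xi) / N
        np.clip(J, -A, A, out=J)
    np.fill_diagonal(J, 0.0)
    return J
```

For M well below 0.14N the printed overlap should be close to +1, while pushing M past the critical value reproduces the 'total confusion' regime described above; with the clipped rule, retrieval should instead degrade gracefully, with only the oldest patterns being lost.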