MATH 364: DATA MINING, FINAL PROJECT, DECEMBER 2009

Hopfield Networks as Classifiers of Handwriting

Nicholas Hooper and Heather Mattie

Abstract—Hopfield networks are fully connected networks containing multiple neurons, each of which is connected to the others via weights. A key feature of Hopfield networks is that they converge to a local minimum (a.k.a. the "lowest energy state"). Partial or corrupted memories can be restored to their correct state. In addition, inputs that are sufficiently close to a stored memory will converge to that memory. The choice of weights is key to ensuring the highest rates of accuracy. Hebbian learning refined via gradient descent appears to be the best choice for setting the weights while preventing overfitting of the neural network.

Index Terms—Handwriting recognition, Hopfield networks, neural networks, neural network applications

I. INTRODUCTION

Artificial neural networks are mathematical constructs that model certain aspects of biological neurons. Neural networks consist of one or more neurons which can exhibit complex behavior due to the nature of the connections between them. Neural networks can serve to store "memories" of data. These memories mimic biological systems in several ways.

1) They are associative. With associative memories, one can recall a memory through its association with other memories. In other words, if you associate banana with yellow, you can recall banana if you are given yellow, and vice versa.
2) They are robust in handling error. In a neural network, memories can correct for a certain amount of error and still recall the correct memory.
3) Memories are distributed. In a neural network, each neuron plays a part in memory storage. This is in contrast to traditional computing, where a memory is associated with a specific location on the storage device.

Hopfield networks are a type of neural network named after John Joseph Hopfield, an American scientist.
They are fully connected networks containing multiple neurons, each of which is connected to the others via weights. A key feature of Hopfield networks is that they converge to a local minimum (a.k.a. the "lowest energy state"). This feature creates applications in two main areas: associative memories and optimization. With associative learning, the network can be trained to "remember" states, which can then be recalled even if the network is given only part of a state. Within optimization problems, as the network converges to its local minima, the network's objective function can be optimized toward a desired solution.

II. BACKGROUND

A. A Single Neuron

We begin our discussion of Hopfield networks by describing the structure of a single neuron. Our Hopfield network is constructed of many single neurons which are linked.

Fig. 1. A single neuron

A neuron's behavior is determined by three parts: the architecture, the activity rule, and the learning rule. The architecture describes the variables of a neuron and their relationship to one another. Each parameter (x_i) has a corresponding weight (w_i). The activity rule describes the dynamics of a neuron, i.e. how each neuron changes in response to the parameters. It has two steps.

1) First we determine the activation of the neuron:

   a = \sum_i w_i x_i = w^T x    (1)

2) The output y of the neuron is a function of the activation, y = f(a). This is called the activity of the neuron. There are several possible activation functions; the most popular include linear, logistic, hyperbolic tangent, and threshold functions.

The learning rule determines the neuron's weights (w), and its structure will depend on the type of neural network we are running. Often this involves the minimization of an objective, or error, function.

B. Hopfield Networks

In a neural network, the output from one neuron is the input into another. Their relationships fall into two categories: 1) feedforward networks and 2) feedback networks.
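To make the activity rule concrete, the following Python sketch (our own illustration, not code from the project; the function name `neuron_output` is hypothetical) computes the activation a = w^T x and then applies one of the four activation functions named above:

```python
import math

def neuron_output(w, x, activation="tanh"):
    """Compute a neuron's activity y = f(a), where a = sum_i w_i * x_i."""
    a = sum(wi * xi for wi, xi in zip(w, x))  # step 1: activation a = w^T x
    # step 2: pass the activation through the chosen activation function
    if activation == "linear":
        return a
    if activation == "logistic":
        return 1.0 / (1.0 + math.exp(-a))
    if activation == "tanh":
        return math.tanh(a)
    if activation == "threshold":
        return 1 if a >= 0 else -1
    raise ValueError(f"unknown activation: {activation}")

# Example: three inputs and their weights; a = 0.5 - 0.6 - 0.8 = -0.9
w = [0.5, -0.3, 0.8]
x = [1.0, 2.0, -1.0]
print(neuron_output(w, x, "threshold"))  # -> -1, since a < 0
```

The threshold function shown returns the ±1 states used later by the Hopfield network; the logistic and tanh choices give smooth, differentiable outputs instead.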
In a feedforward network, the connections between the neurons do not form a closed cycle, and signals move in only one direction. A Hopfield network is a feedback network, which is any network that is not a feedforward network. Furthermore, a Hopfield network is fully connected: each neuron is connected to every other neuron. A key feature of Hopfield networks is that their dynamics tend to settle to a number of stable states, or "memories."
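The settling behavior described above can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions (Hebbian weights w_ij = (1/N) Σ x_i x_j with zero diagonal, ±1 neurons, asynchronous threshold updates in a fixed sweep order), not the implementation used in this project; all names are hypothetical:

```python
def hebbian_weights(patterns):
    """Hebbian learning: w[i][j] = (1/N) * sum over patterns of x_i * x_j, with w[i][i] = 0."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, sweeps=10):
    """Asynchronous dynamics: set each neuron to the sign of its activation sum_j w_ij * s_j."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            a = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if a >= 0 else -1
    return s

# Store one +/-1 pattern, then recover it from a corrupted copy.
memory = [1, 1, -1, -1, 1, -1, 1, -1]
w = hebbian_weights([memory])
corrupted = list(memory)
corrupted[0] = -corrupted[0]        # flip one bit
print(recall(w, corrupted) == memory)  # -> True: the network settles back to the stored memory
```

Because the weights are symmetric with a zero diagonal, each asynchronous update can only keep the network's energy the same or lower it, which is why the state settles into a stable memory rather than oscillating.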