Neural Processing Letters 10: 211–222, 1999.
© 1999 Kluwer Academic Publishers. Printed in the Netherlands.
A New Relaxation Procedure in the Hopfield
Network for Solving Optimization Problems
XINCHUAN ZENG and TONY MARTINEZ
Computer Science Dept., Brigham Young University, 3366 TMCB, Provo, UT 84602, U.S.A.,
e-mail: martinez@cs.byu.edu
Abstract. When solving an optimization problem with a Hopfield network, a solution is obtained
after the network is relaxed to an equilibrium state. The relaxation process is an important step in
achieving a solution. In this paper, a new procedure for the relaxation process is proposed. In the
new procedure, the amplified signal received by a neuron from other neurons is treated as the target
value for its activation (output) value. The activation of a neuron is updated directly based on the
difference between its current activation and the received target value, without updating the input value as an intermediate step. A relaxation rate is applied to control the updating scale
for a smooth relaxation process. The new procedure is evaluated and compared with the original
procedure in the Hopfield network through simulations based on 200 randomly generated instances
of the 10-city traveling salesman problem. The new procedure reduces the error rate by 34.6% and
increases the percentage of valid tours by 194.6% as compared with the original procedure.
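The update rule described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: we assume a sigmoid activation function, a weight matrix W with bias b, and a relaxation rate of 0.1; all variable names are our own.

```python
import numpy as np

def relax_step(a, W, b, rate=0.1):
    """One step of the proposed relaxation (illustrative sketch).

    The amplified signal a neuron receives from the others, passed
    through the activation function, is treated as a target value for
    its activation; the activation then moves a fraction `rate` of the
    way toward that target, with no intermediate input-value update.
    """
    net = W @ a + b                       # signal received from other neurons
    target = 1.0 / (1.0 + np.exp(-net))   # sigmoid maps signal to a target activation
    return a + rate * (target - a)        # move activation toward the target

# Toy usage: 4 neurons with symmetric weights and zero self-connections,
# as in a Hopfield network.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = np.zeros(4)

a = np.full(4, 0.5)   # initial activations
for _ in range(200):  # relax toward an equilibrium state
    a = relax_step(a, W, b)
```

Because each step forms a convex combination of the current activation and a sigmoid target, the activations remain in (0, 1) throughout the relaxation, which is what makes a small rate yield the smooth process the abstract describes.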
Key words: constraint satisfaction, Hopfield network, neural networks, optimization, relaxation
procedure
1. Introduction
Hopfield and Tank [1] proposed an approach that uses a neural network to find a
suboptimal solution to the traveling salesman problem (TSP). In their approach, a
TSP instance is represented by an energy function comprising cost and constraint
terms that reflect the objective of a solution. The objective of the constraint term
is to find a valid tour, which requires that each city must be visited once and only
once. The objective of the cost term is to find the shortest valid tour. The energy
function can be implemented by a neural network. For an N-city TSP,
the network consists of N × N neurons and the links connecting them. The
weights of the links are encoded to represent the cost and the constraints for the
given TSP instance to be solved. To reach a solution, the parameters in the energy
function and the initial values of the neurons need to be properly chosen, and then
the network is relaxed from the initial state. During the relaxation, each neuron
updates its input value based on the information received from other neurons. A
solution is obtained when the network reaches an equilibrium state. They showed
that a neural network configured with symmetric weights always converges to a