IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 9, NO. 6, NOVEMBER 1998

Neural Techniques for Combinatorial Optimization with Applications

Kate Smith, Member, IEEE, Marimuthu Palaniswami, Senior Member, IEEE, and Mohan Krishnamoorthy

Abstract—After more than a decade of research, there now exist several neural-network techniques for solving NP-hard combinatorial optimization problems. Hopfield networks and self-organizing maps are the two main categories into which most of the approaches can be divided. Criticism of these approaches includes the tendency of the Hopfield network to produce infeasible solutions, and the lack of generalizability of the self-organizing approaches (being applicable only to Euclidean problems). This paper proposes two new techniques which overcome these pitfalls: a Hopfield network which ensures the feasibility of solutions and improves solution quality through escape from local minima, and a self-organizing neural network which generalizes to solve a broad class of combinatorial optimization problems. Two practical optimization problems from Australian industry are then used to test the performance of the neural techniques against more traditional heuristic solutions.

Index Terms—Assembly line, combinatorial optimization, Hopfield networks, hub location, NP-hard, self-organization, sequencing, traveling salesman problem.

I. INTRODUCTION

THE idea of using neural networks to provide solutions to difficult NP-complete optimization problems has been pursued for over a decade. Hopfield and Tank's seminal paper [18] in 1985 demonstrated that the traveling salesman problem (TSP) could be solved using a Hopfield neural network. Yet the technique, which requires minimization of an energy function containing several terms and parameters, was shown to often yield infeasible solutions to the TSP [38].
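To make concrete why the energy function is delicate to tune, consider the Hopfield–Tank encoding of an n-city tour as an n-by-n matrix V, where V[x, i] represents city x occupying tour position i. The energy mixes three constraint-penalty terms with a tour-length term, each weighted by a free parameter. The following is a minimal sketch of such an energy; the function name and the penalty weights A, B, C, D are illustrative choices, not values taken from this paper:

```python
import numpy as np

def tsp_energy(V, dist, A=500.0, B=500.0, C=200.0, D=1.0):
    """Hopfield-Tank style energy for the TSP (illustrative weights).
    V[x, i] ~ degree to which city x occupies tour position i.
    The A/B/C terms penalize constraint violations and vanish on a
    valid permutation matrix; the D term then measures tour length."""
    n = V.shape[0]
    # A-term: each city should occupy at most one position (row constraint)
    row = sum(V[x, i] * V[x, j]
              for x in range(n) for i in range(n) for j in range(n) if i != j)
    # B-term: each position should hold at most one city (column constraint)
    col = sum(V[x, i] * V[y, i]
              for i in range(n) for x in range(n) for y in range(n) if x != y)
    # C-term: total activation should equal the number of cities
    tot = (V.sum() - n) ** 2
    # D-term: length of the encoded tour (positions wrap around the cycle)
    length = sum(dist[x, y] * V[x, i] * (V[y, (i + 1) % n] + V[y, (i - 1) % n])
                 for x in range(n) for y in range(n) for i in range(n))
    return 0.5 * (A * row + B * col + C * tot + D * length)
```

The difficulty the early literature wrestled with is visible here: the network minimizes the weighted sum, so if A, B, C are too small relative to D the minimum may violate the permutation constraints (an infeasible tour), while if they are too large the tour-length term barely influences the solution.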
For the remainder of the decade, researchers tried either to modify the energy function [3], [37] or to optimally tune the numerous parameters involved [19], [23] so that the network would converge to a feasible TSP solution. Subsequent efforts to confine the Hopfield network to the feasible constraint plane have resulted in a method which can now ensure the final solution is feasible [6], [13]. Despite this success, however, the reputation of the Hopfield network for solving combinatorial optimization problems does not appear to have been resurrected. Recent results have shown that, unless the TSP is Euclidean, the quality of the solutions found using a Hopfield network is unlikely to be comparable to those obtained using traditional techniques [14]. So while the feasibility issue of Hopfield networks has been essentially eliminated, the question of solution quality still raises some doubts as to the suitability of the technique.

Of concern here is the possibility that Hopfield networks are not being used to solve practical optimization problems which have arisen from industrial situations, simply because the literature appears to be focused on the deficiencies of the technique for solving the TSP. In recent work [33] we have argued that the TSP may not be an appropriate benchmark problem anyway, due to the existence of an alternative linear formulation which makes comparisons unfair and biases the findings against neural and other techniques using a nonlinear formulation.

Manuscript received January 6, 1996; revised June 10, 1998.
K. Smith is with the School of Business Systems, Monash University, Clayton, Victoria 3168, Australia.
M. Palaniswami is with the Department of Electrical and Electronic Engineering, University of Melbourne, Parkville, Victoria 3052, Australia.
M. Krishnamoorthy is with the CSIRO, Division of Mathematics and Statistics, Clayton, Victoria 3168, Australia.
Publisher Item Identifier S 1045-9227(98)07352-4.
We do not advocate the application of a technique which is known to yield inferior solutions. We are, however, observing that the performance of neural networks for solving practical optimization problems has been relatively untested. For many practical NP-complete problems, heuristic approaches are employed due to the need for rapid solutions. Obtaining the globally optimal solution is not as imperative as arriving at a near-optimal solution quickly. Certainly, one of the principal advantages of neural techniques is the rapid computation and speed which can be obtained through hardware implementation, and this consideration is even more valuable in industrial situations. The relative scarcity of literature comparing the performances of neural techniques to more traditional methods for practical optimization problems suggests that this advantage is not being realized.

A similar focus on the TSP is found in the literature relating to the use of self-organizing approaches to optimization [2], [10], [12]. In this case, the reason is not simply the benchmark status of the TSP, but rather that the vast majority of these approaches are based upon the elastic net method [8]. Kohonen's principles of self-organization [21] are combined with the concept of an "elastic band" containing a circular ring of neurons which move in the Euclidean plane of the TSP cities, so that the "elastic band" eventually passes through all of the cities and represents the final TSP tour. Such approaches rely upon the fact that the "elastic band" can move in Euclidean space, and that physical distances between the neurons and the cities can be measured in the same space. Any self-organizing approach which uses the elastic net method as its basis will thus be greatly limited in its generalizability. Recently, we have proposed a new self-organizing approach to combinatorial optimization which generalizes to solve a broad class of "0–1" optimization problems [32].
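The Euclidean dependence of the elastic net method can be seen directly in its update rule: each city attracts the nearby ring neurons through a Gaussian weighting, while a tension term keeps the ring short and smooth, and both forces are vector operations in the plane of the cities. Below is a minimal sketch of one such update step in the style of the Durbin–Willshaw elastic net; the function name and the parameter values (alpha, beta, the scale K) are illustrative assumptions, not taken from this paper:

```python
import numpy as np

def elastic_net_step(cities, ring, K, alpha=0.2, beta=2.0):
    """One elastic-net update for the TSP (a sketch; parameters illustrative).
    cities: (n, 2) city coordinates; ring: (m, 2) ring-neuron positions.
    Each city pulls its nearest ring neurons toward it; the tension term
    (a discrete Laplacian around the closed ring) keeps the band smooth.
    Returns the updated ring positions."""
    # Gaussian attraction weights between every city and ring point,
    # normalized so each city distributes one unit of "pull"
    d2 = ((cities[:, None, :] - ring[None, :, :]) ** 2).sum(-1)   # (n, m)
    w = np.exp(-d2 / (2 * K ** 2))
    w /= w.sum(axis=1, keepdims=True)
    # net attraction force exerted by the cities on each ring point
    pull = (w[:, :, None] * (cities[:, None, :] - ring[None, :, :])).sum(0)
    # elastic tension along the circular ring (wraps via np.roll)
    tension = np.roll(ring, -1, axis=0) - 2 * ring + np.roll(ring, 1, axis=0)
    return ring + alpha * pull + beta * K * tension
```

In practice the step is iterated while the scale K is annealed toward zero, so the band first captures the global shape of the city set and then snaps onto individual cities. Every quantity above is a distance or displacement in the Euclidean plane, which is precisely why the method does not transfer to non-Euclidean "0–1" problems.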
This self-organizing neural network (SONN) is combinatorial in nature, operating within feasible permutation matrices rather than Euclidean space.