IJCSNS International Journal of Computer Science and Network Security, VOL.9 No.6, June 2009

Manuscript received June 5, 2009; manuscript revised June 20, 2009

Hybrid Hopfield Neural Network, Discrete Wavelet Transform and Huffman Coding for Image Recognition

Kussay Nugamesh Mutter†, Zubir Mat Jafri††, Azlan Bin Abdul Aziz†††, School of Physics, Universiti Sains Malaysia, Malaysia

Summary
This work presents a new solution to the obstacle of applying the Hopfield Neural Network (HNN) to images of higher bit depth than binary. Because the HNN operates directly on bipolar input data, it cannot be applied as-is to gray-level or color images. An 8-bit gray-level image, however, can be decomposed into eight bitplanes, each of which can be represented as bipolar data and treated as a single binary image. In this way the HNN can operate on gray-level images with good results. However, storing the resulting data requires a large amount of storage. Therefore, the Discrete Wavelet Transform (DWT) and Huffman coding are hybridized with the HNN to reduce this large amount of data. This is achieved by converting the eight bipolar weight states of a minimum-size 3-pixel vector into a decimal representation suitable for DWT and Huffman coding. During convergence, the compressed weights are restored and converted back into bipolar form. Experimental results show that the HNN performs recognition of gray and color images very well. The system was tested on a large number of different gray-level image samples.

Key words: Hopfield Neural Network, Discrete Wavelet Transform, Huffman Coding

1. Introduction
Inspired by the structure of the human brain, artificial neural networks have been widely applied to fields such as pattern recognition, optimization, coding, and control, because of their ability to solve cumbersome or intractable problems by learning directly from data [1,2,3].
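The bitplane decomposition described in the summary can be sketched as follows. This is a minimal illustration (not the authors' code) assuming an 8-bit gray-level image stored as a NumPy array; each of the eight extracted planes is mapped from {0, 1} to bipolar {-1, +1} so it can be presented to a Hopfield network as a binary image:

```python
import numpy as np

def to_bipolar_bitplanes(image):
    """Decompose an 8-bit gray-level image into 8 bipolar bitplanes.

    Each returned plane holds +1/-1 values (bipolar form), so every
    plane can be treated as a single binary image for the HNN.
    """
    image = np.asarray(image, dtype=np.uint8)
    planes = []
    for bit in range(8):                               # bit 0 = LSB, bit 7 = MSB
        plane = (image >> bit) & 1                     # extract one 0/1 bitplane
        planes.append(2 * plane.astype(np.int8) - 1)   # map {0,1} -> {-1,+1}
    return planes

def from_bipolar_bitplanes(planes):
    """Reassemble the original 8-bit image from its bipolar bitplanes."""
    image = np.zeros(planes[0].shape, dtype=np.uint8)
    for bit, plane in enumerate(planes):
        image |= (((plane + 1) // 2).astype(np.uint8)) << bit
    return image
```

The decomposition is lossless: reassembling the eight planes recovers the original image exactly, which is what allows the per-plane HNN results to be recombined into a gray-level output.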
One of the important types of neural network is the Hopfield network, an iterative auto-associative network consisting of a single layer of fully connected processing elements. An expanded form of a common representation of the HNN is shown in figure (1). All the processing elements (Y1, Y2, …, Yn: neurons or nodes) are connected in a feedback architecture, with the connection weights specified in a certain way [3]. Given its weights and limiting values, the network reaches dynamic equilibrium when it settles into a stored pattern. A network can store several patterns and can retrieve them from different start vectors during iteration. By analogy with the spin-glass theory of solid-state physics, such equilibrium states in Hopfield networks are characterized by the total energy (Hamiltonian) reaching a minimum. This leads to a Lyapunov function, or energy function, equation (1), which attains its minimum exactly when a stored pattern is recalled. Taking a minimum pattern (vector) size of 3 pixels (elements), rather than the whole image, leaves only eight possible states for producing the learning weights, table (1). Therefore, a multi-path network architecture has to be used here, as shown in figure (2) [4].

E = -\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij} v_i v_j    (1)

where E is the energy function, w_{ij} is the weight from output neuron i to input neuron j, and v is the input vector. In table (1), the vector property is the sign of the sum of each vector state, used here to create the correct pattern in the HNN [4,5].

Fig. 1. Hopfield Neural Network Architecture (Wij: weights; Y1, Y2, …, Yn: neurons)
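The energy function of equation (1) and the eight bipolar states of a 3-pixel vector can be illustrated with a short sketch. This uses the standard Hebbian storage rule for Hopfield weights (a generic textbook formulation, assumed here; the paper's multi-path architecture is not reproduced):

```python
import itertools
import numpy as np

def hebbian_weights(patterns):
    """Hopfield weight matrix from bipolar patterns via the Hebbian rule,
    with zero self-connections (w_ii = 0)."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)          # accumulate outer product p p^T
    np.fill_diagonal(W, 0.0)         # no self-feedback
    return W

def energy(W, v):
    """Energy of equation (1): E = -1/2 * sum_ij w_ij * v_i * v_j."""
    v = np.asarray(v, dtype=float)
    return -0.5 * v @ W @ v

# All eight possible bipolar states of a 3-pixel vector (cf. table (1)).
states = list(itertools.product([-1, 1], repeat=3))

# Store one pattern: its energy is the minimum over the eight states,
# so iterating the network from any start vector descends toward it.
pattern = (1, -1, 1)
W = hebbian_weights([pattern])
energies = {s: energy(W, s) for s in states}
```

Evaluating the energy over all eight states shows the stored pattern (and, by the symmetry of equation (1), its negation) sitting at the energy minimum, which is the recall mechanism the text describes.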