IMPLEMENTATION OF AN ARTIFICIAL NEURAL NETWORK EMPLOYING FIELD PROGRAMMABLE GATE ARRAY TECHNOLOGY

Yousry El-Gamal, Magdy Saeb, Nadine El-Mekky
Arab Academy for Science, Technology & Maritime Transport
School of Engineering, Computer Department, Alexandria, Egypt

Abstract: A hardware artificial neural network implementation employing field programmable gate array (FPGA) technology is presented. We propose a recurrent weight-updating algorithm with a variable stability-controlling factor to accelerate on-chip learning. The digital circuit includes a status register that holds the address of the neuron that has fired; thus, there is no need to retrain the circuit for a given problem. Although full parallelism in the adder and multiplier circuits is not achieved, a response time on the order of a few hundred nanoseconds was recorded. We use a one-layer module, with input multiplexers, in place of a multi-layer implementation. This leads to a considerable saving of precious chip area. The simplicity of the resulting circuit, together with its inherently reliable operation, makes it a worthy candidate for larger massively parallel architectures.

Keywords: Artificial Neural Networks (ANN), Multi-layer ANN (MNN), Recurrent ANN, FPGA, Parallel architecture.

1. INTRODUCTION

Recent advances in artificial neural network (ANN) systems have led to several applications such as image recognition, speech recognition, and pattern classification [3]. Although most practical applications of ANNs are carried out using software simulators, many other potential applications require large, high-speed networks implemented in efficient custom hardware, which can fully exploit the inherent parallelism of neural network dynamics. Many designers and researchers are developing VLSI implementations using various techniques, ranging from digital to analog and even optical [1].
The primary disadvantages of analog implementations are the inaccuracy of analog computation and low design flexibility, even though they can provide higher speed at low hardware cost [2]. Digital ANN implementations, on the other hand, can take advantage of state-of-the-art VLSI design techniques. The classical ANN implementation, such as a multi-layer neural network, generally requires the following functions [1]:

1. Weight storage;
2. Synaptic multiplication;
3. Summation of synaptic contributions;
4. Nonlinear activation function;
5. Transmission of input and output activities among neurons;
6. Network status storage.

Two major problems are encountered in implementing an ANN with a digital architecture: the multipliers and the nonlinear neuron characteristics, both of which require large circuits [3]. The most significant feature of neural networks is their learning ability, and size and real-time considerations show that on-chip learning is necessary for a large range of applications.

In this work we propose a hardware artificial neural network implementation employing field programmable gate array technology. We employ a recurrent weight-updating algorithm with a variable stability-controlling factor to accelerate on-chip learning [5]. The digital circuit includes a status register that holds the address of the neuron that has fired; thus, there is no need to retrain the circuit for a given problem. Although full parallelism in the adder and multiplier circuits has not been achieved, a response time on the order of hundreds of nanoseconds was recorded [2]. We use a one-layer module, with input multiplexers, in place of a multi-layer hardware implementation, which saves precious chip area. The simplicity of the resulting circuit makes it a worthy candidate for future larger massively parallel architectures. In the next section, we discuss the operation of ANNs.
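The functional decomposition and the multiplexed one-layer reuse described above can be illustrated with a short software sketch. The Q8 fixed-point word format, the hard-threshold activation, the example weights, and all function names below are illustrative assumptions for exposition only; they do not reproduce the actual FPGA circuit of this paper:

```python
# Hypothetical fixed-point sketch of the neuron functions enumerated above and
# of the multiplexed one-layer reuse.  Word width, Q-format, hard-threshold
# activation, and all names are assumptions, not the paper's actual circuit.

FRAC_BITS = 8  # assumed Q8 signed fixed-point format


def to_fixed(x):
    """1. Weight storage format: quantize a real value to fixed point."""
    return int(round(x * (1 << FRAC_BITS)))


def neuron(inputs, weights, threshold=0):
    """2-4. Synaptic multiply, summation, nonlinear (threshold) activation."""
    acc = sum(w * x for w, x in zip(weights, inputs))  # multiply-accumulate
    acc >>= FRAC_BITS                                  # rescale the Q8 product
    return 1 if acc >= threshold else 0                # hard-limit activation


def layer(inputs, weight_rows):
    """5. Transmit one input vector to every neuron of the one-layer module."""
    return [neuron(inputs, row) for row in weight_rows]


# Weight memories for two logical layers that share one physical layer module.
w_hidden = [[to_fixed(0.5), to_fixed(-0.25)],
            [to_fixed(-1.0), to_fixed(0.75)]]
w_out = [[to_fixed(1.0), to_fixed(1.0)]]

x = [to_fixed(1.0), to_fixed(1.0)]

# First pass: the input multiplexer selects the external inputs.
hidden = layer(x, w_hidden)
# Second pass: the multiplexer feeds the outputs back into the same module,
# emulating a second layer without duplicating the hardware.
out = layer([to_fixed(y) for y in hidden], w_out)

# 6. Network status storage: register the address of each neuron that fired.
status_register = [i for i, y in enumerate(out) if y == 1]
```

The second pass reuses `layer` unchanged, which mirrors how an input multiplexer lets a single physical layer stand in for a multi-layer network at the cost of extra clock cycles rather than extra chip area.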
In Section 3 the architecture of the proposed ANN is described. Section 4 demonstrates the learning phases while training our ANN, and Section 5 summarizes the results obtained. Finally, we give an overall summary and our conclusions. An appendix presents part of the simulation results obtained.