A Pipeline Hardware Implementation for an Artificial Neural Network

Denis F. Wolf, Gedson Faria, Roseli A. F. Romero, Eduardo Marques, Marco A. Teixeira, Alexandre A. L. Ribeiro, Leandro C. Fernandes, Jean M. Scatena, Rovilson Mezencio

Instituto de Ciências Matemáticas e de Computação - Universidade de São Paulo
Av. Trabalhador São-carlense, 400, 13560-970 - São Carlos - SP - Brasil
{denis, gedson, rafrance, emarques}@icmc.sc.usp.br

Abstract: Artificial Neural Networks are computational devices, inspired by the human brain, for solving problems. They are currently being applied in several areas, such as robotics, image processing, and pattern recognition. The Multilayer Perceptron is one of the most widely used neural network models because of its simple learning algorithm; its convergence, however, is very slow. To take advantage of the massive parallelism inherent in this model, a parallel hardware implementation should be performed, and several such implementations exist for this particular model. This paper presents a reconfigurable parallel hardware implementation of Multilayer Perceptrons that uses pipelines. Tests showed that the use of pipelines speeded up the execution time of the parallel hardware implementation.

1. Introduction

Most general-purpose computers are based on the von Neumann architecture, which is sequential in nature; artificial neural networks, on the other hand, profit from massively parallel processing [Schonauer et al. 1998]. The Multilayer Perceptron (MLP) model has been widely and successfully applied to difficult and diverse problems by training it in a supervised manner with the highly popular error back-propagation algorithm [Haykin 1999]. An interesting way to exploit this parallelism is through hardware implementations.
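To fix ideas, the MLP/back-propagation setting the paper targets can be sketched in software. The following minimal Python example is our own illustration, not the paper's hardware design: the network size (2-3-1), learning rate, seed, and all names are arbitrary choices, and the task (XOR) is just a classic MLP toy problem.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
# A 2-3-1 MLP: 2 inputs, 3 hidden units, 1 output (sizes chosen arbitrarily)
W1 = rng.normal(0.0, 1.0, (3, 2)); b1 = np.zeros(3)
W2 = rng.normal(0.0, 1.0, (1, 3)); b2 = np.zeros(1)

# XOR training set
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
lr = 0.5  # learning rate (illustrative value)

def forward(x):
    h = sigmoid(W1 @ x + b1)   # hidden-layer activations
    y = sigmoid(W2 @ h + b2)   # output-layer activation
    return h, y

def total_error():
    return sum(float(np.sum((forward(x)[1] - t) ** 2)) for x, t in zip(X, T))

err_before = total_error()
for epoch in range(2000):
    for x, t in zip(X, T):
        h, y = forward(x)
        # back-propagate: layer deltas, using sigmoid'(net) = out * (1 - out)
        dy = (y - t) * y * (1.0 - y)
        dh = (W2.T @ dy) * h * (1.0 - h)
        # gradient-descent weight updates
        W2 -= lr * np.outer(dy, h); b2 -= lr * dy
        W1 -= lr * np.outer(dh, x); b1 -= lr * dh
err_after = total_error()
```

Every pattern requires many multiply-accumulate and sigmoid evaluations per layer, all independent within a layer; it is exactly this structure that the parallel and pipelined hardware implementations discussed in this paper exploit.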
The good performance and naturally parallel operation of hardware devices make them an attractive option for implementing neural network algorithms. Today, reconfigurable computing has become a very interesting technique for designing and prototyping hardware, since it joins the performance of hardware with the flexibility of software [Gonçalves et al. 2000]. The hardware implementation of neural networks has already been discussed in many articles, such as [Pérez-Uribe and Sanchez 1996], [Demian et al. 1996], and [Molz et al. 2000]. Principles and perspectives of digital neurohardware, along with some implementation examples, are discussed by Schonauer et al. [Schonauer et al. 1998]. Suggestions for adapting neural networks to hardware implementation are presented by Moerland and Fiesler [Moerland and Fiesler 1997]. Moreno et al. [Moreno et al. 1999a][Moreno et al. 1999b] proposed a reconfigurable device that can perform most of the arithmetic operations necessary to implement neural networks in hardware.