Towards the Implementation of a Parallel Hardware Architecture for Spiking Neural Networks

Marco Nuño-Maganda, Miguel Arias-Estrada, Cesar Torres-Huitzil
National Institute for Astrophysics, Optics and Electronics (INAOE)
Luis Enrique Erro No. 1, Sta. María Tonantzintla, Puebla, C. P. 72000
nmaganda@inaoep.mx, ariasm@inaoep.mx, ctorres@inaoep.mx

Abstract

Artificial Neural Networks are processing models widely explored due to their inherent parallelism. Recently, Spiking Neural Networks (SNNs) have attracted attention. These models reduce the bandwidth needed to exchange information among processing elements, because their communication scheme is based on digital spikes. Many hardware architectures for artificial neural networks have been proposed as an alternative to implementations on personal computers. In this work, the efficient hardware implementation of SNNs is addressed. A research overview is presented, along with some preliminary results.

1. Introduction

Artificial Neural Networks (ANNs) are parallel computational models comprised of densely interconnected, simple, adaptive processing units, characterized by an inherent propensity for storing experiential knowledge [1]. ANNs are used in a number of applications in which the networks are usually implemented as software programs on ordinary digital computers. However, software implementations cannot exploit the essential parallelism found in biological neural networks. Spiking neurons differ from traditional connectionist models in that information is transmitted by means of pulses (or spikes) rather than by average firing rates, allowing spiking neurons to have richer dynamics and to exploit the temporal domain to encode and retrieve information in the exchanged spikes.

2. Motivation

Designing hardware architectures for simulating artificial neural networks is a great challenge because of the computational complexity, area-greedy non-linear operators, and highly dense interconnections exhibited by these models. The biologically inspired parallelism of ANNs is lost when they are implemented on modern digital computers as software programs, due to the sequential processing scheme. Exploring new alternatives among neural network models is an active research area: new neuronal models have recently been proposed, specifically Spiking Neural Models (SNMs), which represent an alternative to classical models. In [3], the computational power of SNMs was tested and shown to be comparable with that of classical models, while requiring fewer hardware resources. SNMs can perform the same processing with fewer processing elements, which is a great advantage when implementing SNNs in hardware. It is feasible to exploit all the desired parallelism by implementing hardware architectures based on spiking neuron models, thereby improving the performance of many SNN applications. To fully exploit the potential of SNNs, it is necessary to develop efficient hardware implementation techniques. In the hardware domain, the challenge is to follow the biological and mathematical trends. High-performance hardware architectures are an interesting research area, because the high volume of data demands high computational capability from neural networks. In [4], a comparison of neural network implementations on different hardware platforms was made, and only one of the tested platforms proved to have enough computational power to simulate SNNs. Thus, further investigation is required to find ways to map SNNs efficiently onto parallel architectures.
To build high-density systems under speed and power-consumption constraints, it is necessary to investigate architectural innovations and signal representations that efficiently exploit the abilities of SNNs in real-world applications where efficient hardware solutions are needed.
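To make the spike-based communication scheme concrete, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking neuron models; it is not the specific model used in this work, and all parameter values (threshold, time constant, input current) are illustrative assumptions:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameter values are illustrative, not taken from the paper.
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0,
                 v_reset=0.0, tau=10.0, dt=1.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward
        # v_rest while accumulating the input current.
        v += (dt / tau) * (v_rest - v) + i_in
        if v >= v_thresh:      # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset        # reset after firing
    return spikes

# A constant input drives periodic firing; the output is a sparse
# train of spike times rather than an average firing-rate value,
# which is what keeps inter-neuron bandwidth low in SNN hardware.
print(simulate_lif([0.15] * 50))
```

The key point for hardware is visible in the output: each neuron emits only a short list of discrete spike events, so interconnects carry single-bit pulses instead of multi-bit activation values.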