Self-sustained Activity in Attractor Networks using Neuromorphic VLSI

Patrick Camilleri, Student Member, IEEE, Massimiliano Giulioni, Maurizio Mattia, Jochen Braun, and Paolo Del Giudice

Abstract— We describe and demonstrate the implementation of attractor neural network dynamics in analog VLSI chips [1]. The on-chip network is composed of an excitatory and an inhibitory population of recurrently connected linear integrate-and-fire neurons. Besides the recurrent input, these two populations receive external input in the form of spike trains from an Address-Event-Representation (AER) based system. External AER input stimulates the attractor network and also provides an adequate background activity for the on-chip populations. We use the mean-field approximation of a model attractor neural network to identify regions of parameter space allowing for attractor states, matching hardware constraints. Consistency between theoretical predictions and the observed collective behaviour of the network on chip is checked using the 'effective transfer function' (ETF) [2]. We demonstrate that the silicon network can support two equilibrium states of sustained firing activity that are attractors of the dynamics, and that external stimulation can provoke a transition from the lower to the higher state.

I. INTRODUCTION

Neuromorphic chips, purporting to emulate the principles of information processing in the nervous system, have largely been devoted to duplicating in silicon the operation of sensory systems (such as the retina [3] or cochlea [4]), and sometimes to implementing simple, general-purpose computational elements supposedly at work in a variety of neural circuits (such as winner-take-all networks [5][6]). In many instances, the chosen network architecture is either essentially feedforward [7], or it includes simple feedback mechanisms, as in winner-take-all or Central Pattern Generator (CPG) networks [8].
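The building block described in the abstract is a linear (constant-leak) integrate-and-fire neuron: the membrane potential decays at a constant rate, is incremented by incoming spikes, is reflected at zero, and emits a spike on crossing a threshold. The following is a minimal sketch of such a unit; all parameter values and the function name are illustrative assumptions, not the chip's actual settings.

```python
import numpy as np

def simulate_linear_if(spike_times_in, w=0.5, leak=10.0, theta=1.0,
                       v_reset=0.0, dt=1e-4, t_end=1.0):
    """Constant-leak ('linear') integrate-and-fire neuron.

    Between input spikes, dV/dt = -leak (a constant drift, not the
    exponential decay of the leaky IF model), with a reflecting
    barrier at V = 0. Each input spike adds w to V; when V crosses
    theta, an output spike is emitted and V is reset.
    Parameters are illustrative, not the VLSI neuron's values.
    """
    n_steps = int(t_end / dt)
    # Bin the input spike train onto the simulation grid.
    counts = np.histogram(spike_times_in, bins=n_steps,
                          range=(0.0, t_end))[0]
    v = 0.0
    out_spikes = []
    for i in range(n_steps):
        v += -leak * dt + w * counts[i]
        v = max(v, 0.0)            # reflecting barrier at zero
        if v >= theta:
            out_spikes.append(i * dt)
            v = v_reset
    return np.array(out_spikes)

# Drive the neuron with a ~100 Hz Poisson input train:
rng = np.random.default_rng(0)
t_in = np.cumsum(rng.exponential(1.0 / 100.0, size=200))
out = simulate_linear_if(t_in)
print(len(out))
```

With these assumed numbers the mean drift is positive (input drive w times input rate exceeds the leak), so the neuron fires regularly; lowering the input rate below leak/w makes firing rely on fluctuations, the regime in which the reflecting barrier matters.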
In the present work we take a step towards the silicon implementation of recurrent neural networks with massive feedback, exhibiting attractor behavior. Our main motivation is the belief that attractor networks should be considered key building blocks of systems that include, downstream of a possibly neuromorphic sensory system, complex processing stages, for example effecting a classification of the sensory input or accumulating information about it for a decision to be taken [9]. It has long been recognized that, for recurrent networks with high levels of feedback, the strength of synaptic connections can be chosen such that the network can store and retrieve prescribed patterns of collective activation as 'memories' [10][11]. Given the initial state of the network, set by an external stimulus, the network dynamics relax to the closest fixed-point attractor (stored pattern), up to small fluctuations: the network works as an 'associative memory', retrieving a prototypical memorized representation for a whole class of stimuli, which defines the 'basin of attraction'. If a stimulus is applied and then released, the attractor property of the stored patterns allows the network to sustain a persistent activity pattern which is selective for the stimulus (if it is close enough to a stored memory) and stable in its absence. The network behaves essentially as a bistable system, with two stationary states of low and elevated firing activity, to be associated with the 'spontaneous' activity state and a selective state triggered by the stimulus.

P. Camilleri and J. Braun are with the Department of Cognitive Biology, Otto-von-Guericke University, Leipziger Str. 44 / Haus 91, 39120 Magdeburg, Germany (email: patrick.camilleri@ovgu.de). M. Giulioni, M. Mattia, and P. Del Giudice are with the Department of Technologies and Health, Istituto Superiore di Sanità, V.le Regina Elena 299, 00161 Rome, Italy (email: massimiliano.giulioni@iss.infn.it).
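The bistable behavior described above can be illustrated with a minimal mean-field rate model: a single population whose firing rate relaxes towards a sigmoidal transfer function of its recurrent input. The sigmoid here is a generic stand-in for the effective transfer function measured on chip, and every parameter value is an assumption chosen only to produce two stable fixed points, not a value from the paper.

```python
import numpy as np

def phi(x, r_max=50.0, x0=25.0, s=5.0):
    """Sigmoidal rate transfer function (stand-in for the ETF)."""
    return r_max / (1.0 + np.exp(-(x - x0) / s))

def run(r0=0.0, w=1.0, tau=0.02, dt=1e-3, n_steps=3000):
    """Mean-field rate dynamics tau * dr/dt = -r + phi(w*r + I_ext(t)).

    With these (assumed) parameters, r = phi(w*r) has two stable
    fixed points: a low 'spontaneous' state and a high selective
    state. A brief external kick switches the population from the
    low to the high attractor, where it persists after the
    stimulus is removed.
    """
    r, trace = r0, []
    for step in range(n_steps):
        t = step * dt
        i_ext = 30.0 if 1.0 <= t < 1.2 else 0.0   # transient stimulus
        r += dt / tau * (-r + phi(w * r + i_ext))
        trace.append(r)
    return np.array(trace)

trace = run()
# Rate just before the stimulus (low state) vs. at the end (high state):
print(round(trace[900], 2), round(trace[-1], 2))
```

The defining signature of attractor dynamics is visible in the trace: activity stays elevated long after the stimulus ends, sustained purely by recurrent feedback.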
The above properties make attractor networks of spiking neurons especially suited to provide a dynamic correlate of the persistent neural activity observed in cortex (for example, but not only, in infero-temporal cortex [12] and in prefrontal cortex [13]) in tasks requiring information about a stimulus to be held active in working memory after the stimulus has been removed, for later use in the task. Standard examples include Delayed Match-to-Sample (DMS) tasks [14], in which the subject is required to report whether a briefly shown sample image is the same as a match image shown after a delay, or Pair Association tasks [15], in which one of two images shown after the delay has to be chosen, according to a prescribed correspondence with the one shown before the delay.

Attractor models have been developed and improved to account for a wide array of experimental evidence related to working memory. It is also becoming increasingly clear that this dynamic scheme has a wider scope. Models based on bistable or multistable networks have been proposed as theoretical underpinnings for understanding perceptual decision mechanisms and processes of information integration [16][11], as well as multi-stable perception and binocular rivalry [17]. It thus appears that attractor networks could be considered general-purpose processing elements, worth the effort of implementing in silicon, in view of complex neuromorphic systems. In the present work we do not consider the unsupervised buildup of stimulus-driven synaptic modifications leading the network to support attractor states, but assign values to the synaptic efficacies such that the resulting neural dynamics exhibit attractor behavior, and check its match with theoretical predictions (though a specific form of Hebbian plasticity is implemented in the chip, and will be used to study the dynamic generation of attractor states in future work).