ASSOCIATIVE MEMORY IMPLEMENTATION WITH ARTIFICIAL NEURAL NETWORKS

Santosh Saraf 1, A. M. Bhavikatti 2
1 Research Scholar, Electronics and Communication Engineering, JJTU, Rajasthan, India
2 Professor, Electronics and Communication Engineering, BKIT, Bhalki, Karnataka, India

Abstract
The first described ANN integrated circuit implemented a continuous-time analog circuit for AM. The design used a 22 x 22 matrix with 20,000 transistors, averaging 40 transistors per node, to implement a Hopfield AM network. The design faced a scalability challenge at higher levels of integration. The paper advocates handling larger problems with a collection of smaller networks or with hierarchical solutions, while predicting that "significantly different connection technologies" would be essential for success in larger systems.

Keywords: Associative Memory (AM), CMOS (Complementary Metal Oxide Semiconductor), Artificial Neural Network (ANN), Bayesian Memory Module (BMM), Field Programmable Gate Arrays (FPGA)

1. INTRODUCTION
Learning is the way we acquire knowledge about the world around us, and it is through this process of knowledge acquisition that the environment alters our behavioral responses. Learning allows us to store and retain knowledge; it builds our memories. In a neurobiological context, memory refers to the relatively enduring neural alterations induced by the interactions of an organism with its environment. Without such a change, there is no memory. The memory must be useful and accessible to the nervous system so that it can influence future behavior. Memory and learning are intricately connected. When a particular activity pattern is learned, it is stored in the brain, where it can be recalled later when required. Learning encodes the information: a system learns a pattern if it encodes the pattern in its structure, and that structure changes as the system learns. Learning therefore involves change that can be represented in memory and drawn on for future behavior.

2. RELATED WORKS
The seminal work by Sage and Withers built AMs using discrete-time analog technology for high-speed computation, in combination with analog nonvolatile storage for the synaptic weights. The network demonstrated was a 9 x 9 Hopfield [8] associative memory network. The issue with the design was that, although the synaptic weights could adapt dynamically, each weight had only three possible states (1, 0, -1). The network could therefore demonstrate learning for only a few specific computations. The message from this study was that a continuous range of weights would be a desirable feature for the synapses. In an attempt to achieve high-resolution synaptic weights, Schwartz and Howard proposed representing each weight as the difference in voltage between two capacitors. With the additional circuitry for sense amplifiers, a 32 x 32 matrix with 75,000 transistors averaged 70 transistors per neural node. This level of integration required scaling the components to nanoscale dimensions and further simplifying the node design.
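To make the three-state weight limitation concrete, the following is a minimal Python/NumPy sketch (not any of the circuits surveyed above) of a Hopfield associative memory trained with the Hebbian outer-product rule, with an option to quantize each weight to {-1, 0, +1} as in the Sage-and-Withers design. The function names, pattern sizes, and iteration cap are illustrative assumptions.

```python
import numpy as np

def train_hopfield(patterns, ternary=False):
    """Hebbian (outer-product) training of a Hopfield associative memory.

    patterns : array of shape (P, N) with entries in {-1, +1}.
    ternary  : if True, quantize each weight to {-1, 0, +1}, mimicking
               the three-state synapses of the Sage-and-Withers design.
    """
    P, N = patterns.shape
    W = patterns.T @ patterns / N        # sum of outer products
    np.fill_diagonal(W, 0.0)             # no self-connections
    if ternary:
        W = np.sign(W)                   # collapse to {-1, 0, +1}
    return W

def recall(W, probe, steps=20):
    """Synchronous recall: iterate s <- sign(W s) until it settles
    or the step cap is hit."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1            # break ties deterministically
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Store two 9-bit patterns and probe with a one-bit-corrupted copy;
# with this few stored patterns, recall usually restores the original.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 9))
W = train_hopfield(patterns, ternary=True)
probe = patterns[0].copy()
probe[0] *= -1                           # flip one bit as "noise"
print("target :", patterns[0])
print("recall :", recall(W, probe))
```

With only a few stored patterns, the ternary network can still complete noisy probes, but the quantization discards most of the weight resolution, which is consistent with the study's conclusion that a continuous weight range is desirable.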
Holler [7] proposed using floating-gate technology to represent the synaptic weights and so achieve a higher synapse density, but the design has electrically programmable static weights, and the dynamics of input presentation has no bearing on the real-time network associations. An 8 x 8 matrix of digitally stored weights gates the inhibitory/excitatory pulse stream from a 4 x 4 input layer (a behavioral sketch of this gating scheme appears at the end of this section). The pulse-stream generation, integration, and modulation result in much lower densities (140 transistors per neural node) than the aforementioned designs.
Among biological applications, Lyon et al. [10] implemented an electronic analog equivalent of the human cochlea (inner ear). The design uses CMOS transconductance-amplifier circuits, follower-integrator circuits, and second-order filter circuits to emulate perceptron machines. The authors see inherent deficiencies in digital threshold logic and emphasize the need for high-density analog learning-based implementations for more precise biological equivalence.
Hammerstrom et al. [5] demonstrated one of the first custom digital ANN processors, CNAPS. The CNAPS architecture, customized for ANN simulations, offered significant performance-versus-cost improvements over arrays of commercial microprocessors. The authors proposed that further speed-ups could be achieved by exploiting the high-speed memory structure and the inherent parallelism of field-programmable gate arrays (FPGAs). Along the lines of exploiting the FPGA
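As referenced above for the Holler design, the following Python fragment is a rough behavioral sketch (not the floating-gate circuit itself) of pulse-stream synapses: each input emits a stochastic pulse train whose density encodes its activation, and a digitally stored signed weight gates each pulse so that the accumulated count approximates a weighted sum. All names, the weight values, and the tick count are illustrative assumptions.

```python
import numpy as np

def pulse_stream_synapse(weights, input_rates, ticks=1000, rng=None):
    """Rate-coded sketch of pulse-stream synapses.

    weights     : (n_out, n_in) array of signed digital weights.
    input_rates : per-input pulse densities in [0, 1].
    Each tick, every input fires with probability equal to its rate;
    gated pulses add (excitatory) or subtract (inhibitory) charge,
    so the time-averaged count approximates weights @ input_rates.
    """
    rng = rng or np.random.default_rng(0)
    n_out, n_in = weights.shape
    acc = np.zeros(n_out)
    for _ in range(ticks):
        pulses = rng.random(n_in) < input_rates   # stochastic pulse train
        acc += weights @ pulses                   # digitally gated pulses
    return acc / ticks

weights = np.array([[3, -2], [-1, 4]])   # digitally stored signed weights
rates = np.array([0.5, 0.25])
print(pulse_stream_synapse(weights, rates))   # stochastic estimate
print(weights @ rates)                        # exact reference
```

The design trade-off the survey notes follows from this scheme: multiplications become simple pulse gating, which lowers per-synapse transistor counts, at the cost of the extra pulse generation, integration, and modulation circuitry.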