Efficient Implementation of Bidirectional Associative
Memories on the Extended Hypercube
J. Mohan Kumar* and L.M. Patnaik*,**
*Microprocessor Applications Laboratory
**Department of Computer Science and Automation &
Supercomputer Education and Research Center
Indian Institute of Science
BANGALORE 560 012 INDIA
lalit@vigyan.ernet.in
Abstract
Bidirectional associative memories (BAMs) are being used extensively for solving a variety of
problems related to pattern recognition. The simulation of BAMs comprising a large number
of neurons involves intensive computation and communication. In this paper we discuss the
implementation of bidirectional associative memories on various multiprocessor topologies.
Our studies reveal that BAMs can be implemented efficiently on the extended hypercube
topology, since its performance is better than that of the binary hypercube topology.
Keywords: Bidirectional Associative Memories, Hypercube, Extended Hypercube, Simula-
tion, Total Exchange.
I Introduction
One of the important features of artificial neural networks (ANNs) is the associative storage
and retrieval of knowledge [1]. The BAM is an example of forward and reverse information flow
introduced in neural networks to produce two-way associative search [2]. The BAM behaves
as a two-layer hierarchy of symmetrically connected neurons. Neurons of one layer receive
weighted inputs from all the neurons of the other layer. Simulation of BAMs comprising a
large number of neurons is a compute-intensive task. In this paper we discuss the simulation
of BAMs on multiprocessor topologies, in particular the extended hypercube topology.
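The two-layer structure described above can be illustrated with a minimal sequential sketch (our own illustration, not the parallel implementation discussed in the paper; function names are ours). The weight matrix is formed as a sum of outer products of associated bipolar pattern pairs, and recall alternates forward and backward passes through the two layers until the state stabilises:

```python
import numpy as np

def bam_weights(X, Y):
    """Learning phase: W = sum_i x_i y_i^T over bipolar pattern pairs (x_i, y_i)."""
    return sum(np.outer(x, y) for x, y in zip(X, Y))

def recall(W, a, steps=10):
    """Recall phase: alternate forward (layer A -> B) and backward
    (layer B -> A) passes until the layer-A state stops changing."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(steps):
        b = sign(a @ W)        # forward pass: layer B output
        a_new = sign(b @ W.T)  # backward pass: layer A output
        if np.array_equal(a_new, a):
            break
        a = a_new
    return a, b

# Store one association and recall it from a noisy cue (last bit flipped).
X = [np.array([1, -1, 1, -1])]
Y = [np.array([1, 1, -1])]
W = bam_weights(X, Y)
a, b = recall(W, np.array([1, -1, 1, 1]))
```

Each recall iteration performs two matrix–vector products, one per direction; it is this repeated exchange of layer outputs that, in a multiprocessor setting, turns into the inter-processor communication the paper is concerned with.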
Several researchers have reported implementations of ANNs on multiprocessor topologies [3],
[4], [5]. However, most of these reports [4], [5] discuss the implementation of the backpropa-
gation network on multiprocessor systems. Simulation of BAMs involves extensive communi-
cation among the processing elements (PEs) of the multiprocessor system. BAM simulation
consists of determining the weight matrix during the learning phase and computing the
outputs in either direction during the recall phase. In a multiprocessor implementation,
CH 3065-0/91/0000-2253 $1.00©IEEE