Anti-Hebbian synapses as a linear equation solver

Kechen Zhang*, Giorgio Ganis*, Martin I. Sereno
Department of Cognitive Science
University of California, San Diego
La Jolla, California 92093-0515
{kzhang, ganis, sereno}@cogsci.ucsd.edu

*These authors have made equal contributions to this work. KZ's present address: Computational Neurobiology Laboratory, The Salk Institute, La Jolla, California 92037.

Abstract. It is well known that Hebbian synapses, with appropriate weight normalization, extract the first principal component of the input patterns (Oja 1982). Anti-Hebb rules have been used in combination with Hebb rules to extract additional principal components or generate sparse codes (e.g., Rubner and Schulten 1990; Földiák 1990). Here we show that simple anti-Hebbian synapses alone can support an important computational function: solving simultaneous linear equations. During repetitive learning with a simple anti-Hebb rule, the weights onto an output unit always converge to the exact solution of the linear equations whose coefficients correspond to the input patterns and whose constant terms correspond to the biases, provided that the solution exists. If there are more equations than unknowns and no solution exists, the weights approach the values obtained by using the Moore-Penrose generalized inverse (pseudoinverse). No explicit matrix inversion is involved and there is no need to normalize weights. Mathematically, the anti-Hebb rule may be regarded as an iterative algorithm for learning a special case of the linear associative mapping (Kohonen 1989; Oja 1979). Since solving systems of linear equations is a very basic computational problem to which many other problems are often reduced, our interpretation suggests a potentially general computational role for anti-Hebbian synapses and a certain type of long-term depression (LTD).

Suppose we have $n$ input variables $a_1, \ldots, a_n$ and a linear unit whose output $y$ is the weighted sum

$$y = \sum_{i=1}^{n} w_i a_i - b,$$

where the variable $b$ is the bias, or an additional input variable with constant weight $-1$. The weights are modified according to the simple anti-Hebb rule

$$\Delta w_i = -\varepsilon\, y\, a_i,$$

where the learning rate $\varepsilon > 0$ is a small constant. The weight increments of the anti-Hebb rule presented here become identical to those of the Widrow-Hoff delta rule for supervised learning if the bias is equal to the "desired output". Nonetheless, the actual output of the linear unit has different values in the two cases whenever the bias is nonzero. If the weight for the bias term
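The equivalence with the Widrow-Hoff increments follows from one line of algebra, spelled out here for clarity (the symbols $d$ for the desired output and $\hat{y} = \sum_j w_j a_j$ for the supervised prediction are notation introduced for this note, not the paper's):

$$\Delta w_i \;=\; -\varepsilon\, y\, a_i \;=\; \varepsilon \Bigl( b - \sum_{j=1}^{n} w_j a_j \Bigr) a_i \;=\; \varepsilon\, (d - \hat{y})\, a_i \quad \text{with } d = b.$$

The anti-Hebbian unit's actual output $y = \hat{y} - b$ nonetheless differs from the supervised unit's output $\hat{y}$ whenever $b \neq 0$, which is the distinction drawn above.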
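As a concrete illustration of the convergence claims, here is a minimal numerical sketch in Python/NumPy. It is not from the paper; the particular system, biases, learning rates, and iteration counts are illustrative assumptions chosen only to make the behavior visible.

```python
import numpy as np

# Sketch of the anti-Hebb rule as a linear equation solver.
# Illustrative system A w = b (not from the paper): each row of A is an
# input pattern, and the matching entry of b is the bias presented with it.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  2.0]])
b = np.array([1.0, 4.0, 3.0])

eps = 0.02                       # small learning rate, eps > 0
w = np.zeros(3)                  # initial weights

# Repetitive learning: present each (pattern, bias) pair in turn,
# compute y = w . a - b, and apply  delta w_i = -eps * y * a_i.
for _ in range(5000):
    for a, beta in zip(A, b):
        y = w @ a - beta
        w -= eps * y * a

print(w)                         # ~ the exact solution of A w = b
print(np.linalg.solve(A, b))    # direct solution, for comparison

# Overdetermined case: four equations, three unknowns, no exact solution.
# With a small constant eps the weights hover near the least-squares
# (Moore-Penrose pseudoinverse) solution rather than settling exactly.
A2 = np.vstack([A, [1.0, 1.0, 1.0]])
b2 = np.append(b, 0.0)
w2 = np.zeros(3)
for _ in range(5000):
    for a, beta in zip(A2, b2):
        y = w2 @ a - beta
        w2 -= 0.005 * y * a

print(w2)
print(np.linalg.pinv(A2) @ b2)  # pseudoinverse solution, for comparison
```

In the inconsistent case, a constant learning rate leaves small steady-state fluctuations around the pseudoinverse values; shrinking $\varepsilon$ over time (e.g., $\varepsilon_t \propto 1/t$) can remove them, consistent with the statement that the weights approach the Moore-Penrose solution.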