A New Online Unsupervised Learning Rule for the BSB Model

Sylvain Chartier & Robert Proulx
Laboratoire d'études en intelligence naturelle et artificielle
Université du Québec à Montréal, Département de Psychologie
C.P. 8888, succ. Centre-ville, Montréal, Qc, H3C 3P8, Canada
chartier@leina.uqam.ca
proulx.robert@uqam.ca

Abstract

This paper demonstrates that a new unsupervised learning rule enables non-linear models, such as the BSB model and the Hopfield network, to learn correlated stimuli online. The rule stabilizes the growth of the weight matrix toward the projection rule in a local fashion. Computer simulations show that the model is stable over variations of its free parameters and is noise tolerant in the recall task.

1. Introduction

The usefulness of unsupervised learning algorithms in artificial neural networks lies in their ability to implement adaptive categorization naturally, without postulating access to pre-existing information from outside the system. Moreover, in a recurrent architecture, such models can also categorize new exemplars from previously learned categories. One example of such a model is the BSB neural network first introduced by Anderson et al. [1]. This model, like any other neural network model, is completely specified by its architecture, its transmission rule, and its learning rule.

2. The BSB neural network

The BSB architecture is illustrated in Figure 1. The connections are autoassociative; in other words, a given stimulus vector is associated with itself.

Figure 1: Illustration of the architecture of the BSB.

Transmission in this network is expressed by the following rule:

x[t+1] = L[Wx[t] + x[t]],  t = 1…T    (1)

where x[t] is the state vector representing the activity of the units in the network at time t, W is the weight matrix, and L[ ] is a piecewise-linear function that constrains the activity of the units to a hypercube.
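The transmission rule of Eq. (1) can be sketched in NumPy. The function names and the one-pattern Hebbian weight matrix below are illustrative assumptions for the demonstration; they are not the learning rule proposed in the paper:

```python
import numpy as np

def L(a):
    """Piecewise-linear saturation: clips each unit's activity to [-1, 1]."""
    return np.clip(a, -1.0, 1.0)

def bsb_step(W, x):
    """One transmission cycle of Eq. (1): x[t+1] = L[Wx[t] + x[t]]."""
    return L(W @ x + x)

def bsb_recall(W, x0, T=50):
    """Iterate the transmission rule for up to T cycles or until a fixed point."""
    x = x0.copy()
    for _ in range(T):
        x_next = bsb_step(W, x)
        if np.array_equal(x_next, x):  # attractor reached
            return x
        x = x_next
    return x

# Illustrative weight matrix: outer product of a single bipolar pattern
p = np.array([1.0, -1.0, 1.0, -1.0])
W = np.outer(p, p) / p.size
print(bsb_recall(W, 0.5 * p))  # a degraded input is driven back to the corner p
```

With this toy W, a half-strength version of the stored pattern is amplified by the recurrence and saturated by L until it reaches the stored corner of the hypercube.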
More formally, the piecewise-linear function is expressed by

L[a] =  1,  if a > 1
       -1,  if a < -1
        a,  otherwise    (2)

The recurrence means that the states of the input vector influence its subsequent states. Since the activity is hard limiting, after a finite number of cycles the state vector settles into an attractor, a corner of the hypercube.
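The hard limiting of Eq. (2) and the resulting corner attractors can be illustrated with a toy two-unit network; the diagonal weight matrix here is only an assumption chosen to give positive feedback, not a matrix produced by the paper's learning rule:

```python
import numpy as np

def L(a):
    """Eq. (2): L[a] = 1 if a > 1, -1 if a < -1, a otherwise."""
    return np.where(a > 1, 1.0, np.where(a < -1, -1.0, a))

# Illustrative positive feedback: each cycle multiplies the state by 1.5
# until hard limiting traps it in a corner of the hypercube [-1, 1]^2.
W = 0.5 * np.eye(2)
x = np.array([0.1, -0.2])
cycles = 0
while True:
    x_next = L(W @ x + x)          # transmission rule, Eq. (1)
    cycles += 1
    if np.array_equal(x_next, x):  # fixed point: a corner of the hypercube
        break
    x = x_next
print(x, cycles)
```

Starting from a small interior state, the activity grows each cycle and is clipped component by component, so after a finite number of cycles the state stops changing at the corner (1, -1), exactly the attractor behaviour described above.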