REVIEW

Chaos breeds autonomy: connectionist design between bias and baby-sitting

Cees van Leeuwen

Received: 23 March 2007 / Revised: 20 September 2007 / Accepted: 21 September 2007
© Marta Olivetti Belardinelli and Springer-Verlag 2007

Abstract  In connectionism and its offshoots, models acquire functionality through externally controlled learning schedules. This undermines the claim of these models to autonomy. Providing these models with intrinsic biases is not a solution, as it makes their function dependent on design assumptions. Between these two alternatives, there is room for approaches based on spontaneous self-organization. Structural reorganization in adaptation to spontaneous activity is a well-known phenomenon in neural development. It is proposed here as a way to prepare connectionist models for learning and enhance the autonomy of these models.

Keywords  Small world · Non-linear dynamics · Perception · Spontaneous activity · Complex systems · Evolving and growing neural networks · Cognitive modeling

Connectionism and autonomy

Cognitive science entered an important chapter of its history when the connectionist program (Rumelhart and McClelland 1986) challenged the longstanding monopoly of the symbolic approach to cognitive modeling (Fodor 1975, 1981; Pylyshyn 1984). Connectionism offered an approach to cognition based on neural network modeling, a mathematical tool originating in the 1940s based on work by Donald Hebb, Warren McCulloch and Walter Pitts, and Alan Turing. Connectionism has since evolved into an interdisciplinary project, taking in contributions from cognitive psychology, neuroscience, and branches of mathematics such as statistics, nonlinear dynamical systems, and control theory. Neural network modeling, however, has remained its central tool.1

C. van Leeuwen (✉)
Laboratory for Perceptual Dynamics, RIKEN BSI,
2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan
e-mail: ceesvl@brain.riken.jp
URL: http://pdl.brain.riken.jp/

1 Neural networks consist of units (typically, but not necessarily, thought of as neurons) that are characterized by their activation values. These values change dynamically, as a function of activity received through network connections. In addition, a subset of units is able to receive external input; in another subset, the activation values are read off as output. Functions are realized through these networks by feeding values to the input units, letting the activation values of the whole system evolve for some time, and then reading off the values of the output units. The class of functions that can thus be realized is equivalent to that of symbolic computation (Siegelmann and Sontag 1991), making connectionism an equal contender in this respect. The connections that determine the movements of activity within a network can be adjusted in strength adaptively, in analogy to synaptic connections between neurons. In a classical Hebbian framework, these changes depend on the activity of the pre- as well as the postsynaptic neuron. This means that in the model, the weight of a connection from one unit in the network to another is adjusted in proportion to the product of both activations. This enables a network to learn associations between patterns of input activation, intermediate internal states and, ultimately, output activity.

Through adaptive weight adjustment, functions that in symbolic architectures had to be designed and programmed (Dreyfus 1992; Searle 1990) could be learned from experience. To date, classical Hebbian weight adjustment procedures have mostly been replaced by more elaborate methods. Some of these have factored the deviation from desired behavior into the weight adjustment rule.
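The classical Hebbian rule described in the footnote can be sketched in a few lines of code. This is a minimal illustration, not part of the original article: the learning rate eta, the network size, and the specific activation values are assumptions chosen for clarity.

```python
import numpy as np

def hebbian_update(weights, pre, post, eta=0.1):
    """Classical Hebbian weight adjustment: the weight w[i, j] of the
    connection from pre-synaptic unit j to post-synaptic unit i is
    increased in proportion to the product of both activations."""
    return weights + eta * np.outer(post, pre)

# Two input (pre-synaptic) units feeding one output (post-synaptic) unit.
w = np.zeros((1, 2))
pre = np.array([1.0, 0.0])   # only the first input unit is active
post = np.array([1.0])       # the output unit is active
w = hebbian_update(w, pre, post)
# Only the connection from the co-active input unit is strengthened;
# the connection from the silent unit is left unchanged.
```

Because the update depends on the product of activations, repeated co-activation strengthens exactly those connections that link units which fire together, which is how such a network comes to associate input patterns with output activity.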
This has led to learning rules based on least-squares approximation techniques such as back-propagation (Rumelhart et al. 1986) or maximum likelihood estimation, as in the Boltzmann machine (Ackley et al. 1985). Others have advocated neurally more plausible mechanisms such as spike-timing dependent plasticity (Kistler and van Hemmen 2000; Song and Abbott 2001). In all these cases, however, the core concept is still the ability to learn through adaptive weight adjustment.

Cogn Process
DOI 10.1007/s10339-007-0193-8
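For a single linear unit, factoring the deviation from desired behavior into the weight adjustment reduces to the least-squares delta rule, the simplest ancestor of back-propagation. The sketch below is illustrative only; the target function, learning rate, and number of training steps are assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single linear unit learns target = x1 + x2 by least squares.
w = np.zeros(2)   # connection weights, initially zero
eta = 0.1         # illustrative learning rate

for _ in range(500):
    x = rng.uniform(-1, 1, size=2)   # input pattern
    target = x[0] + x[1]             # desired output
    output = w @ x                   # actual output of the unit
    error = target - output          # deviation from desired behavior
    w += eta * error * x             # delta rule: gradient step on squared error

# w converges toward the generating weights [1, 1]
```

Unlike the purely Hebbian update, the adjustment here is driven by an externally supplied error signal, which is precisely the kind of supervised schedule whose compatibility with autonomy the article goes on to question.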