Critical Branching Neural Computation

Christopher T. Kello and Marshall R. Mayberry

Abstract—Liquid state machines have been engineered so that their dynamics hover near the “edge of chaos” [1], [2], where the memory and representational capacity of the liquid were shown to be optimized. Previous work found the critical line between ordered and chaotic dynamics for threshold gates by using an analytic method similar to finding Lyapunov exponents [3]. In the present study, a self-tuning algorithm is developed for use with leaky integrate-and-fire (LIF) neurons that adjusts postsynaptic weights to a critical branching point between subcritical and supercritical spiking dynamics. The tuning algorithm stabilizes spiking activity in the sense that spikes propagate through the network without multiplying to the point of wildfire activity, and without dying out so quickly that information cannot be transmitted and processed. The critical branching point is also found to maximize the memory and representational capacity of the network when used as a liquid state machine.

I. INTRODUCTION

Nervous systems tend to be characterized by recurrent loops across a wide range of spatial and temporal scales [4]. In particular, if one traces the branching of synaptic connections projecting out from a given starting neuron, numerous branches can be found to recurrently connect back to the starting neuron. These recurrent loops may consist of a wide range of intervening numbers of neurons, and intervening neurons may range from spatially proximal to distal with respect to the starting neuron.

Spiking dynamics are thresholded and thus inherently nonlinear. When spiking dynamics are instantiated in recurrent loops of various scales, the resulting collective activity is often associated with chaotic dynamics [5], and is considered complex in this regard.
Model systems have been proven to be chaotic [6], and real nervous systems have been observed to exhibit signatures of chaotic dynamics, e.g., in terms of Lyapunov exponents [7] and entropic measures of collective neural activity [8].

Evidence for near-chaotic neural dynamics has led researchers to consider whether this property of complexity might be important for neural information transmission and processing [9], rather than just a by-product of nonlinearities and recurrent loops. One possibility is that near-chaotic dynamics are essential to producing metastable, responsive spiking activity [10]. Neural networks at all scales (from microcircuits to subcortical and cortical structures to whole brains) must be sensitive to external inputs, where external inputs are spikes originating from outside the network in question. If incoming spikes are unable to create or perturb spiking activity in the network, then information will not be transmitted or processed. However, networks can also be overly sensitive to perturbations, in which case incoming spikes may spread like wildfire through the system, again diminishing the ability of incoming spikes to perturb ongoing activity.

A. Critical Branching

A balance can be struck between unresponsive and overly responsive spiking networks by relating their dynamics to a critical branching process [11]. Critical branching processes describe sequences of discrete events, where each “ancestor” event may cause some number of subsequent “descendant” events, which in turn lead to further event branching. The expected number of descendant events per ancestor event is described by σ: the ratio of the number of descendant events to the number of ancestor events. Spikes are the events in our case, and external inputs can be described as ancestor spikes that cause descendant spikes in a given neural system.
If σ < 1, then the number of spikes will diminish over time, and information will not be transmitted through the system in the form of propagating spiking activity. If σ > 1, then the number of spikes will grow over time and eventually come to saturate the network. Critical branching occurs at σ = 1, the point at which spikes are conserved over time and can thus propagate through the system without dying out or running rampant.

Electrophysiological recordings of neural activity – both in vitro [12], [13] and in vivo [15] – have provided evidence for critical-branching dynamics. The evidence comes in the form of so-called “neural avalanches” that describe distributions of bursts of neural activity. In slice preparations of rat somatosensory cortex, local field potentials were recorded to measure spontaneous neural activity. This activity was found to occur in bursts that spread over the tissue, and burst sizes were measured as the amount of activated tissue in each time interval, summed over consecutive time intervals of detected activity. An analogous method was used to measure bursts from human electroencephalogram (EEG) recordings during the resting state. In both cases, burst sizes were found to be power-law distributed with an exponent of -3/2 (see Fig. 1).

Probabilistic models of critical branching have been shown to simulate power-law distributions of recorded neural activity (Fig. 1; [11]), but due to their probabilistic nature, they do not have memory or representational capacity. Specifically, previous models did not include terms for simulating membrane potentials, action potentials, and weights on synaptic connections. Thus, they cannot serve as mechanistic models of neural computation. To our knowledge, one critical-branching model has been reported that uses linear integrate-and-fire neurons and all-to-all connectivity among neurons [16], but the model’s memory and representational capacities were not tested.
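The three regimes of σ can be illustrated with a minimal branching-process simulation. This is a sketch of our own, not the model described in this paper: the Binomial(2, σ/2) offspring rule and the size cap are illustrative assumptions, chosen only so that the expected number of descendant spikes per ancestor spike equals σ.

```python
import random

def avalanche_size(sigma, rng, cap=10_000):
    """Total number of spikes in one avalanche of a branching process.

    Starts from a single ancestor spike. Each spike independently
    produces Binomial(2, sigma/2) descendant spikes, so the expected
    number of descendants per ancestor is exactly sigma. Simulation
    stops when the avalanche dies out or the size cap is reached.
    """
    active, total = 1, 1
    while active and total < cap:
        children = sum(
            (rng.random() < sigma / 2) + (rng.random() < sigma / 2)
            for _ in range(active)
        )
        active = children
        total += children
    return total

rng = random.Random(42)
for sigma in (0.5, 1.0, 1.5):
    sizes = [avalanche_size(sigma, rng) for _ in range(2000)]
    saturating = sum(s >= 10_000 for s in sizes) / len(sizes)
    print(f"sigma={sigma}: mean size={sum(sizes)/len(sizes):.1f}, "
          f"fraction saturating={saturating:.2f}")
```

Subcritical avalanches (σ = 0.5) die out quickly (expected total size 1/(1 − σ) = 2), supercritical ones (σ = 1.5) usually grow until they saturate the cap, and critical ones (σ = 1) span a broad range of sizes, consistent with the heavy-tailed avalanche distributions described above.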
The authors also describe an algorithm for tuning synaptic connection weights to the