NATURE NEUROSCIENCE VOLUME 19 | NUMBER 3 | MARCH 2016 375
Since the first neural recordings by Adrian in 1926 (ref. 1), it has become accepted wisdom that neurons communicate information with their firing rates—that is, the number of spikes within a certain time window or across a population of neurons. Indeed, the information encoded by neurons is still usually extracted by studying their trial-averaged, time-varying firing rates. However, when neural recordings in cortical and other brain areas found that the spike trains of individual neurons are highly irregular, almost random, the firing rate hypothesis ran into two serious conundrums (Fig. 1).
First, firing spikes at random times, as in a Poisson point process, seems a particularly foolish idea when the goal is to convey information in spike counts (Fig. 1a,b). When discrete spike counts are used to represent continuous numbers, the achievable precision is limited solely by the unavoidable discretization. For a given number of spikes M, the minimum error therefore scales with 1/M. However, because of its unreliability, the error for a Poisson rate code scales with 1/√M. As a result, neurons using a Poisson code need to fire a huge excess of spikes to reach a given level of precision (Fig. 1c). Consequently, the neural code chosen by the brain—firing rates to be inferred from unreliable spike trains—seems incongruous. Should evolution not have stumbled on a ‘better’ design?
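The 1/M versus 1/√M gap can be checked numerically. The sketch below is our own minimal illustration, not from the original article; the function name and all parameter values are arbitrary. It encodes a value x drawn uniformly from [0, 1] with M spikes, either deterministically (a clock-like count, whose only error is discretization) or as a Poisson count with mean xM:

```python
import numpy as np

def coding_errors(M, n_trials=200_000, seed=0):
    """Mean absolute error of encoding x ~ U(0, 1) with M spikes."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=n_trials)
    # Clock-like code: round(x * M) spikes; only discretization error, ~1/M.
    det = np.abs(np.round(x * M) / M - x).mean()
    # Poisson code: the count fluctuates around its mean x * M, so the
    # error is dominated by counting noise, ~1/sqrt(M).
    poi = np.abs(rng.poisson(x * M) / M - x).mean()
    return det, poi

for M in (10, 100, 1000):
    det, poi = coding_errors(M)
    print(f"M={M:5d}  deterministic={det:.4f}  poisson={poi:.4f}")
```

Increasing M a hundredfold shrinks the deterministic error by roughly a factor of 100 but the Poisson error only by roughly a factor of 10, which is the "huge excess of spikes" referred to above.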
Second, generating irregular spike trains within a network of recurrently connected neurons turns out to be a nontrivial problem. When random spike trains are integrated on a dendritic tree, the recipient neuron will generally produce a regular output spike train (ref. 2). So how can neurons ever produce irregular, Poisson-like spike trains?
Many solutions to the first problem, the ‘coding problem’, have been proposed (refs. 3–7), but mostly without specifying how the respective codes can be generated in recurrent neural networks. A simple solution to the second problem, the ‘implementation problem’, is to assume that the excitatory and inhibitory inputs to each neuron are balanced (refs. 8–12), a theoretical proposal that has been largely corroborated by various experimental observations (refs. 13–15). However, while the theory of balanced networks solves the problem of how to generate networks that produce irregular spike trains, and thereby accounts for Poisson rate codes, it sidesteps the question of why neural systems would represent information so inefficiently. Indeed, implementing even simple functions in these networks requires thousands of neurons.
Here we briefly review the literature on balanced networks and then focus on several recent theoretical studies that seek to reconcile the apparent randomness of spike trains with an efficient population code—that is, a code whose error scales with 1/M (refs. 16–21). These studies are based on networks in which the balance between excitatory and inhibitory inputs is temporally much tighter than in the original, ‘loosely’ balanced networks. We discuss the various theoretical benefits of these networks, which, besides the higher coding efficiency, include separate recurrent loops for coding and computation, and the ability to simultaneously represent almost as many variables as there are neurons in the network. We furthermore review several experimental studies that lend support to the notion of ‘tight’ balance. This recent body of work suggests that the irregularity and unreliability of spike trains at the single-neuron level coexist with a maximally efficient code at the population level.
Loosely balanced networks
We illustrate the relationship between the balance of excitatory and inhibitory input currents (E/I balance) and the variability of the output spike train in Figure 1d,e. Here an integrate-and-fire neuron is bombarded with noisy, Poisson-distributed spike trains from both excitatory and inhibitory sources. If excitation dominates, temporal averaging of the total input currents results in a mean positive drift of the membrane potential toward threshold, causing a relatively regular output spike train despite the high level of input noise (Fig. 1d). However, if excitation and inhibition cancel each other on a slower time scale yet are uncorrelated on a faster time scale, then the net input current will be dominated by these faster fluctuations, and the membrane potential will follow an uncorrelated random walk toward threshold, resulting in an output spike train with Poisson statistics (Fig. 1e). We will refer to this type of balance as ‘loose’ balance,
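The two input regimes of Figure 1d,e can be reproduced with a minimal leaky integrate-and-fire simulation. This is our own sketch, not the model used in the studies reviewed here; all parameter values are arbitrary and chosen only to make the contrast visible. The coefficient of variation (CV) of the interspike intervals is well below 1 for regular, drift-driven firing and approaches 1 for fluctuation-driven, Poisson-like firing:

```python
import numpy as np

def lif_cv(exc_rate, inh_rate, T=60.0, dt=1e-4, seed=1):
    """CV of the interspike intervals of a leaky integrate-and-fire
    neuron driven by Poisson excitatory and inhibitory input.
    Rates are summed population rates (Hz); parameters are illustrative."""
    rng = np.random.default_rng(seed)
    tau, v_th, v_reset = 0.02, 1.0, 0.0   # membrane time constant (s), threshold
    w_e, w_i = 0.05, 0.05                 # synaptic jump per input spike
    n = int(T / dt)
    # Presample the Poisson input counts for every time step.
    exc = rng.poisson(exc_rate * dt, n)
    inh = rng.poisson(inh_rate * dt, n)
    v, spikes = 0.0, []
    for i in range(n):
        v += (-v / tau) * dt + w_e * exc[i] - w_i * inh[i]
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
    isi = np.diff(spikes)
    return float(isi.std() / isi.mean())

# Excitation dominates: steady drift to threshold -> regular output (low CV).
print(lif_cv(exc_rate=4000.0, inh_rate=1000.0))
# E/I balanced on average: fluctuations drive spiking -> irregular output (CV near 1).
print(lif_cv(exc_rate=4000.0, inh_rate=4000.0))
```

Note that in the balanced condition the mean input cancels but the variance of the two input streams adds, which is exactly why the membrane potential performs the random walk described above.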
Efficient codes and balanced networks

Sophie Denève (1) & Christian K Machens (2)

(1) Laboratoire de Neurosciences Cognitives, École Normale Supérieure, Paris, France. (2) Champalimaud Centre for the Unknown, Lisbon, Portugal. Correspondence should be addressed to S.D. (sophie.deneve@ens.fr) or C.K.M. (christian.machens@neuro.fchampalimaud.org).

Received 3 November 2015; accepted 13 January 2016; published online 23 February 2016; doi:10.1038/nn.4243

REVIEW | FOCUS ON NEURAL COMPUTATION AND THEORY
© 2016 Nature America, Inc. All rights reserved.

Recent years have seen a growing interest in inhibitory interneurons and their circuits. A striking property of cortical inhibition is how tightly it balances excitation. Inhibitory currents not only match excitatory currents on average, but track them on a millisecond time scale, whether they are caused by external stimuli or spontaneous fluctuations. We review, together with experimental evidence, recent theoretical approaches that investigate the advantages of such tight balance for coding and computation. These studies suggest a possible revision of the dominant view that neurons represent information with firing rates corrupted by Poisson noise. Instead, tight excitatory/inhibitory balance may be a signature of a highly cooperative code, orders of magnitude more precise than a Poisson rate code. Moreover, tight balance may provide a template that allows cortical neurons to construct high-dimensional population codes and learn complex functions of their inputs.