Research Article
The Relationship between Sparseness and Energy Consumption of
Neural Networks
Guanzheng Wang,¹ Rubin Wang,¹,² Wanzeng Kong,² and Jianhai Zhang²

¹Institute for Cognitive Neurodynamics, School of Science, East China University of Science and Technology, Meilong Road 130, Shanghai 200237, China
²Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou Dianzi University, Zhejiang, China
Correspondence should be addressed to Rubin Wang; rbwang@ecust.edu.cn
Received 3 August 2020; Accepted 29 September 2020; Published 25 November 2020
Academic Editor: J. Michael Wyss
Copyright © 2020 Guanzheng Wang et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
Signaling accounts for about 50–80% of the total energy consumed by neural networks. A network with many active neurons consumes much energy, whereas a network with few active neurons consumes very little. The sparseness, that is, the ratio of active neurons to all neurons in a network, therefore determines the network's energy consumption. Laughlin's studies show that the sparseness of an energy-efficient code depends on the balance between signaling and fixed costs, but they give neither an exact ratio of signaling to fixed costs nor the ratio of active neurons to all neurons in the most energy-efficient neural networks. In this paper, we calculated the ratio of signaling costs to fixed costs from physiological data and found it to lie between 1.3 and 2.1. We then calculated the ratio of active neurons to all neurons in the most energy-efficient neural networks and found it to lie between 0.3 and 0.4. These results are consistent with data from many relevant physiological experiments, indicating that the model used in this paper may capture neural coding under realistic conditions. The calculations reported here may be helpful for the study of neural coding.
1. Introduction
Recent studies have shown that the firing of a single neuron is sufficient to influence learning and behavior [1, 2]. This result challenges the long-standing view that a behavioral response requires the firing of thousands of neurons. These findings provide a basis and support for the neural "sparse coding" hypothesis, which argues that a small number of neurons is enough to encode information [3–5]. In a sparse coding mode, only a small fraction of neurons are activated during signaling, and most neurons are responsible only for network connectivity [6–8]. Because sparse coding requires only a few firing neurons and little energy, it is an energy-efficient neural coding method [9, 10]. This energy-efficient coding pattern increases the information encoded per neuron and greatly improves energy efficiency [11, 12]. Although the sparse coding hypothesis has not yet been confirmed for neural networks in the cerebral cortex, sparse coding has been shown to maximize energy efficiency [13–15].
Wang et al. studied the information carried by neurons and the energy they consume [13]. They found that neurons are not at their most energy-efficient when encoding the maximum amount of information, and that the ratio of signaling to fixed costs affects the total energy consumed by neurons. Laughlin studied the sparseness and representational capabilities of neural networks [14] and found that, when neural networks have similar representational capabilities, the sparseness of the most energy-efficient coding pattern depends on the ratio of signaling to fixed costs. However, neither Wang et al. nor Laughlin determined the exact ratio of signaling to fixed costs: Wang et al. assumed a ratio between 10 and 200, and Laughlin examined only three cases, with ratios of 1, 10, and 100.
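To make this dependence concrete, the following sketch works through the calculation under a standard Levy–Baxter-style cost model (an assumption on our part; the model developed later in this paper may differ in detail). Each neuron is active with probability p, the representational capacity per neuron is taken to be the binary entropy H(p), and the energy cost per neuron is a fixed cost plus p times the signaling cost, so efficiency is bits per unit energy. The function names and the grid search below are illustrative only.

import numpy as np

# A minimal sketch, assuming capacity per neuron = binary entropy H(p) and
# cost per neuron = fixed + p * signaling (normalized so that fixed = 1).
# Efficiency is then H(p) / (1 + r * p), where r = signaling/fixed.

def binary_entropy(p):
    """Entropy (bits) of a neuron active with probability p."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def optimal_sparseness(r):
    """Activity ratio p that maximizes bits per unit energy for cost ratio r."""
    p = np.linspace(1e-4, 0.5, 100_000)
    efficiency = binary_entropy(p) / (1.0 + r * p)
    return p[np.argmax(efficiency)]

for r in (1, 1.3, 2.1, 10, 100):  # Laughlin's ratios plus the range found here
    print(f"signaling/fixed = {r:5}: most efficient sparseness = {optimal_sparseness(r):.3f}")

Under this assumed formulation, a ratio of 1 gives an optimal sparseness just under 0.4, ratios of 1.3 to 2.1 give roughly 0.31 to 0.36 (consistent with the 0.3–0.4 range reported above), and ratios of 10 and 100 push the optimum down to about 0.16 and 0.03, illustrating Laughlin's point that costlier signaling favors sparser codes.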