Brain-inspired computing

Steve B. Furber ✉
School of Computer Science, The University of Manchester, Manchester M13 9PL, UK
✉ E-mail: steve.furber@manchester.ac.uk

ISSN 1751-8601
Received on 1st October 2015; Revised on 14th November 2015; Accepted on 7th December 2015
doi: 10.1049/iet-cdt.2015.0171
www.ietdl.org

Abstract: The inner workings of the brain as a biological information processing system remain largely a mystery to science. Yet there is a growing interest in applying what is known about the brain to the design of novel computing systems, in part to explore hypotheses of brain function, but also to see if brain-inspired approaches can point to novel computational systems capable of circumventing the limitations of conventional approaches, particularly in the light of the slowing of the historical exponential progress resulting from Moore's Law. Although there are, as yet, few compelling demonstrations of the advantages of such approaches in engineered systems, a number of large-scale platforms have been developed recently that promise to accelerate progress both in understanding the biology and in supporting engineering applications. SpiNNaker (Spiking Neural Network Architecture) is one such large-scale example, and much has been learnt in the design, development and commissioning of this machine that will inform future developments in this area.

1 Introduction

There has recently been a significant increase worldwide in interest in, and funding for, research into brain function, exemplified by the European €1B ICT Flagship Human Brain Project and the US White House $300M BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative. Why is there such a consensus that the time is right for new initiatives in what remains a very challenging frontier of science?
Two complementary answers to this question suggest themselves:

† Computer technology is now (and only now) approaching the capability required to contemplate constructing large-scale computer models of the brain.
† At the same time, computer technology is approaching fundamental physical limits, motivating the quest for alternative approaches to the historic reliance on making transistors ever smaller. The brain is seen as a potential source of such alternative approaches to computation, but this is impeded by our partial understanding of how the brain works.

Of course, alongside these advances in computer technology there have been advances in neuroscience and a growing appreciation of how to build computer and electronic models of neural systems. A greater understanding of the brain should also facilitate progress in the development of treatments for the many debilitating diseases of the brain, but this is not a new issue, and while it may provide further support for the various international initiatives it does not explain the timing of them. Likewise, the natural human desire to expand our understanding of ourselves motivates research in this area – what could be more fundamental to understanding humanity than understanding the organ that embodies our personalities and memories, and determines our every action? – but this motivation is long-standing and does not explain ‘why now?’.

Research into the brain is, of course, not new. Neuroscientists have been engaged in the very demanding work of understanding the brain from the bottom up for more than a century, while psychologists have been pursuing a top-down approach to the problem for even longer, and some of the world’s great religions have been exploring consciousness and the nature of mind for millennia. More recently, brain-imaging machines have been added to the toolset.
However, the brain spans many orders of magnitude in scale, and there is a very large gulf between the scales that are tractable from the bottom up, even with today’s advancing multi-electrode array technology, and those that can be resolved from the top down with imaging techniques. Somewhere in this gulf are the most important scales for understanding information processing in the brain – how is information represented, communicated, processed and stored?

So far the only tools available to explore these intermediate scales are computer models, and computational neuroscientists have been exploring this space since the very earliest days of computers. Computational neuroscience has been able to take advantage of the exponential progress in the capabilities of computer technology, informed, of course, by progress in neuroscience and psychology. However, the scale of the problem is daunting even for today’s most advanced machines.

Scale is important. There are many examples of artificial neural systems (that may or may not bear some relationship to biological brains) that depend critically on scale. The key concepts are deeply rooted in the counter-intuitive geometric properties of high-dimensional spaces and, if these models are scaled down to accommodate the limitations of the computers they run on, their functionality will be compromised, if not totally lost – an example of such a model is Kanerva’s sparse distributed memory [1].

Thus we have seen a growth in interest in the design and construction of specialised – brain-inspired – computer systems built both to explore the benefits of deploying our partial knowledge of brain function and to push back the boundaries that constrain computational neuroscience models on conventional machines.
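To make the scale dependence concrete, the following is a minimal sketch (not Kanerva's original formulation, and the parameter values are illustrative assumptions) of a sparse distributed memory: random binary 'hard locations' each hold a counter vector, and a word is written into, and read back from, every location within a Hamming-distance radius of its address. Its noise tolerance relies on having many locations in a high-dimensional space, which is exactly what is lost if the model is scaled down.

```python
import numpy as np

class SparseDistributedMemory:
    """Sketch of a Kanerva-style sparse distributed memory.

    Hard-location addresses are random binary vectors; a word is
    written to (and read from) every hard location lying within a
    Hamming-distance `radius` of the query address. Writing adds +1
    to a counter for a 1 bit and -1 for a 0 bit; reading thresholds
    the summed counters of the active locations.
    """

    def __init__(self, n_locations=2000, dim=256, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=np.int32)
        self.radius = radius

    def _active(self, address):
        # Hamming distance from the query address to every hard location
        dist = np.count_nonzero(self.addresses != address, axis=1)
        return dist <= self.radius

    def write(self, address, word):
        act = self._active(address)
        self.counters[act] += np.where(word == 1, 1, -1)

    def read(self, address):
        act = self._active(address)
        sums = self.counters[act].sum(axis=0)
        return (sums > 0).astype(int)

# Usage: store a pattern autoassociatively, then recall it from a
# cue corrupted in 20 of its 256 bits.
rng = np.random.default_rng(1)
sdm = SparseDistributedMemory()
pattern = rng.integers(0, 2, 256)
sdm.write(pattern, pattern)

cue = pattern.copy()
flip = rng.choice(256, size=20, replace=False)
cue[flip] ^= 1                     # corrupt the cue
recalled = sdm.read(cue)           # typically recovers `pattern`
```

With 2000 locations in a 256-dimensional space, the active sets of the original address and the noisy cue overlap heavily, so recall usually succeeds; shrink `n_locations` or `dim` substantially and that overlap, and with it the memory's error correction, collapses.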
The key contributions of this paper are as follows:

† a discussion of the major challenges impeding progress in computer technology (Section 2);
† an introduction to the brain from a computer engineer’s perspective (Section 3);
† metrics for comparing computers with brains (Section 4);
† the major challenges in building brain-inspired machines (Section 5) and an overview of current large-scale projects building brain-inspired machines (Section 6);

IET Computers & Digital Techniques – Review Article
IET Comput. Digit. Tech., pp. 1–7
This is an open access article published by the IET under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/)