Opinion

Artificial nervous systems—A new paradigm for artificial intelligence

Fredric Narcross1,*
1Ben-Gurion University of the Negev, PO Box 653, Be'er Sheva 84105, Israel
*Correspondence: narcross@post.bgu.ac.il
https://doi.org/10.1016/j.patter.2021.100265

Three dissimilar methodologies in the field of artificial intelligence (AI) appear to be following a common path toward biological authenticity. This trend could be expedited by using a common tool, artificial nervous systems (ANS), for recreating the biology underpinning all three. ANS would then represent a new paradigm for AI, with application to many related fields.

In 1955, when John McCarthy organized the historic Dartmouth Summer Research Project, he coined the term "artificial intelligence" (AI) as a methodology-neutral phrase because the hoped-for attendees supported diverse methodologies, each with ardent adherents.[1] Three of today's AI methods still stand out for their diversity and their adherents, yet all three increasingly incorporate biological inspiration to improve performance. The improvements drive further inclusion of biology in a positive reinforcement loop that is gradually bringing the diversity into a common biological framework. To be clear, the terms "biology" and "biological" refer to the animal kingdom's nervous systems, studied through neuroanatomy and neurophysiology; here, the terms do not refer to the study of plants, fungi, or sea sponges.

One of the three methodologies, machine learning (ML), is supported by varieties of networks: neural networks (NN), artificial neural networks (ANN), recurrent neural networks (RNN), convolutional neural networks (CNN), generative adversarial networks (GAN), deep neural networks (DNN), and more. All of these have added to the success of ML and its progeny, deep learning (DL).
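All of these network varieties share the same basic biologically inspired unit: a "neuron" that sums weighted inputs, with the weights standing in for synaptic plasticity. A minimal sketch of that unit and a perceptron-style weight update follows; the function names and values are illustrative, not taken from any particular framework.

```python
# A minimal sketch of the "neuron as node, synapse as weight" analogy:
# each input connection carries a weight (the node's stand-in for
# synaptic plasticity), and learning adjusts those weights.

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs followed by a simple threshold activation."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if activation > 0 else 0.0

def update_weights(inputs, weights, bias, target, lr=0.1):
    """One perceptron-style learning step: nudge each weight to reduce
    the error, loosely analogous to strengthening or weakening a synapse."""
    error = target - artificial_neuron(inputs, weights, bias)
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * error
    return new_weights, new_bias
```

Deep networks stack layers of such units, and backpropagation generalizes the weight-update step across those layers, which is the sense in which the "forward and backward" connections discussed below echo biology.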
These networks, especially CNN,[2] have also incorporated biological features, beginning with the concept of neurons (the networks' nodes) and their synaptic plasticity (the nodes' weights), and extending to the connections between these "neurons" (both forward and backward) and the layers into which they are organized. According to IEEE Access, "However, despite the recent progress in DL methodologies and their success in various fields, such as computer vision, speech technologies, natural language processing, medicine, and the like, it is obvious that current models are still unable to compete with biological intelligence. It is, therefore, natural to believe that the state of the art in this area can be further improved if bio-inspired concepts are integrated into deep learning models."[3]

A second methodology comes from Jeff Hawkins and his company, Numenta, who have been researching neuroscience and building computer models and algorithms to represent brain functions since 2004. Jeff says, "The key to AI has always been the representation," and over the last seventeen years he has continually expanded his representation, beginning with modeling individual neurons and progressing to modeling collections of cortical columns. Jeff's approach is different and considerably more biological than any NN, which places his technology in a unique AI category. Additionally, Numenta's Hierarchical Temporal Memory (HTM) technology is one of the only methodologies to represent nervous system temporal connectivity, a key neurophysiological feature often overlooked in other AI technologies. His trend is dedicated to biological realism and improvement therein.

The third methodology, neuromorphic computing, is again a significantly different AI approach, with promises of dramatically reducing the cost of intelligence processing; this is important considering that the cost of training OpenAI's GPT-3 deep learning network was over US$12 million.
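Neuromorphic hardware typically implements event-driven spiking neurons rather than the continuous-valued units of an NN. A minimal leaky integrate-and-fire neuron, sketched below with illustrative parameter values (not drawn from any specific chip), shows the kind of self-contained, stateful unit such chips replicate in massive parallel:

```python
# A minimal leaky integrate-and-fire (LIF) neuron, the kind of unit many
# neuromorphic chips implement directly in hardware. Each neuron keeps
# its own membrane state and fires only when that state crosses a
# threshold; in hardware, millions of such units run concurrently and
# only consume power when spikes occur.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # membrane potential (state)
        self.threshold = threshold  # firing threshold
        self.leak = leak            # per-step decay of the potential

    def step(self, input_current):
        """Advance one time step; return True if the neuron spikes."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False
```

On a CPU this loop runs serially, one neuron-step at a time; the neuromorphic argument, discussed next, is that dedicating a small circuit to each neuron removes that serial bottleneck entirely.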
Despite the use of parallel graphics processing units (GPUs) for NNs, NN "neurons" run on traditional computing systems, which process in a serial fashion, one line of code at a time; neuromorphic chips provide a hardware substrate that supports massive computing parallelism among artificial neurons, where every neuron operates under its own code, independently, all the time. The savings and efficiency of parallelism are significant, and with thousands of companies investing in AI for big data analytics, customer service bots (natural language processing), and a myriad of other applications, the competition to provide the best service at the least cost pushes major companies in the hardware (neuromorphic) research direction. This includes companies like Microsoft and IBM, as well as neuromorphic chip variations such as the Tensor Processing Unit (TPU) from Google and Loihi from Intel. Though neuromorphic computing hasn't gained the celebrity status of state-of-the-art ML, the allure of power savings through neuromorphic chips, and their inclusion as components in server farms, keeps research in neuromorphic processing moving forward.

Additionally, the neuromorphic development trend supports increasing levels of biological realism. Over the last 30+ years, neuromorphic chips have progressed from modeling several handfuls of "neurons" to modeling hundreds of thousands of artificial spiking neurons and astrocytes.[4] It is noteworthy that the inclusion of astrocytes in any AI system is a significant step in the direction of biological representation. Astrocytes subserve critical nervous system functions, outnumber neurons by a factor of 4:1, are presented as the gatekeepers of synaptic information transfer,[5] and are an indispensable partner of synaptic plasticity (memory).[6]

What has been shown thus far is that three different AI approaches all use nervous

OPEN ACCESS. Patterns 2, June 11, 2021 © 2021 The Author(s).
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Please cite this article in press as: Narcross, Artificial nervous systems—A new paradigm for artificial intelligence, Patterns (2021), https://doi.org/10.1016/j.patter.2021.100265