2172 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS, VOL. 58, NO. 9, SEPTEMBER 2011
Design and Modeling of a Neuro-Inspired Learning
Circuit Using Nanotube-Based Memory Devices
Si-Yu Liao, Jean-Marie Retrouvey, Guillaume Agnus, Weisheng Zhao, Member, IEEE, Cristell Maneux,
Sébastien Frégonèse, Thomas Zimmer, Senior Member, IEEE, Djaafar Chabi, Arianna Filoramo, Vincent Derycke,
Christian Gamrat, and Jacques-Olivier Klein, Member, IEEE
Abstract—We present an original method to implement
neuro-inspired supervised learning for a synaptic array based
on carbon nanotube devices. The device characteristics required
to implement on-chip learning within a crossbar of carbon nanotube field-effect transistors (CNTFETs) acting as synaptic arrays were
experimentally demonstrated and accurately modeled through a
specific electrical compact model. We performed electrical simulations of learning for an array of 24 nanotube memory devices corresponding to a 3-input/3-output neural layer, which revealed successful learning of separable logic functions within very few epochs, even when a realistic variability of the nanotube diameter was
taken into account. Such a learning approach opens the way to
the use of high-density synaptic arrays as generic logic blocks in
configurable circuits.
Index Terms—Carbon nanotube transistors, compact model,
neural network, on-chip learning.
I. INTRODUCTION
CURRENT progress in nanotechnology is paving the way toward low-cost and extremely high-density device arrays [1]. However, the down-scaling of individual devices and the increase in density come at the cost of higher defect rates. To overcome this issue, defect-tolerant system architectures are being
intensively investigated [2], [3]. In this context, neural networks
using nanoscale devices as synapses are promising candidates.
Numerous research groups working on memristive devices re-
cently claimed synaptic-like behavior and foresee the potential
use of such devices in neural applications [4]–[6]. Nevertheless,
Manuscript received May 3, 2010; revised September 29, 2010; accepted
January 10, 2011. Date of publication March 10, 2011; date of current version
September 14, 2011. This work was supported in part by the French National
Research Agency (ANR) under the project PANINI (Project ANR-07-ARFU-
008), and in part by the European project Nabab under Project FP7-216777.
This paper was recommended by Associate Editor B. Shi.
S. Frégonèse is with the Laboratory of the Integration from Material to
System, UMR 5218 Centre National de la Recherche Scientifique – Université
Bordeaux 1, 33405 Bordeaux, France.
S.-Y. Liao, C. Maneux, and T. Zimmer are with the Electrical Characterization
and Compact Modeling Team, Nanoelectronics Group, Laboratory of the In-
tegration from Material to System, Université Bordeaux 1, 33405 Bordeaux,
France.
J.-M. Retrouvey, W. Zhao, D. Chabi and J.-O. Klein are with IEF, Univ. Paris-
Sud, UMR 8622, Orsay, F-91405, France, and also with CNRS, Orsay F-91405,
France (e-mail: jacques-olivier.klein@u-psud.fr).
G. Agnus, A. Filoramo and V. Derycke are with the CEA-Saclay, IRAMIS,
SPEC (URA 6424), LEM, 91191 Gif-sur-Yvette, France.
C. Gamrat is with the CEA, LIST, Embedded Computing Laboratory, CEA-
Saclay 91191 Gif-sur-Yvette, France.
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCSI.2011.2112590
demonstrating only memristive behavior is far from sufficient
for building useful neural networks. In the emerging field of
neuro-inspired nanocircuits, the design and accurate modeling
of neuro-blocks based on state-of-the-art technologies is there-
fore a timely and pressing issue. In particular, a learning process
compatible with synaptic arrays and capable of implementing
functions is strongly desired. In addition, a very simple and
scalable synapse is required. Furthermore, the learning control
overhead has to be small and sharable to enable multiplexing,
otherwise any benefit gained in terms of density would be lost.
This approach is radically different from previous studies based
on carbon nanotubes (CNTs), e.g., [7], where each synapse requires at least eight CNTFETs. In this light, we focus here on the strong robustness of neural networks to variability in device characteristics, rather than on their potential for new applications (such as pattern recognition).
We consider an optically gated CNTFET (OG-CNTFET)
based technology for the realization of synapses. The properties
and operating modes of OG-CNTFETs have been described in
detail in earlier publications [8]–[10] and a crossbar topology
for the efficient addressing of such devices has recently been
proposed [11]. However, no strategy for implementing the
learning process has been developed until now. One of the main
goals of our work is to propose a method for on-chip learning
while keeping the synaptic connections as simple as possible
to achieve high densities. In this work, we first summarize
the key properties of OG-CNTFETs that are relevant for the
implementation of a learning process. Second, we describe a
new extension of our previously developed CNTFET compact
model [12], which now includes both the optical effect and the
programming mechanism. Third, we establish a learning rule
particularly well suited to the characteristics of our nanosy-
napses and propose a circuit topology for the implementation
of arrays of OG-CNTFETs with learning function capabilities.
As an example, we present electrical simulation results of the
learning step for a 3-input/3-output neural block composed
of 24 nanotube-based devices and we study the efficiency of
the learning process to mitigate the impact of device-to-device
variability.
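To make the scale of this learning task concrete, the following sketch trains a single-layer network with the same 3-input/3-output dimensions on three linearly separable logic functions. It uses the classical perceptron rule with hard-threshold neurons purely for illustration; the paper's actual learning rule, circuit topology, and OG-CNTFET programming mechanism (described in the following sections) are not modeled here, and the target functions (AND, OR, majority) are arbitrary examples.

```python
import numpy as np

# Illustrative single-layer network: 3 inputs, 3 outputs, a 3x3 weight
# matrix plus one bias (threshold) per output. This mirrors only the layer
# dimensions of the paper's 3-input/3-output block, not its device count
# or programming scheme.
rng = np.random.default_rng(0)

# All 8 binary input patterns of a 3-input block.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)])

# Three linearly separable target functions (chosen as examples).
T = np.column_stack([
    X.min(axis=1),              # AND of the three inputs
    X.max(axis=1),              # OR of the three inputs
    (X.sum(axis=1) >= 2) * 1,   # majority vote
])

W = rng.normal(scale=0.1, size=(3, 3))  # weights: inputs x outputs
b = np.zeros(3)                          # one bias per output
eta = 0.5                                # learning rate

def forward(X):
    # Hard-threshold (step-function) neurons.
    return (X @ W + b > 0).astype(int)

for epoch in range(1, 101):
    errors = 0
    for x, t in zip(X, T):
        y = (x @ W + b > 0).astype(int)
        e = t - y                        # per-output error in {-1, 0, +1}
        W += eta * np.outer(x, e)        # perceptron weight update
        b += eta * e
        errors += np.abs(e).sum()
    if errors == 0:                      # a full error-free pass: converged
        break

print(epoch)                 # converges within a handful of epochs
print((forward(X) == T).all())
```

Because the three target functions are linearly separable, the perceptron convergence theorem guarantees that this loop terminates, consistent with the few-epoch convergence reported above for the simulated nanotube array.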
II. OG-CNTFET TECHNOLOGY
A. Background
CNTFETs have very high charge sensitivity [13] even at room
temperature [14]. Using this sensitivity, nonvolatile memory devices based on charge trapping in the SiO₂ gate insulator have
been demonstrated [15], [16]. For the design of adaptive circuits,
1549-8328/$26.00 © 2011 IEEE