2744 IEEE TRANSACTIONS ON ELECTRON DEVICES, VOL. 57, NO. 10, OCTOBER 2010
An Artificial Neural Network at Device Level Using
Simplified Architecture and Thin-Film Transistors
Tomohiro Kasakawa, Hiroki Tabata, Ryo Onodera, Hiroki Kojima, Mutsumi Kimura, Member, IEEE,
Hiroyuki Hara, and Satoshi Inoue, Senior Member, IEEE
Abstract—We show a neural network at the device level that
uses a simplified architecture and thin-film transistors (TFTs).
First, we form a neuron unit from eight transistors and reduce
the synapse unit to only one transistor by employing characteristic
variations of the synapse transistors to adjust the connection
strength. Second, we compose a “local interconnective neural
network” that is optimal for integrated circuits, in which we
connect each neuron to four neighboring neurons through pairs of
synapses: a “cooperatory synapse” and an “oppository synapse.”
Third, we fabricate the neural network using thin-film technology,
which is expected to be widely used for giant microelectronics. Although
the device architecture is quite different from that of conventional
systems, the neural network is confirmed to learn logics such as OR and
XOR through unsupervised learning; XOR is not linearly separable and is
a standard logic used to test the performance of a neural network.
Using this simplified architecture and TFTs, a large-scale neural
network comparable with the human brain may be integrated.
Index Terms—Artificial neural network (ANN), characteristic
variation, neuron, synapse, thin-film transistors (TFTs), unsuper-
vised learning.
Manuscript received February 24, 2010; accepted June 21, 2010. Date of
publication August 12, 2010; date of current version September 22, 2010. This
work was supported in part by collaborative research with Seiko Epson, in part
by a research project of the Joint Research Center for Science and Technology
at Ryukoku University, in part by a grant from the High-Tech Research Center
Program for private universities from the Ministry of Education, Culture,
Sports, Science and Technology (MEXT), in part by a grant for research facility
equipment for private universities from MEXT, and in part by a grant
for special research facilities from the Faculty of Science and Technology
of Ryukoku University. The review of this paper was arranged by Editor
J. Kanicki.
T. Kasakawa was with the Department of Electronics and Informatics,
Ryukoku University, Otsu 520-2194, Japan. He is now with Seiko Epson
Corporation, Nagano 399-0293, Japan.
H. Tabata and R. Onodera were with the Department of Electronics and
Informatics, Ryukoku University, Otsu 520-2194, Japan. They are now with
Nara Institute of Science and Technology, Nara 630-0192, Japan.
H. Kojima was with the Department of Electronics and Informatics, Ryukoku
University, Otsu 520-2194, Japan.
M. Kimura is with the Department of Electronics and Informatics and the
Innovative Materials and Processing Research Center, Ryukoku University,
Otsu 520-2194, Japan (e-mail: mutsu@rins.ryukoku.ac.jp).
H. Hara and S. Inoue are with the Frontier Device Research Center,
Seiko Epson Corporation, Nagano 399-0293, Japan (e-mail: hara.hiroyuki@
exc.epson.co.jp; inoue.satoshi@exc.epson.co.jp).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TED.2010.2056991

I. INTRODUCTION

ARTIFICIAL neural networks (ANNs) are thinking systems
that imitate biological neural networks in the human
brain. These systems are promising novel concepts for information
processing and have many unique advantages, such as
self-learning, self-organization, parallel distributed computing,
and fault tolerance [1]–[4]. These functions are obtained by
connecting a large number of neurons with synapses to mimic
the human brain, which has more than 10¹¹ neurons. Supervised
learning or unsupervised learning is executed using the
plasticity of the connection strength of the synapses intentionally
or unintentionally. First, neural networks at the software
level were studied using high-level programming languages
or logical simulators to investigate fundamental theories and
ideal models. Although the theories and models were generally
formularized, real objects were required to verify them by the
practical operations. Therefore, neural networks at the circuit
level, which use complicated analog and digital circuits and
have often been called neural network large-scale integrations
(LSIs), were developed to realize actual applications [5]–[8].
However, these circuits were usually too intricate to integrate a
large number of neurons into a neural network. For example, in
these circuits, both a neuron unit and a synapse unit consisted
of tens of transistors to guarantee accurate functions. Moreover,
a neuron was connected to many other neurons to imitate
biological neuronal networks precisely, as each neuron in the
human brain has approximately 10⁴ synapses.
On the other hand, thin-film technology, which is expected
to be widely used for giant microelectronics, allows a wide
variety of advanced devices to be fabricated on large sub-
strates in a stacked structure at a low cost [9]. Therefore,
thin-film transistors (TFTs) are applied to not only flat-panel
displays such as liquid crystal displays [10], organic light-
emitting diode displays [11], and electronic paper [12], but
also photosensing devices such as ambient light sensors [13],
image scanners [14], and artificial retinas [15]. Moreover, they
are promising for general electronics, including some types
of information processing [16]. However, one profound dis-
advantage of TFTs is that characteristic degradations easily
occur.
Herein we propose a neural network at the device level
using a simplified architecture and TFTs. We fabricated the
neural network using a leading candidate in giant microelec-
tronics, poly-Si TFTs, in which a poly-Si film was deposited
and crystallized by excimer laser irradiation, and insulator
films and metal films were deposited on a large glass sub-
strate [17]. We used the characteristic degradations of TFTs,
which are usually regarded as shortcomings, to generate the
synaptic plasticity. In this paper, we describe this neural net-
work and the experimental results of unsupervised learning in
detail.
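The architecture summarized above (each neuron connected to its four
nearest neighbors through a pair of synapses, one cooperatory and one
oppository, with connection strengths spread by characteristic variation
and adapted through degradation) can be sketched as a behavioral
simulation. The grid size, threshold, and degradation rule used here (an
active synapse slowly loses conductance) are illustrative assumptions
for the sketch, not the fabricated TFT circuit.

```python
import random

random.seed(0)
N = 4  # 4x4 grid of neurons (illustrative size)

def neighbors(i, j):
    """Four nearest neighbors on a non-periodic grid."""
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < N and 0 <= nj < N:
            yield ni, nj

# One "cooperatory" (excitatory) and one "oppository" (inhibitory)
# synapse per directed neighbor pair; initial strengths are spread
# randomly to stand in for TFT characteristic variation.
w_coop = {(i, j, ni, nj): random.uniform(0.4, 0.6)
          for i in range(N) for j in range(N) for ni, nj in neighbors(i, j)}
w_opp = {k: random.uniform(0.4, 0.6) for k in w_coop}

state = {(i, j): random.randint(0, 1) for i in range(N) for j in range(N)}

def step(state, theta=0.0):
    """One synchronous update: a neuron fires when the net drive from
    its four neighbors exceeds the threshold theta."""
    new = {}
    for i in range(N):
        for j in range(N):
            u = sum((w_coop[i, j, ni, nj] - w_opp[i, j, ni, nj]) * state[ni, nj]
                    for ni, nj in neighbors(i, j))
            new[i, j] = 1 if u > theta else 0
    return new

def adapt(state, eta=0.01):
    """Plasticity via 'degradation': a synapse that carries current
    (both of its neurons firing) slowly loses conductance. This stress
    model is an assumption made for the sketch."""
    for (i, j, ni, nj), w in w_coop.items():
        if state[i, j] and state[ni, nj]:
            w_coop[i, j, ni, nj] = max(0.0, w - eta)

for _ in range(10):
    state = step(state)
    adapt(state)
```

Because every neuron touches only its four neighbors, the update cost
per step grows linearly with the number of neurons, which is the
property that makes this local interconnection attractive for
large-area integration.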
0018-9383/$26.00 © 2010 IEEE