Research Article
FPGA-Based Stochastic Echo State Networks for
Time-Series Forecasting
Miquel L. Alomar, Vincent Canals, Nicolas Perez-Mora,
Víctor Martínez-Moll, and Josep L. Rosselló
Physics Department, University of the Balearic Islands, 07122 Palma de Mallorca, Spain
Correspondence should be addressed to Josep L. Rosselló; j.rossello@uib.es
Received 2 August 2015; Revised 8 October 2015; Accepted 15 October 2015
Academic Editor: Mikhail A. Lebedev
Copyright © 2016 Miquel L. Alomar et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems.
Nevertheless, such implementations require a large amount of resources in terms of area and power dissipation. Recently, Reservoir
Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities.
In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of
probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the
development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
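The stochastic-computing principle alluded to above can be illustrated with a brief sketch: a value p in [0, 1] is encoded as the probability that any bit in a random stream is 1, so a single AND gate multiplies two independent streams, replacing a full digital multiplier. The function names, stream length, and test values below are illustrative assumptions, not taken from this paper.

```python
import random

def stochastic_stream(p, n, rng):
    """Bernoulli bitstream of length n encoding the value p in [0, 1]."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(pa, pb, n=100_000, seed=1):
    """Estimate pa * pb by ANDing two independent stochastic bitstreams."""
    rng = random.Random(seed)
    a = stochastic_stream(pa, n, rng)
    b = stochastic_stream(pb, n, rng)
    # One AND gate per bit pair; the fraction of 1s estimates the product.
    return sum(x & y for x, y in zip(a, b)) / n

est = stochastic_multiply(0.5, 0.8)
print(est)  # approximately 0.5 * 0.8 = 0.4, up to sampling noise
```

The precision of the result grows only with the stream length, which is the trade-off that makes such circuits so compact in hardware.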
1. Introduction
Introduction to Reservoir Computing. Recurrent neural networks (RNNs) [1] are a class of artificial neural networks (ANNs) characterized by the existence of closed loops. RNNs are inspired by the way the brain processes information, generating dynamic patterns of neuronal activity excited by input sensory signals [2]. Reservoir Computing (RC) [3–10] is a recently introduced, efficient technique for implementing and configuring RNNs. It is well suited for applications that require processing time-dependent signals, such as temporal pattern classification and time-series prediction [5]. In an RC system, all the interconnection weights of the RNN are kept fixed and only an output layer is configurable, as illustrated in Figure 1. In recent years, RNNs have been extensively used to successfully solve computationally hard problems [11–15]. Nevertheless, the complex training procedure of RNNs is very time-consuming. In contrast, RC offers an easy training procedure, which can be performed, in practice, via a simple linear regression.
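The linear-regression training step mentioned above can be sketched in a few lines. Ridge regression (a regularized linear least-squares fit) is one common concrete choice for computing the readout weights from collected reservoir states; the function name, regularization constant, and matrix sizes here are illustrative assumptions, not details from this paper.

```python
import numpy as np

def train_readout(states, targets, ridge=1e-6):
    """Solve W_out = Y X^T (X X^T + ridge*I)^-1 in the least-squares sense.

    states  : (N, T) matrix X, one reservoir state x(t) per column
    targets : (L, T) matrix Y, one desired output y(t) per column
    """
    N = states.shape[0]
    gram = states @ states.T + ridge * np.eye(N)
    # Solve the normal equations instead of forming an explicit inverse.
    return np.linalg.solve(gram, (targets @ states.T).T).T

# Synthetic check: targets generated by a known linear readout are recovered.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 500))      # 20 neurons, 500 time steps
true_W = rng.standard_normal((1, 20))
Y = true_W @ X
W_out = train_readout(X, Y)
print(np.allclose(W_out, true_W, atol=1e-4))  # True
```

Because only this readout is fitted, training cost is independent of the reservoir's recurrent dynamics, which is the practical advantage the text highlights.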
The RC architecture is composed of three parts: an input layer, the reservoir, and an output layer (see Figure 1). The input layer feeds the input signals u(t) = (u_1(t), ..., u_K(t)) to the reservoir with fixed random weight connections W_in. The reservoir consists of a relatively large number N of randomly interconnected neurons with states x(t) = (x_1(t), ..., x_N(t)) and internal weights W. Under the influence of input signals, the network exhibits transient responses which are read out at the output layer y(t) = (y_1(t), ..., y_L(t)) by means of a linear weighted sum of the individual node states. As the only part of the system which is trained (assessment of the output weights W_out) is the output layer, the training does not affect the dynamics of the reservoir itself unless a recurrence exists between the reservoir and the readout (recurrence weights given by W_back).
The general expression to estimate the neuron states is given by

x(t + 1) = f(W_in u(t + 1) + W x(t) + W_back y(t)),  (1)

where f = (f_1, ..., f_N) are the neuron transfer functions
(typically sigmoidal). In the simplest case of avoiding direct
connections between the input and the output layer as well as
Hindawi Publishing Corporation, Computational Intelligence and Neuroscience, Volume 2016, Article ID 3917892, 14 pages. http://dx.doi.org/10.1155/2016/3917892