Applied Soft Computing 18 (2014) 302–313
A type-2 neural fuzzy system learned through type-1 fuzzy rules and
its FPGA-based hardware implementation
Chia-Feng Juang∗, Wen-Sheng Jang
Department of Electrical Engineering, National Chung Hsing University, Taichung 402, Taiwan, ROC
∗ Corresponding author. Tel.: +886 4 22840688x806; fax: +886 4 22851410. E-mail address: cfjuang@dragon.nchu.edu.tw (C.-F. Juang).
Article info
Article history:
Received 10 April 2013
Received in revised form 2 October 2013
Accepted 7 January 2014
Available online 23 January 2014
Keywords:
Type-2 fuzzy systems
Fuzzy neural networks
Neural fuzzy systems
Fuzzy chips
Fuzzy hardware
Abstract
This paper first proposes a type-2 neural fuzzy system (NFS) learned through its type-1 counterpart
(T2NFS-T1) and then implements the built T2NFS-T1 in a field-programmable gate array (FPGA) chip. The
antecedent part of each fuzzy rule in the T2NFS-T1 uses interval type-2 fuzzy sets, while the consequent
part uses a Takagi-Sugeno-Kang (TSK) type with interval combination weights. The T2NFS-T1 uses a
simplified type-reduction operation to reduce system training time and hardware implementation cost.
Given a training data set, a TSK type-1 NFS is first learned through structure and parameter learning. The
built type-1 fuzzy logic system (FLS) is then extended to a type-2 FLS, where highly overlapped type-1
fuzzy sets are merged into interval type-2 fuzzy sets to reduce the total number of fuzzy sets. Finally,
the rule consequent and antecedent parameters in the T2NFS-T1 are tuned using a hybrid of the gradient
descent and rule-ordered recursive least squares (RLS) algorithms. Simulation results and comparisons
with various type-1 and type-2 FLSs verify the effectiveness and efficiency of the T2NFS-T1 for system
modeling and prediction problems. A new hardware circuit using both parallel-processing and pipeline
techniques is proposed to implement the learned T2NFS-T1 in an FPGA chip. The T2NFS-T1 chip reduces
the hardware implementation cost in comparison to other type-2 fuzzy chips.
© 2014 Elsevier B.V. All rights reserved.
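As a rough illustration of the simplified type reduction described in the abstract, the sketch below combines lower and upper rule firing strengths with interval TSK consequent weights through a direct weighted sum instead of the iterative Karnik–Mendel procedure. The function name, parameterization, and exact formula are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def weighted_sum_type_reduction(f_lower, f_upper, c, s):
    """Crisp output from interval firing strengths [f_lower_i, f_upper_i]
    and interval TSK consequents [c_i - s_i, c_i + s_i] via one weighted
    sum per bound. A sketch of the general idea only; the paper's exact
    operation (after [17]) may differ."""
    f_lower = np.asarray(f_lower, dtype=float)
    f_upper = np.asarray(f_upper, dtype=float)
    c = np.asarray(c, dtype=float)
    s = np.asarray(s, dtype=float)
    # Left/right output bounds in a single pass (no iterative search
    # for a switching point as in Karnik-Mendel).
    y_left = np.dot(f_lower, c - s) / f_lower.sum()
    y_right = np.dot(f_upper, c + s) / f_upper.sum()
    return 0.5 * (y_left + y_right)  # average the two bounds
```

With s = 0 and f_lower = f_upper this collapses to the ordinary type-1 TSK weighted average, which suggests why such a non-iterative operation is cheap to realize in hardware.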
1. Introduction
The neural-fuzzy approach to data-based modeling and predic-
tion has drawn much research attention in the last two decades.
While many neural fuzzy systems (NFSs) have been proposed, most
studies have used type-1 fuzzy logic systems (FLSs) [1–4]. In recent
years, studies on the theory and applications of interval type-2
FLSs have become a research focus. Interval type-2 FLSs are exten-
sions of type-1 FLSs, where the membership value of an interval
type-2 fuzzy set (FS) is an interval type-1 FS. Several advantages
of using interval type-2 FLSs over their type-1 counterparts have
been reported [5–7]. However, the footprint of uncertainty and
operations with interval values in an interval type-2 FLS also lead
to greater complexity in computing system outputs and assigning
proper system parameters.
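To make the interval-membership idea above concrete, the following sketch evaluates the lower and upper membership grades of an interval type-2 Gaussian fuzzy set with an uncertain mean, one common construction in the literature. The function and its parameterization are illustrative assumptions, not the paper's definitions:

```python
import math

def it2_gaussian_membership(x, m1, m2, sigma):
    """Lower/upper membership grades of an interval type-2 Gaussian
    fuzzy set with uncertain mean m in [m1, m2] (m1 <= m2) and fixed
    sigma. Illustrative sketch; parameterizations vary in the literature."""
    gauss = lambda m: math.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper bound of the footprint of uncertainty:
    # flat (grade 1) between the two means, Gaussian tails outside.
    if x < m1:
        upper = gauss(m1)
    elif x > m2:
        upper = gauss(m2)
    else:
        upper = 1.0
    # Lower bound: Gaussian centred at the mean farther from x.
    lower = gauss(m2) if x <= (m1 + m2) / 2 else gauss(m1)
    return lower, upper  # membership grade is the interval [lower, upper]
```

For any input, the grade is the interval [lower, upper] rather than a single number; the region between the two bounds is the footprint of uncertainty that makes computing outputs and assigning parameters more complex than in the type-1 case.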
To automate the design of interval type-2 FLSs, several inter-
val type-2 NFSs have been proposed with claimed superiority over
the type-1 NFSs used for comparison [7–16]. Parameter learning
of interval type-2 FLSs using a gradient descent algorithm was
proposed in [7]. The approach does not use structure learning to
determine the number of rules and FSs. In other words, the structure is fixed and must be assigned in advance. Several studies
on structure learning of interval type-2 FLSs have been proposed
[8–16]. These studies use clustering to generate type-2 fuzzy rules. The general approach takes the maximum of the centers of the interval firing strengths as the rule-generation criterion for each incoming datum [9–11,13–16]. Because structure learning is easier with type-1 fuzzy rules than with type-2 fuzzy rules,
type-1 NFSs with structure learning ability have been extensively
studied. This paper proposes a new method that builds a type-2 NFS
via extending a built type-1 NFS (T2NFS-T1). For previous inter-
val type-2 NFSs [8–15], the type-reduction outputs are computed
using an iterative procedure, such as the Karnik–Mendel procedure [5], which is computationally expensive. To address this problem, the T2NFS-T1 uses a simple weighted-sum operation [17] to simplify the type-reduction operation, which reduces both software
training time and hardware implementation cost. In learning, the
T2NFS-T1 can be used to convert well-trained type-1 NFSs learned
through different types of learning algorithms to type-2 NFSs with-
out regeneration of the type-2 rules from an empty set. This is
different from previous type-2 NFSs that generate type-2 fuzzy
rules from an empty set [9–16], which do not make good use of
the learning results in extensively studied type-1 NFSs. Given a
training data set, a TSK type-1 NFS is first learned through structure and parameter learning. The T2NFS-T1 is then initialized by extending the built type-1 FLS to its type-2 counterpart. The general approach to learning an interval type-2 FLS from a type-1
FLS is by extending all type-1 FSs to interval type-2 FSs. In this
http://dx.doi.org/10.1016/j.asoc.2014.01.006