The Clustering Algorithm for Nonlinear System Identification

JOSÉ DE JESÚS RUBIO AVILA, ANDRÉS FERREYRA RAMÍREZ, CARLOS AVILÉS-CRUZ, IVAN VAZQUEZ-ALVAREZ
Departamento de Electrónica, Área de Instrumentación
Universidad Autónoma Metropolitana, Unidad Azcapotzalco
Av. San Pablo 180, Col. Reynosa Tamaulipas, Azcapotzalco, 02200 México D.F., MÉXICO
jrubio@correo.azc.uam.mx, fra@correo.azc.uam.mx, caviles@correo.azc.uam.mx, iva@correo.azc.uam.mx

Abstract: - A new on-line clustering fuzzy neural network is proposed. In this algorithm, structure learning and parameter learning are carried out at the same time; there is no separation between the two. The algorithm generates groups with a given radius, and each center is updated so that it stays close to the incoming data at every iteration. In this way the algorithm does not need to generate a new rule at each iteration, i.e., it does not generate many rules and it does not need to prune rules.

Key-Words: Clustering algorithm, Fuzzy systems, Modeling, Identification.

1 Introduction
Both neural networks and fuzzy logic are universal estimators: they can approximate any nonlinear function to any prescribed accuracy, provided that sufficient hidden neurons or fuzzy rules are available. Recent results show that the fusion of these two technologies is very effective for nonlinear system identification [2]. In the last few years, the application of fuzzy neural networks to nonlinear system identification has been a very active area [9], [10]. Fuzzy modeling involves structure identification and parameter identification. The latter is usually (and easily) addressed by some gradient-descent variant, e.g., the least-squares algorithm or backpropagation. Structure identification consists of selecting the fuzzy rules; it often relies on a substantial amount of heuristic observation to express proper expert knowledge, and it is often tackled by off-line, trial-and-error approaches such as the unbiasedness criterion [11].
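The radius-based rule generation just described can be sketched in a few lines. The sketch below is a minimal illustration under assumed names and an assumed update rate `eta`; it is not the authors' exact algorithm, only the general idea: an incoming sample either nudges its nearest center (no new rule) or, when it falls outside every group's radius, creates a new group.

```python
import numpy as np

def online_cluster(data, radius, eta=0.1):
    """Radius-based on-line clustering sketch (illustrative, not the
    paper's exact algorithm).

    For each incoming sample, the nearest center is found; if it lies
    within `radius`, that center is moved toward the sample, so no new
    rule is created. Otherwise a new group (rule) is born.
    """
    centers = []
    for x in data:
        x = np.asarray(x, dtype=float)
        if not centers:
            centers.append(x.copy())
            continue
        dists = [np.linalg.norm(x - c) for c in centers]
        k = int(np.argmin(dists))
        if dists[k] <= radius:
            centers[k] += eta * (x - centers[k])  # update center toward data
        else:
            centers.append(x.copy())              # new rule only when needed
    return centers
```

Run on two well-separated groups of samples, the sketch produces exactly two centers, no matter how many samples arrive, which is the behavior the abstract emphasizes.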
Several approaches generate fuzzy rules from numerical data. One of the most common methods for structure initialization is uniform partitioning of each input variable into fuzzy sets, resulting in a fuzzy grid. This approach is followed in ANFIS [5]. In [1] the TSK model was used for designing various neuro-fuzzy identifiers. This approach consists of two learning phases. Structure learning involves finding the main input variables among all the possible ones, specifying the membership functions, partitioning the input space, and determining the number of fuzzy rules. Parameter learning involves determining the unknown parameters and optimizing the already existing ones in the model, using some optimization method based on the linguistic information from the human expert and on the numerical data obtained from the actual system to be modeled. These two learning phases are interrelated, and neither of them can be carried out independently of the other. Traditionally, these phases are performed sequentially: the parameters are updated after the structure has been decided. This is suitable only for off-line operation.

Most structure identification methods are based on data clustering, such as fuzzy C-means clustering [14], [16], mountain clustering [10], and subtractive clustering [3]. These approaches require that all input-output data be available before the identification of the plant starts, so they are off-line. There are a few on-line methods in the literature. In [6] the input space is partitioned according to an aligned clustering-based algorithm; after the number of rules is decided, the parameters are tuned by a recursive least-squares algorithm. The resulting network is called SONFIN. In [7] a recurrent extension of this network, called RSONFIN, is proposed. In [15] the input space is automatically partitioned into fuzzy subsets by an adaptive resonance theory mechanism.
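The uniform grid partition mentioned above can be illustrated with triangular membership functions. This is a generic sketch of that style of initialization, similar in spirit to the grid used by ANFIS; the function names and the triangular shape are illustrative assumptions, not taken from any of the cited works.

```python
import numpy as np

def grid_partition(lo, hi, n_sets):
    """Uniformly partition [lo, hi] into n_sets fuzzy sets by placing
    their centers on a regular grid (a common structure initialization)."""
    return np.linspace(lo, hi, n_sets)

def tri_membership(x, centers):
    """Membership degree of x in each triangular fuzzy set.

    Each triangle peaks (degree 1) at its own center and falls to zero
    at the neighboring centers, so interior points always belong to
    exactly two overlapping sets.
    """
    width = centers[1] - centers[0]  # uniform grid spacing
    return np.clip(1.0 - np.abs(x - centers) / width, 0.0, 1.0)
```

With three sets on [0, 1], the point x = 0.25 belongs half to the first set and half to the second, which shows why a uniform grid fixes the rule count in advance: the partition is decided before any data are seen.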
Fuzzy rules that tend to give a high output error are split in two by a specific fuzzy-rule-splitting procedure. In [8] it is proposed that the radius used for the clustering be updated. In [19] each group is regarded as one rule, and each rule is trained with its own group data; a time-varying learning rate is given for the backpropagation algorithm in order to prove that the parameter learning error is stable.

WSEAS TRANSACTIONS on COMPUTERS, Jose De Jesus Rubio Avila, Andres Ferreyra Ramirez, Carlos Aviles-Cruz and Ivan Vazquez-Alvarez. ISSN: 1109-2750, 1179, Issue 8, Volume 7, August 2008
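A time-varying learning rate of the kind attributed to [19] can be sketched for a linear-in-parameters model. Dividing a base rate by 1 + ||x||^2 (a normalized-gradient step) is a standard way to keep the parameter-error dynamics stable; the exact rate in [19] may differ, so the code below is only an assumed, illustrative form.

```python
import numpy as np

def normalized_gradient_step(w, x, y, eta0=1.0):
    """One gradient step with a normalized, time-varying learning rate
    for the linear-in-parameters model y_hat = w . x (illustrative sketch).

    The rate eta = eta0 / (1 + ||x||^2) shrinks for large inputs, which
    guarantees the one-step identification error is contracted:
    e_new = e / (1 + ||x||^2) when eta0 = 1.
    """
    x = np.asarray(x, dtype=float)
    e = float(np.dot(w, x)) - y            # identification error
    eta = eta0 / (1.0 + np.dot(x, x))      # time-varying (normalized) rate
    return w - eta * e * x                 # gradient step on 0.5*e**2
```

Starting from w = 0 with sample x = (1, 1), y = 3, one step moves w to (1, 1) and shrinks the error from 3 to 1, i.e., by the predicted factor 1 + ||x||^2 = 3.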