A Modified Fuzzy ARTMAP Architecture for Incremental Learning Function Approximation

Răzvan Andonie
Department of Electronics and Computers, Transylvania University of Braşov, Romania
email: andonie@deltanet.ro

Lucian Sasu
Department of Computer Science, Transylvania University of Braşov, Romania
email: lmsasu@unitbv.ro

Valeriu Beiu
School of EE and CS, Washington State University, Pullman, USA
email: vbeiu@eecs.wsu.edu

ABSTRACT
We focus here on approximating functions that map from a vector-valued real domain to a vector-valued real range. A Fuzzy ARTMAP (FAM) architecture, called Fuzzy ARTMAP with Relevance factor (FAMR, defined in [1]), is considered here as an alternative approach to function approximation. FAMR assigns a relevance factor to each sample pair, proportional to the importance of that pair during the learning phase, and is a generalization of PROBART (a FAM architecture defined in [2]). Like other FAM-based systems, FAMR can be trained incrementally.

KEY WORDS
Fuzzy ARTMAP, function approximation, incremental learning

1 Introduction

A classical application of neural networks is the prediction of functions that are known only at a certain number of points. We focus here on approximating functions that map from a vector-valued real domain to a vector-valued real range. Problems of this type occur almost everywhere, e.g., in the prediction of economic or social data, in weather forecasting, in signal processing, in data mining, and so on. There are many curve-fitting methods, such as splines, that will do a wonderful job, and neural networks can also be added to the arsenal of robust data-fitters. Compared to splines, neural networks may look less competitive. However, in some applications neural network models have proved to be a very good alternative. One such application is incremental function approximation.
In the context of supervised training, incremental learning means learning each input-output sample pair without keeping it for subsequent processing. An incremental learning function approximator should be able to adapt to new information (i.e., the function value at one point) without corrupting or forgetting previously learned information, the so-called stability-plasticity dilemma addressed by Carpenter and Grossberg [3]. Incremental learning function approximators are very attractive in data mining applications with large data sets.

The FAM family of neural networks, which has its roots in the seminal paper of Carpenter et al. [4], is known to be one of the few models with incremental learning capability. FAM maps subsets of R^m to R^n, accepting both binary and analogue inputs in the form of pattern pairs. Carpenter et al. [4] tested the function approximation capability of FAM on a continuous sinusoidal function. The approximation is incrementally improved as each new data point is added, and the asymptotic accuracy of the approximation can be controlled. The FAM paradigm is prolific and there are many variations of the initial model: ART-EMAP [5], dARTMAP [6], Boosted ARTMAP [7], Fuzzy ARTVar [8], Gaussian ARTMAP [9], PROBART [2], PFAM [10, 11], μARTMAP [12]. Potentially, all can be used for function approximation. We refer here to the function approximation results obtained using the original FAM and PROBART.

FAMR is a FAM architecture defined in [1] as an incremental learning system for general classification and for estimating the probability that an input belongs to a given class. Each training pair has a relevance factor assigned to it, proportional to the importance of the respective pair in the learning process. Using a relevance factor adds more flexibility to the training phase, allowing sample pairs to be ranked according to the confidence we have in the information source.
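To make the notions of incremental learning and per-sample relevance concrete, the following Python sketch shows one way an incremental approximator could consume (input, output, relevance) triples one at a time, allocating a new category for novel inputs and refining existing output estimates with a relevance-weighted running mean. This is only an illustration under assumed, hypothetical names and update rules, not the FAMR algorithm itself (which is presented in Section 3).

```python
class IncrementalApproximator:
    """Toy incremental function approximator: stores category prototypes
    and updates the one nearest to each input. A hypothetical illustration
    of incremental, relevance-weighted learning, not the FAMR model."""

    def __init__(self, vigilance=0.5):
        # Each entry is (input_prototype, output_estimate, accumulated_relevance).
        self.prototypes = []
        self.vigilance = vigilance  # maximum match distance for reusing a category

    def learn(self, x, y, relevance=1.0):
        """Process one (x, y) sample with its relevance, then discard it."""
        best, best_d = None, float("inf")
        for i, (p, _, _) in enumerate(self.prototypes):
            d = sum((a - b) ** 2 for a, b in zip(p, x)) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is None or best_d > self.vigilance:
            # Novel input: allocate a new category (plasticity).
            self.prototypes.append((list(x), list(y), relevance))
        else:
            # Known region: refine the output estimate with a
            # relevance-weighted running mean (stability).
            p, q, r = self.prototypes[best]
            r_new = r + relevance
            q_new = [qi + (relevance / r_new) * (yi - qi)
                     for qi, yi in zip(q, y)]
            self.prototypes[best] = (p, q_new, r_new)

    def predict(self, x):
        """Return the output estimate of the nearest category, if any."""
        best, best_d = None, float("inf")
        for p, q, _ in self.prototypes:
            d = sum((a - b) ** 2 for a, b in zip(p, x)) ** 0.5
            if d < best_d:
                best, best_d = q, d
        return best
```

A sample with twice the relevance of another pulls the running mean twice as strongly, which is how ranking by source confidence can be expressed in the training phase.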
The training sequence may include sample pairs from sources with different levels of noise.

We aim to analyze here the FAMR capability for function approximation tasks and to compare our results with those obtained using the original FAM and PROBART. Section 2 reviews the necessary FAM and PROBART details. In Section 3, we present the FAMR model. Section 4 presents the experimental results, fol-