Neural Comput & Applic (1994) 2: 69-88
© 1994 Springer-Verlag London Limited

Neural Computing & Applications

An Adiabatic Neural Network for RBF Approximation

B. Truyen, N. Langloh and J. Cornelis
IRIS Research Group, Department of Electronics and Signal Processing (ETRO), Vrije Universiteit Brussel, Brussels, Belgium

Abstract: Numerous studies have addressed nonlinear functional approximation by multilayer perceptrons (MLPs) and RBF networks as a special case of the more general mapping problem. The performance of both these supervised network models depends intimately on the efficiency of their learning process. This paper presents an unsupervised recurrent neural network, based on the recurrent Mean Field Theory (MFT) network model, that finds a least-squares approximation to an arbitrary L2 function, given a set of Gaussian radially symmetric basis functions (RBFs). Essential is the reformulation of RBF approximation as a problem of constrained optimisation. A new concept of adiabatic network organisation is introduced. Together with an adaptive mechanism of temperature control, this allows the network to build a hierarchical multiresolution approximation while preserving the global optimisation characteristics. A revised problem mapping results in a position-invariant local interconnectivity pattern, which makes the network attractive for electronic implementation. The dynamics and performance of the network are illustrated by numerical simulation.

Keywords: Adiabatic organisation; Multiresolution functional approximation; Nonorthogonal decomposition; RBF approximation; Recurrent neural networks

Original manuscript received 10 January 1994

Correspondence and offprint requests to: B. Truyen, IRIS Research Group, Department of Electronics and Signal Processing (ETRO), Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels, Belgium.

1. Introduction

Approximation of nonlinear functions by a superposition of Radial Basis Functions (RBFs) is a well-known technique for multidimensional interpolation [1-3]. The objective is to find a linear combination of RBFs that approximates a nonlinear differentiable function in the least-squares sense. RBF expansion allows approximations that are C^k smooth for any desired degree k, provided appropriate nonlinear basis functions are selected. Moreover, this technique yields full control of the local smoothness and approximation accuracy, which allows a favourable trade-off between approximation accuracy and compactness of representation.

Since the early days of neural network research, nonlinear functional approximation has been an application of major interest. In the past, emphasis was on MLPs (Multi-Layer Perceptrons), where nonlinear function approximation was regarded as a particular instance of the neural network mapping problem [4]. Much of the supporting theory is related to the Kolmogorov theorem [5] and its refinement by Sprecher [6]. Unfortunately, the proof of this theorem is not constructive: it specifies neither the number nor the shape of the individual basis functions, and thus fails to guide the determination of the parameters. The lack of adequate design rules for choosing the network size, together with the absence of schemes for adapting the network size dynamically, is one of the most important limitations of MLPs.
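As a concrete point of reference for the least-squares RBF formulation recalled above, the following sketch fits a fixed set of Gaussian basis functions to samples of a target function with an ordinary linear solver. It is not the recurrent network developed in this paper; the target function, the grid of centres and the common width are arbitrary assumptions made purely for illustration (Python/NumPy).

  import numpy as np

  def gaussian_rbf(x, centre, width):
      # Radially symmetric Gaussian basis function
      return np.exp(-((x - centre) ** 2) / (2.0 * width ** 2))

  # Samples of an arbitrary smooth target f on [0, 1]
  x = np.linspace(0.0, 1.0, 200)
  f = np.sin(2.0 * np.pi * x) * np.exp(-x)

  # Fixed grid of Gaussian RBFs: centres and common width chosen a priori
  centres = np.linspace(0.0, 1.0, 12)
  width = 0.08
  Phi = np.stack([gaussian_rbf(x, c, width) for c in centres], axis=1)

  # Least-squares weights: minimise || Phi w - f ||^2 over w
  w, *_ = np.linalg.lstsq(Phi, f, rcond=None)
  approximation = Phi @ w

  print("RMS error:", np.sqrt(np.mean((approximation - f) ** 2)))

In this classical setting the basis functions are fixed in advance and only the linear weights are optimised; the network presented in this paper addresses the same least-squares objective by means of a constrained optimisation carried out by recurrent dynamics.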
Hecht-Nielsen's [4] existence theorem demonstrates that MLPs with a single hidden layer, composed of a finite number of units with sigmoidal activation functions, are complete in the space of continuous functions with support in the unit cube. The issue of exact realisability for the case of MLP