Multi Library Wavelet Function and the Least Trimmed Square Method for Constructing an Optimized Beta Wavelet Neural Network

Maher Jbeli, Abdallah Almahirah, Abdesselem Dakhli
Community College, Ha'il University, Kingdom of Saudi Arabia
maher.jbeli@gmail.com, aalmahirah@yahoo.com, abdesselemdakhli@gmail.com

Abstract—In this study, we present an approach for building an optimized Beta Wavelet Neural Network (CBWNN). The approach uses a Multi Library Wavelet Function (MLWF) containing a finite number of wavelet functions, which are selected with a method called Least Trimmed Square (LTS). During the learning phase, this method chooses the wavelet candidates that feed the construction of an optimized and efficient wavelet network; the phase is reinforced by the wavelet functions held in the Multi Library Wavelet Function. Numerical experiments are presented to validate the proposed wavelet network, and the results show that the approach is efficient and precise. Our system consists of three steps: the first is the construction of the multi-wavelet library; the second is the construction of an optimized wavelet network using the multi-library and the LTS method; the third is the validation of our approach.

Keywords—Wavelet Neural Networks; LTS; Multi Library Wavelet; Beta wavelets.

I. INTRODUCTION

Wavelet Neural Networks (WNNs) are a class of networks that have been used with great success in a wide range of applications. The performance of a WNN depends on an appropriate solution of the WNN structure optimization problem. WNNs are used to solve several problems, for example classification, compression, recognition, and function approximation. The latter makes it possible to estimate the underlying relationship from a set of input-output data, which is the fundamental problem of various applications such as pattern classification, data extraction, signal reconstruction, and system identification [1, 2].

In the literature [3, 4], the feedforward neural network is applied to solve interpolation and function-fitting problems. However, several studies [5, 6] have shown that the approximation ability of a neural network depends on the kind of training algorithm used, such as the BP algorithm. Learning algorithms are often eventually turned into optimization problems, and training a neural network with the BP algorithm often makes the network unstable.

Recently, problems of approximation of univariate functions have been studied with constructive feedforward neural networks. The use of these networks for multivariate functions is limited by the convergence conditions and by the actual operation, which becomes relatively difficult [7, 8, 9, 10]. Miao [11] proved that the connection weights of RBF neural networks can be obtained through several learning algorithms; the weights therefore exhibit some instability. In 1991, Kreinovich et al. proved that given a neuron implementing a nonlinear function g(x) and neurons that compute arbitrary linear functions, then for every continuous function f(x1, ..., xm) of arbitrarily many variables and for an arbitrary ε > 0, one can build a network of g-neurons and linear neurons that approximates f to accuracy ε [12].
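To make this network structure concrete, the following minimal Python sketch computes the output of a single-hidden-layer wavelet network as a weighted sum of dilated and translated wavelets. It is illustrative only: a Mexican-hat mother wavelet stands in for the Beta wavelet used in this paper, and the function names and parameter values are hypothetical.

import numpy as np

def mexican_hat(u):
    # Mexican-hat mother wavelet (second derivative of a Gaussian),
    # used here only as a stand-in for the Beta wavelet.
    return (1.0 - u**2) * np.exp(-(u**2) / 2.0)

def wnn_output(x, weights, translations, dilations):
    # One hidden layer: y(x) = sum_i w_i * psi((x - t_i) / d_i).
    y = np.zeros_like(x, dtype=float)
    for w, t, d in zip(weights, translations, dilations):
        y += w * mexican_hat((x - t) / d)
    return y

# Illustrative evaluation on a 1-D grid with two wavelet neurons.
x = np.linspace(-5.0, 5.0, 200)
y = wnn_output(x, weights=[1.0, -0.5],
               translations=[-1.0, 2.0], dilations=[0.5, 1.5])

Training such a network amounts to adjusting the weights w_i together with the translation t_i and dilation d_i of each wavelet neuron, which is exactly the structure optimization problem discussed above.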
Other studies [14, 15, 16] have used constructive feedforward RBF neural networks to solve quasi-interpolation approximation problems. Mulero-Martínez [17] applied Gaussian RBF neural networks with uniformly spaced nodes, and the results showed a better approximation. Ferrari et al. [18, 19] addressed the problem of multi-scale approximation using hierarchical RBF neural networks. These methods, however, share the defects of the BP algorithm: they are either unstable or complicated and slow.

Wavelet network training is performed by several algorithms that compute the network parameters: the biases, the weights, and the parameters of the wavelet function (the translation and dilation parameters). There are numerous studies on training WNNs. Derivative-based learning methods, including gradient descent [19] and back propagation [20], are the methods most frequently used in previous work on WNN training.

In addition, derivative-free methods, such as evolutionary algorithms [21, 22, 23, 24], have also been used. The learning method proposed is based on the well-known back-propagation method for neural networks. Abiyev and Kaynak [25]
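As background on the selection criterion named in the abstract, LTS differs from ordinary least squares in that it minimizes the sum of the h smallest squared residuals rather than all of them, which makes the fit robust to outlying samples. The Python sketch below implements a simplified concentration-step variant of generic LTS regression; the function name lts_fit, the fixed iteration count, and the example data are illustrative assumptions, not the wavelet-selection procedure of this paper.

import numpy as np

def lts_fit(X, y, h, n_steps=20):
    # Start from an ordinary least-squares fit on all samples.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(n_steps):
        r2 = (y - X @ beta) ** 2        # squared residuals
        keep = np.argsort(r2)[:h]       # indices of the h smallest
        # Refit on the h best-fitting samples (concentration step).
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return beta

# Hypothetical example: y depends on the first column of X only,
# and the first 10% of samples are corrupted by gross outliers.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] + 0.01 * rng.normal(size=100)
y[:10] += 20.0
beta = lts_fit(X, y, h=80)

Because the trimmed objective ignores the largest residuals, the recovered coefficients stay close to the true ones despite the corrupted samples, which is the property that makes LTS attractive for ranking wavelet candidates drawn from a library.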