Treatment of skewed multi-dimensional training data to facilitate the task of engineering neural models

H. Altun 1, A. Bilgil 2 and B. C. Fidan 3

1 Corresponding Author. Department of Electrical & Electronics Engineering, Nigde University, Nigde, Turkey, e-mail: haltun@nigde.edu.tr, Tel: +90-388-2252281, Fax: +90-388-2250112
2 Civil Engineering Department, Nigde University, Nigde, Turkey, e-mail: abilgil@nigde.edu.tr
3 Faculty of Electrical & Electronics Engineering, Department of Electrical Engineering, Yıldız Technical University, 34349 Beşiktaş / İstanbul, Turkey, e-mail: fidan@yildiz.edu.tr

Abstract: Successful application of neural network models relies heavily on problem-dependent internal parameters. As the theory does not facilitate the choice of optimal parameters for neural models, they can be obtained only through a tedious trial-and-error process. The process requires performing multiple training simulations with various network parameters until the performance criteria of a neural model are met. It has been shown in the literature that neural models are not consistently good at prediction under highly skewed data. Consequently, the cost of engineering neural models rises in such circumstances, as appropriate internal parameters must be sought. The aim of this paper is to show that a recently proposed treatment of highly skewed data eases the task of practitioners in engineering neural network models to meet satisfactory performance criteria. As applications of neural models grow dramatically in diverse engineering domains, an understanding of the treatment has considerable practical value.

Keywords: multi-dimensional data treatment; skewness; artificial neural networks; multilayered perceptron; backpropagation; suspended sediment prediction

1. Introduction

A remarkable increase in the employment of Artificial Neural Networks (ANN) has been witnessed in science and engineering disciplines over the recent decade.
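To make the notion of skewed data concrete: skewness of a training variable can be quantified with the standard third-moment coefficient, and a logarithmic transform is one common way to compress a long right tail before training. This is a minimal illustrative sketch only; the treatment evaluated in this paper is not necessarily a log transform, and the sample values below are invented for illustration (positively skewed, as suspended sediment loads often are).

```python
import math

def skewness(xs):
    """Sample skewness: third central moment divided by std**3."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / (m2 ** 1.5)

def log_transform(xs):
    """log(1 + x) compresses the long right tail of positively skewed data."""
    return [math.log1p(x) for x in xs]

# A hypothetical right-skewed sample: most values small, a few very large.
data = [2, 3, 4, 5, 6, 8, 12, 20, 50, 200]
print(skewness(data))                 # strongly positive
print(skewness(log_transform(data)))  # noticeably reduced
```

The same idea extends dimension-wise to multi-dimensional training data: each skewed input or target variable is transformed separately before the network is trained.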
ANNs are seen as a highly promising modeling technology due to their ability to learn the behavior of the system under inspection from samples, and they have become a standard method for solving optimization, regression and estimation problems. Multi Layered Perceptron (MLP) neural networks have been widely used to model the complex interdependencies and phenomena that appear in engineering applications. At the practical level, the standard approach to using neural models requires a finite set of training examples and the setting of a certain number of network parameters which have to be fixed a priori. For a successful application of MLP neural networks, one should determine internal parameters, such as the initial weights and network structure, to meet the required performance criteria (Ghedira et al. 2004; Cigizoglu and Alp, 2006). In engineering neural network models this is one of the main problems, as an inadequate network would be unable to learn. The problem of finding a suitable architecture and the corresponding weights of the network, however, is a very complex task (Saxén and Pettersson, 2006; García-Pedrajas et al. 2005), and the main difficulty arises from the fact that the theory does not provide instruments for choosing optimal values for these parameters. This, in turn, results