International Journal of Computer Theory and Engineering, Vol. 1, No. 4, October 2009, 1793-8201

Abstract—In this paper, a high-order feed-forward neural network architecture with an optimum number of nodes is used for adaptive channel equalization. Replacing the summation at each node by multiplication results in a more powerful mapping because of its capability of processing higher-order information from the training data. The equalizer is tested on a Rayleigh fading channel with BPSK signals. Performance comparison with the recurrent radial basis function (RRBF) neural network shows that the proposed equalizer provides a compact architecture and satisfactory bit error rate performance at various signal-to-noise ratios for a Rayleigh fading channel.

Index Terms—channel equalization, BPSK signal, multiplicative neuron, Rayleigh channel.

I. INTRODUCTION

As higher-level modulation becomes more desirable to cope with the need for high-speed data transmission, nonlinear distortion becomes a major factor limiting the data-carrying capacity of digital communication systems. Thermal noise, impulse noise, crosstalk and the nature of the channel itself distort the transmitted data in amplitude and phase, causing temporal spreading and consequent overlap of individual pulses. The resulting intersymbol interference (ISI) introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI and thereby deliver the digital data to its destination with the smallest possible error. Equalizers, modelled as adaptive digital filters that shape the receiver's transfer function, are ubiquitous in today's signal processing applications for combating ISI in dispersive channels.
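The ISI mechanism described above can be illustrated with a minimal simulation: BPSK symbols passed through a dispersive channel spread into their neighbours, and additive noise further corrupts each sample. The channel taps and the 20 dB SNR below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-tap dispersive channel (illustrative only; the paper
# does not specify these taps).
h = np.array([0.3482, 0.8704, 0.3482])

# BPSK symbols: +1 / -1.
symbols = rng.choice([-1.0, 1.0], size=1000)

# Linear ISI: each received sample is a weighted mix of adjacent symbols.
received = np.convolve(symbols, h, mode="full")[: len(symbols)]

# Additive white Gaussian noise at an assumed SNR of 20 dB.
snr_db = 20.0
noise_power = np.mean(received**2) / 10 ** (snr_db / 10)
received += rng.normal(0.0, np.sqrt(noise_power), size=received.shape)
```

Because each sample of `received` depends on several transmitted symbols plus noise, a hard threshold at the receiver misclassifies symbols; this is the distortion the equalizer must undo.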
Adaptive filters achieve the desired spectral characteristics of a signal by altering the filter coefficients, and thereby the filter response, according to a recursive optimization algorithm. Adaptive coefficients are required because some parameters of the desired processing operation (for instance, the properties of some noise signal) are not known in advance [1]. When significant noise is added to the transmitted signal, linear decision boundaries are not optimal. The received signal at each sample instant may be considered a nonlinear function of the past values of the transmitted symbols. Further, since the nonlinear distortion varies with time and from place to place, the overall channel response effectively becomes a nonlinear dynamic mapping, and the problem is tackled using classification techniques. As shown in a wide range of engineering applications, neural networks (NNs) have been successfully used for modeling complex nonlinear systems and forecasting signals with relatively simple architectures [2]-[4]. A wide range of neural architectures is available for modeling the nonlinear phenomenon of channel equalization. Feed-forward networks such as the multilayer perceptron (MLP), which contains an input layer, an output layer and one or more hidden layers, possess nonlinear processing capabilities and the universal approximation property, and have been successfully implemented as channel equalizers [5]-[7]. Back propagation, a supervised learning algorithm, is used for training [8]. These neuron models process the neural inputs using the summing operation.

Manuscript received May 15, 2009. Kavita Burse is a research scholar in the Department of Electronics and Communication at Maulana Azad National Institute of Technology, Bhopal, India (phone: +919893141968; fax: +91-755-2734694). Dr. R. N. Yadav and Dr. S. C. Shrivastava are with the Department of Electronics and Communication, Maulana Azad National Institute of Technology, Bhopal, India.
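The recursive coefficient update described above can be sketched with a classical LMS transversal equalizer. This is a standard linear baseline for comparison, not the paper's neural equalizer; the channel taps, step size, and tap count below are illustrative assumptions.

```python
import numpy as np

def lms_equalizer(received, desired, n_taps=11, mu=0.01):
    """Classical LMS transversal equalizer (a standard linear baseline,
    not the paper's multiplicative-neuron equalizer).
    Recursive update: w <- w + mu * e(k) * x(k)."""
    w = np.zeros(n_taps)
    x = np.zeros(n_taps)               # tapped delay line of received samples
    sq_err = np.empty(len(received))
    for k, (r, d) in enumerate(zip(received, desired)):
        x = np.roll(x, 1)
        x[0] = r                       # shift the newest sample in
        y = w @ x                      # equalizer output
        e = d - y                      # error against the training symbol
        w += mu * e * x                # recursive coefficient update
        sq_err[k] = e * e
    return w, sq_err

# Hypothetical mildly dispersive channel, for illustration only.
rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=2000)             # BPSK training symbols
r = np.convolve(s, [1.0, 0.3], mode="full")[:2000]
w, sq_err = lms_equalizer(r, s)
```

During training the squared error decays as the taps converge toward the channel inverse; it is precisely when the channel is nonlinear or fading that this linear filter stalls and classification-style neural equalizers become attractive.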
Recently, higher-order networks have drawn great attention from researchers due to their superior performance in nonlinear input-output mapping, function approximation, and memory storage capacity. Examples include the product unit neural network (PUNN), the sigma-pi network (SPN), and the pi-sigma network (PSN). They allow neural networks to learn multiplicative interactions of arbitrary degree. Multiplication plays an important role in neural modeling of biological behavior and in computing and learning with artificial neural networks. The multiplicative neuron contains units which multiply their inputs instead of summing them, thus allowing the inputs to interact nonlinearly. Multiplicative node functions permit direct computation of polynomial inputs and approximate higher-order functions with fewer nodes. They may therefore offer better approximation capability and faster learning than the classical MLP (which incorporates additive neurons only) because of their capability of processing higher-order information from the training data [9]-[11]. The remainder of the paper is organized as follows: Section II describes the basic adaptive channel equalizer scheme, Section III derives the learning rule for the multiplicative neuron, Section IV provides the simulation results, and Section V concludes the paper.

II. ADAPTIVE CHANNEL EQUALIZATION

The block diagram of adaptive equalization in Fig. 1 is described as follows. The external time-dependent inputs consist of the sum of the desired signal d(k), the channel nonlinearity NL and the interfering noise v(k). The adaptive

Nonlinear Fading Channel Equalization of BPSK Signals Using Multiplicative Neuron Model
Kavita Burse, R. N. Yadav and S. C. Shrivastava
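The contrast between the summing neuron and the multiplicative neuron described in the introduction can be sketched as follows. The exact node function and its learning rule are derived in Section III; the product-of-weighted-inputs form used here is an illustrative assumption based on common single-multiplicative-neuron models.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def additive_neuron(x, w, b):
    """Classical summing unit: sigmoid(w . x + b)."""
    return sigmoid(np.dot(w, x) + b)

def multiplicative_neuron(x, w, b):
    """Illustrative multiplicative neuron: the summation is replaced by
    the product u = prod_i (w_i * x_i + b_i), so the inputs interact
    nonlinearly within a single node."""
    return sigmoid(np.prod(w * x + b))

# Example inputs (arbitrary values for illustration).
x = np.array([1.0, 2.0])
w = np.array([0.5, 0.5])
y_mul = multiplicative_neuron(x, w, np.array([0.1, 0.1]))
y_add = additive_neuron(x, w, 0.1)
```

Expanding the product for n inputs produces cross terms of every order up to x_1 x_2 ... x_n from only n weights and n biases, which is why a single multiplicative node can capture higher-order information that an additive node of the same size cannot.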