Hybrid ANN reducing training time requirements and decision delay for equalization in presence of co-channel interference

Siba Prasada Panigrahi a,*, Santanu Kumar Nayak b, Sasmita Kumari Padhy b
a KEC, Electrical Engineering, Kausalyagang, Bhubaneswar, India
b Berhampur University, India

Received 24 March 2007; received in revised form 2 November 2007; accepted 4 December 2007. Available online 14 December 2007.

Abstract

The Bayesian equalizer is known to be the optimum equalizer. This paper proposes a Hybrid Artificial Neural Network (Hybrid ANN) and an algorithm that modify the Decision Feedback Equalizer (DFE) function of the Bayesian equalizer when equalizing in the presence of co-channel interference (CCI). The combination of an Artificial Neural Network and a Decision Feedback Equalizer (DFE) is termed a Neural-DFE (NDFE). The results show that the decision delay and the training time requirement are reduced significantly by use of the NDFE. This is a particular advantage in a mobile environment, where the CCI is varying in nature and the Bayesian equalizer requires a long training time.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Channel equalization; Co-channel interference; Hybrid ANN

1. Introduction

In this paper we propose a hybrid network consisting of two ANNs (ANN-I and ANN-II), which represent the symbols ‘+1’ and ‘−1’ of the digital communication system. Each ANN has one output node (for ‘+1’ and ‘−1’, respectively) and four input nodes as independent variables (Fig. 1): 1, p, q_i, and n. The output of ANN-I, q_i, is an input to ANN-II and vice versa. The p and n inputs are the discrete values of the space and time components, respectively. These discrete inputs form a 2 × M matrix, with M = max(P, N); simple ‘‘do (for)’’ loops generate the matrix elements on-line. This three-layered feed-forward network uses a mean-variance type connection [3] for the input-to-hidden layer, similar to an RBF neural network.
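The coupled two-ANN structure described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the exact functional form of the mean-variance input-to-hidden connection, the number of hidden neurons, the random initialization, and the fixed-point iteration coupling the two sub-networks are all assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class SubNet:
    """One three-layered feed-forward sub-network (ANN-I or ANN-II).

    Inputs per the paper: [1, p, q_other, n]; one output node.
    The input-to-hidden layer carries two weight sets per hidden
    neuron (a mean and a variance), the hidden activation is tanh,
    and the output activation is linear. The Gaussian-like distance
    used below is a hypothetical form of that connection.
    """

    def __init__(self, n_in=4, n_hidden=4):
        self.mean = rng.normal(size=(n_hidden, n_in))   # first weight set
        self.var = np.ones((n_hidden, n_in))            # second weight set
        self.w_out = 0.1 * rng.normal(size=n_hidden)    # hidden-to-output

    def forward(self, x):
        # RBF-like mean-variance connection: per-hidden-neuron distance
        d = np.sum((x - self.mean) ** 2 / self.var, axis=1)
        h = np.tanh(d)                 # hidden layer: tanh activation
        return float(h @ self.w_out)   # output node: linear activation

def hybrid_forward(p, n, n_iter=5):
    """Couple ANN-I and ANN-II: each network's output q feeds the other."""
    net_plus, net_minus = SubNet(), SubNet()   # for symbols +1 and -1
    q_plus = q_minus = 0.0
    for _ in range(n_iter):
        q_plus = net_plus.forward(np.array([1.0, p, q_minus, n]))
        q_minus = net_minus.forward(np.array([1.0, p, q_plus, n]))
    return q_plus, q_minus
```

The iteration simply alternates the two forward passes until the cross-fed q values settle; how the paper actually resolves the mutual dependence between ANN-I and ANN-II is not specified in this excerpt.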
The input to each neuron in a layer is the set of outputs of the neurons of the previous layer multiplied by a set of weighting factors (two sets for the hidden layer). A tanh activation function gives the output of a hidden neuron, and a linear activation function gives the output neuron. The ANN is trained with the back-propagation algorithm; the adaptation of the network is driven by the discrepancy between the network output and the desired output. The number of neurons in the hidden layer should be chosen so that over-training is avoided and good accuracy is obtained on test data; the minimum number of such neurons is [(number of input layer neurons) × (number of output layer neurons)] [1]. The combination of an Artificial Neural Network and the Decision Feedback Equalizer (DFE), termed the NDFE, reduces the storage and time requirements [1].

This paper is organized into five sections. Section 2 introduces the proposed model. Section 3 introduces the proposed algorithm. Section 4 provides simulation results and discussions. Section 5 provides concluding remarks.

* Corresponding author. Tel.: +91 6803206678. E-mail addresses: siba_panigrahy15@rediffmail.com (S.P. Panigrahi), sknayakbu@rediffmail.com (S.K. Nayak), chavisiba@rediffmail.com (S.K. Padhy).
Applied Soft Computing 8 (2008) 1536–1538
1568-4946/$ – see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.asoc.2007.12.001

2. The model

The error of the network model must be minimized for the ANN to be an acceptable solution. The DFE expression incorporates the initial condition, so it is essential to determine the error in the channel at the interior points. To obtain this, the neural network approximation for the channel is substituted on both sides of the formulated equation and their difference is taken; this difference is the total error. The cost function is the square of this error, and the weights are determined by minimizing the cost function with respect to each weight. Through this formulation the DFE algorithm is embedded into the neural network. The mean and variance connection
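The cost-function formulation of Section 2 can be sketched as follows. This is a minimal illustrative sketch under stated assumptions: the paper's formulated channel equation is not reproduced in this excerpt, so a stand-in right-hand side (`x[0] - 0.5 * x[1]`) is used; the network (`net`) is shrunk to a toy size; and a finite-difference gradient replaces the analytic back-propagation derivatives the paper uses. The names `net`, `residual`, `cost`, and `train` are hypothetical, not the authors'.

```python
import numpy as np

def net(w, x):
    """Tiny network: tanh hidden layer, linear output (as in the paper)."""
    w1, w2 = w[:4].reshape(2, 2), w[4:6]
    return np.tanh(w1 @ x) @ w2

def residual(w, x):
    # Network approximation substituted on both sides of the formulated
    # equation; the difference of the two sides is the total error.
    lhs = net(w, x)
    rhs = x[0] - 0.5 * x[1]   # stand-in: the real equation is not in this excerpt
    return lhs - rhs

def cost(w, xs):
    # The cost function is the square of the error, summed over the points.
    return sum(residual(w, x) ** 2 for x in xs)

def train(xs, steps=200, lr=0.05, eps=1e-6):
    """Minimize the cost with respect to each weight (finite differences)."""
    w = np.random.default_rng(1).normal(scale=0.5, size=6)
    for _ in range(steps):
        g = np.zeros_like(w)
        for i in range(w.size):
            wp = w.copy()
            wp[i] += eps
            g[i] = (cost(wp, xs) - cost(w, xs)) / eps
        w -= lr * g   # gradient-descent weight update
    return w
```

In this formulation the equalizer equation is never solved directly: it is satisfied implicitly by driving the squared residual of the network approximation toward zero, which is how the DFE algorithm becomes embedded in the network weights.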