A Fast Neural Algorithm for Serial Code Detection in a Stream of Sequential Data

Hazem M. El-Bakry and Qiangfu Zhao

Abstract—In recent years, fast neural networks for object/face detection have been introduced based on cross correlation in the frequency domain between the input matrix and the hidden weights of neural networks. In our previous papers [3,4], fast neural networks for certain code detection were introduced. It was proved in [10] that, for fast neural networks to give the same correct results as conventional neural networks, both the weights of the neural networks and the input matrix must be symmetric. This condition made those fast neural networks slower than conventional neural networks. Another symmetric form for the input matrix was introduced in [1-9] to speed up the operation of these fast neural networks. Here, corrections for the cross correlation equations given in [13,15,16] to compensate for the symmetry condition are presented. After these corrections, it is proved mathematically that the number of computation steps required by fast neural networks is less than that needed by classical neural networks. Furthermore, there is no need to convert the input data into a symmetric form. Moreover, the new idea is applied to increase the speed of neural networks in the case of processing complex values. Simulation results after these corrections using MATLAB confirm the theoretical computations.

Keywords—Fast Code/Data Detection, Neural Networks, Cross Correlation, Real/Complex Values.

I. INTRODUCTION

Recently, neural networks have shown very good results for detecting a two-dimensional sub-image in a given image [11,12,14]. Some authors tried to speed up the detection process of neural networks [13,15,16]. They proposed a multilayer perceptron (MLP) algorithm for fast object/face detection based on cross correlation in the frequency domain between the input image and the hidden weights of neural networks.
Then, they established an equation for the speed-up ratio. It was proved in [1-12] that their equations contain many errors, which lead to an invalid speed-up ratio.

Manuscript received October 21, 2004. H. M. El-Bakry is an assistant lecturer with the Faculty of Computer Science and Information Systems, Mansoura University, Egypt. He is currently a PhD student at the University of Aizu, Aizu Wakamatsu City, Japan 965-8580 (phone: +81-242-37-2760, fax: +81-242-37-2743, e-mail: d8071106@u-aizu.ac.jp). Q. Zhao is a professor with the Information Systems Department, University of Aizu, Japan (e-mail: qf-zhao@u-aizu.ac.jp).

Here, another error in the definition of the cross correlation equation presented in [13,15,16] is identified. In [1-10], a symmetry condition was imposed on both the input matrix (image) and the weights of the neural networks to compensate for this error. This symmetry condition allowed those fast neural networks to give the same correct results as conventional neural networks for detecting a sub-matrix in a given large input matrix. In [3,4], the same principle was used for fast detection of a certain code/data in a given one-dimensional matrix (sequential data). This was done by converting the input matrices into symmetric forms. In this paper, corrections for the errors in the cross correlation equations introduced in [13,15,16] are presented. Theoretical and practical results after these corrections prove that our proposed fast neural algorithm is faster than the previous algorithms as well as classical neural networks. In section II, fast neural networks for code/data detection are described. The correct fast neural algorithm for detecting a certain code/data in given one-dimensional sequential data is presented in section III. This algorithm can be applied to communication applications. Here, it is also used to increase the speed of neural networks dealing with complex values.
The new fast neural networks with real/complex successive input values will be presented in section IV.

II. THEORY OF FAST NEURAL NETS BASED ON CROSS CORRELATION IN THE FREQUENCY DOMAIN FOR SEQUENTIAL DATA DETECTION

Finding a certain code/data in a one-dimensional input matrix is a searching problem. Each position in the input matrix is tested for the presence or absence of the required code/data. At each position, the sub-matrix is multiplied by a window of weights, which has the same size as the sub-matrix. The outputs of the neurons in the hidden layer are then multiplied by the weights of the output layer. When the final output is high, the sub-matrix under test contains the required code/data, and vice versa. Thus, we may conclude that this searching problem is a cross correlation between the matrix under test and the weights of the hidden neurons.

The convolution theorem in mathematical analysis states that the convolution of f with h is identical to the result of the following steps: let F and H be the Fourier transforms of f and h in the frequency domain. Multiply F and H in the frequency domain point by point and then transform this product back into the spatial domain via the inverse Fourier transform.

International Journal of Information Technology Volume 2 Number 2 71
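The equivalence described above can be illustrated with a short numerical sketch. This is not the authors' implementation; it is a minimal NumPy example, with an arbitrary random input sequence and a hypothetical weight window, showing that the frequency-domain cross correlation (multiplying the FFT of the input by the conjugate of the FFT of the zero-padded window) reproduces the direct sliding-window dot products computed at every position:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # one-dimensional input sequential data
w = rng.standard_normal(8)    # weight window of one hidden neuron

# Direct (spatial-domain) computation: dot product of the weight window
# with the sub-matrix at each position of the input matrix.
direct = np.array([x[i:i + len(w)] @ w
                   for i in range(len(x) - len(w) + 1)])

# Frequency-domain computation: by the cross-correlation theorem,
# corr(x, w) = IFFT( FFT(x) * conj(FFT(w)) ), with w zero-padded
# to the length of x.
n = len(x)
X = np.fft.fft(x)
W = np.fft.fft(w, n)
full = np.real(np.fft.ifft(X * np.conj(W)))

# The first len(x) - len(w) + 1 entries are free of circular
# wrap-around and correspond to the valid window positions.
fast = full[:len(x) - len(w) + 1]

print(np.allclose(direct, fast))  # True
```

For long inputs the FFT route replaces the per-position dot products with a handful of O(n log n) transforms, which is the source of the speed-up discussed in the following sections.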