ISSN 2277-3061 | October 31, 2014

Enhancing the Performance of the Backpropagation for Deep Neural Network

Ola Mohammed Surakhi (1), Walid A. Salameh (2)
(1) Department of Computer Science, Princess Summaya University for Science and Technology, Amman, Jordan, Ola.surakhi@gmail.com
(2) Department of Computer Science, Princess Summaya University for Science and Technology, Amman, Jordan, walid@psut.edu.jo

ABSTRACT

The standard Backpropagation Neural Network (BPNN) algorithm is widely used to solve many real-world problems. However, backpropagation suffers from several difficulties, such as slow convergence and convergence to local minima. Many modifications have been proposed to improve the performance of the algorithm, such as careful selection of the initial weights and biases, the learning rate, the momentum, the network topology, and the activation function. This paper presents a new variant of the Backpropagation algorithm in which the error signal function is modified, applied to deep neural networks with more than one hidden layer. Experiments were conducted to compare and evaluate the convergence behavior of these training algorithms on two training problems: XOR and Iris plant classification. The results show that the proposed algorithm improves the efficiency of the classical BP.

KEY WORDS: Neural Networks; Deep Neural Network; Backpropagation; Momentum; Learning Rate; Optical Backpropagation; Extended Optical BP; Performance Analysis

Council for Innovative Research, Peer Review Research Publishing System
Journal: INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY, Vol. 13, No. 12
www.ijctonline.com, editorijctonline@gmail.com
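As background for the abstract above, the following is a minimal sketch of the standard backpropagation algorithm applied to a network with two hidden layers, trained on the XOR problem (one of the paper's two benchmarks). The layer sizes (2-4-4-1), sigmoid activation, learning rate, and epoch count are illustrative assumptions, not the exact configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for an assumed 2-4-4-1 network (two hidden layers)
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 4)); b2 = np.zeros(4)
W3 = rng.normal(0, 1, (4, 1)); b3 = np.zeros(1)

lr = 0.5  # illustrative learning rate
losses = []
for epoch in range(5000):
    # Forward pass through both hidden layers
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)
    losses.append(np.mean((y - out) ** 2))

    # Backward pass: classical BP error signal
    # delta = (target - output) * f'(net) at the output layer,
    # then propagated back through each hidden layer
    d3 = (y - out) * out * (1 - out)
    d2 = (d3 @ W3.T) * h2 * (1 - h2)
    d1 = (d2 @ W2.T) * h1 * (1 - h1)

    # Delta-rule weight updates
    W3 += lr * h2.T @ d3; b3 += lr * d3.sum(axis=0)
    W2 += lr * h1.T @ d2; b2 += lr * d2.sum(axis=0)
    W1 += lr * X.T @ d1;  b1 += lr * d1.sum(axis=0)

print("initial loss:", losses[0], "final loss:", losses[-1])
```

The slow convergence noted in the abstract shows up here as the large number of epochs needed before the mean squared error drops; variants that reshape the error signal (such as Optical Backpropagation) aim to accelerate exactly this phase.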