Journal of University of Duhok, Vol. 22, No. 1 (Pure and Eng. Sciences), Pp 45-51, 2019
https://doi.org/10.26682/sjuod.2019.22.1.7

MODIFIED CONJUGATE GRADIENT METHOD FOR TRAINING NEURAL NETWORKS BASED ON LOGISTIC MAPPING

ALAA LUQMAN IBRAHIM* and SALAH GAZI SHAREEF**
* Dept. of Mathematics, College of Science, University of Duhok, Duhok, Kurdistan Region-Iraq.
** Dept. of Mathematics, Faculty of Science, University of Zakho, Zakho, Kurdistan Region-Iraq.
(Received: October 17, 2018; Accepted for Publication: March 25, 2019)

ABSTRACT

In this paper, we suggest a modified conjugate gradient method for training neural networks which guarantees the descent and sufficient descent conditions. The global convergence of the proposed method is studied. Finally, the test results show that, in general, the modified method is more efficient than the standard conjugate gradient methods it is compared with.

KEYWORDS: artificial neural networks, conjugate gradient, global convergence, descent and sufficient descent conditions.

1. INTRODUCTION

Artificial neural networks (ANNs) are parallel computational models consisting of densely interconnected processing units, characterized by an inherent propensity to learn from experience and to discover new knowledge. Because of their excellent ability of self-learning and self-adapting, they have been successfully applied in many areas of artificial intelligence [2,6,7], and they are often found to be more effective and precise than other classification techniques [3]. Although several different architectures have been suggested, feed-forward neural networks (FNNs) are the most familiar and the most widely used in various kinds of applications.

Training a neural network (NN) can be formulated as a nonlinear unconstrained optimization problem. The training procedure is therefore carried out by minimizing the error function E, defined as the sum of squared differences between the actual output of the FNN, denoted by o_j, and the desired output, denoted by t_j, namely

    E(w) = \sum_{j=1}^{P} (o_j - t_j)^2,                                        (1.1)

where w is the vector of network weights and P is the number of patterns used in the training set [8].

One of the most important iterative methods for efficiently training neural networks in scientific and engineering computation is the conjugate gradient (CG) method, because of its simplicity and very low memory requirements [4,5,12,14,17]. The conjugate gradient method produces a sequence of weights {w_k} given by

    w_{k+1} = w_k + \eta_k d_k,                                                 (1.2)

where k is the iteration number, generally called the epoch, \eta_k is the learning rate, and d_k is the search direction, which is computed by

    d_0 = -g_0   and   d_{k+1} = -g_{k+1} + \beta_k d_k,                        (1.3)

where g_k denotes the gradient of E(w) at the point w_k and the scalar \beta_k is known as the CG coefficient. The classical formulas determine the parameter \beta_k as follows (with y_k = g_{k+1} - g_k):

    \beta_k^{PR} = \frac{g_{k+1}^T y_k}{g_k^T g_k},        Polak and Ribiere (PR)       (1.4)

    \beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k},        Hestenes and Stiefel (HS)    (1.5)

    \beta_k^{FR} = \frac{g_{k+1}^T g_{k+1}}{g_k^T g_k},    Fletcher and Reeves (FR)     (1.6)

    \beta_k^{CD} = \frac{g_{k+1}^T g_{k+1}}{-d_k^T g_k},   Conjugate Descent (CD)       (1.7)

    \beta_k^{DY} = \frac{g_{k+1}^T g_{k+1}}{d_k^T y_k},    Dai and Yuan (DY)            (1.8)

    \beta_k^{LS} = \frac{g_{k+1}^T y_k}{-d_k^T g_k},       Liu and Storey (LS)          (1.9)

For the above formulas see [9,18,19,20,21,22]. The global convergence of these conjugate gradient methods has been studied by many authors under various line search conditions.
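
As a minimal illustration of the update rules (1.2)-(1.3) and the classical coefficients (1.4)-(1.9), the following Python sketch implements the standard CG training iteration (not the modified method proposed in this paper). The routine names (beta_classical, cg_train, error_grad), the fixed learning rate, and the quadratic toy error used in place of the network error E(w) are illustrative assumptions only; the convergence theory cited above assumes a proper line search rather than a fixed step.

```python
import numpy as np

def beta_classical(rule, g_new, g_old, d_old):
    """Classical CG coefficients of eqs. (1.4)-(1.9); y_k = g_{k+1} - g_k."""
    y = g_new - g_old
    if rule == "PR":   # Polak-Ribiere, eq. (1.4)
        return g_new @ y / (g_old @ g_old)
    if rule == "HS":   # Hestenes-Stiefel, eq. (1.5)
        return g_new @ y / (d_old @ y)
    if rule == "FR":   # Fletcher-Reeves, eq. (1.6)
        return g_new @ g_new / (g_old @ g_old)
    if rule == "CD":   # Conjugate Descent, eq. (1.7)
        return g_new @ g_new / (-(d_old @ g_old))
    if rule == "DY":   # Dai-Yuan, eq. (1.8)
        return g_new @ g_new / (d_old @ y)
    if rule == "LS":   # Liu-Storey, eq. (1.9)
        return g_new @ y / (-(d_old @ g_old))
    raise ValueError(f"unknown rule: {rule}")

def cg_train(error_grad, w0, rule="FR", eta=0.05, epochs=500, tol=1e-6):
    """CG weight update: w_{k+1} = w_k + eta*d_k, d_{k+1} = -g_{k+1} + beta_k*d_k.
    A fixed learning rate eta replaces the line search assumed by the theory."""
    w = np.asarray(w0, dtype=float)
    g = error_grad(w)
    d = -g                          # d_0 = -g_0, eq. (1.3)
    for _ in range(epochs):
        if np.linalg.norm(g) < tol:
            break
        w = w + eta * d             # eq. (1.2)
        g_new = error_grad(w)
        beta = beta_classical(rule, g_new, g, d)
        d = -g_new + beta * d       # eq. (1.3)
        g = g_new
    return w

if __name__ == "__main__":
    # Toy quadratic error standing in for E(w); its minimizer solves A w = b,
    # i.e. w* = (1, 2). A real application would supply the FNN error gradient.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([5.0, 5.0])
    grad = lambda w: A @ w - b
    print(cg_train(grad, w0=np.zeros(2), rule="FR"))  # approximately [1. 2.]
```

The only difference between the six classical variants in this sketch is the choice of beta; swapping rule="FR" for "PR", "HS", "CD", "DY", or "LS" exercises the corresponding formula, while the surrounding iteration (1.2)-(1.3) stays the same.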