IOSR Journal of Mathematics (IOSR-JM)
e-ISSN: 2278-5728, p-ISSN: 2319-765X. Volume 9, Issue 5 (Jan. 2014), PP 30-35
www.iosrjournals.org

On Application Of Modified Gradient To Extended Conjugate Gradient Method Algorithm For Solving Optimal Control Problems

1K. J. ADEBAYO, 2F. M. ADERIBIGBE and 3M. F. AKINMUYISE
1&2 Department of Mathematical Sciences, Ekiti State University, Ado Ekiti, Nigeria.
3 Department of Mathematics, Adeyemi College of Education, Ondo, Ondo State, Nigeria.

Abstract: This paper investigates and discusses a modification of the gradient of the continuous control function introduced into the Extended Conjugate Gradient Method (ECGM) algorithm employed in solving optimal control problems in which the state variable is constrained by a differential equation. Numerical results show some improvement over the classical methods.

Keywords: Optimal Control, Control Operator, Conjugate Gradient Method, Extended Conjugate Gradient Method.

I. Introduction
One of the most commonly used techniques for solving systems of linear equations is Gaussian elimination. It is referred to as a "direct" method because it determines the solution in a fixed number of arithmetic operations that can be predicted in advance. "Iterative" methods, on the other hand, do not have fixed costs, since the solution is obtained from a sequence of approximate solutions and the algorithm is terminated when some measure of the error has been made adequately small. Nevertheless, iterative methods are valuable tools for solving large systems of linear equations, and they have several potential advantages over direct methods. First, since the coefficient matrix need not be factored, there is no fill-in and no loss of sparsity. Second, storage requirements are often lower for iterative methods than for direct methods; in some cases it may not be necessary to store the coefficient matrix at all. Third, if a good approximation to the coefficient matrix is available, and this approximation can be inverted at low cost, then an iterative method can take advantage of this information to obtain the solution more rapidly. This is not normally possible with a direct method.

A great many iterative methods have been invented, but we will consider only one of them: the Conjugate Gradient Method (CGM). (Many of the other iterative methods are applied primarily in the solution of differential equations.) The Conjugate Gradient Method is designed to solve

Ax = b,   (1.1)

in the case where the matrix A is symmetric and positive definite. It can be considered as a technique for solving the equivalent problem

Problem (P1): Minimize f(x) = \frac{1}{2} x^{T} A x - b^{T} x.   (1.2)

The Conjugate Gradient Method (CGM) is a variant of the gradient method. In its simplest form, the gradient method uses the iterative scheme

x_{i+1} = x_i + \alpha_i d_i, \qquad d_i = -\nabla f(x_i),   (1.3)

to generate a sequence \{x_i\} of vectors which converges to the minimum of f. The parameter \alpha_i appearing in (1.3) denotes the step length along the descent direction d_i. In particular, if F is a functional on a Hilbert space ℋ such that, for every x in ℋ, F admits the Taylor series expansion

F(x) = a + \langle x, b \rangle_{ℋ} + \frac{1}{2} \langle x, Ax \rangle_{ℋ},   (1.4)

where a \in \mathbb{R}, b \in ℋ and A is a positive definite, symmetric, linear operator, then it can be shown, as in [4], that F possesses a unique minimizer, say x^{*}, in ℋ and that \nabla F(x^{*}) = b + Ax^{*} = 0. The CGM algorithm for iteratively locating the minimum x^{*} of F in ℋ, as described by [4], is as follows:

Step 1: Guess the first element x_0 \in ℋ and compute the remaining members of the sequence \{x_i\} with the aid of the formulae in Steps 2 through 6.
Step 2: Compute the descent direction

p_0 = -g_0, \qquad g_0 = \nabla F(x_0) = b + Ax_0.   (1.5a)

Step 3: Set

x_{i+1} = x_i + \alpha_i p_i, \qquad \text{where } \alpha_i = \frac{\langle g_i, g_i \rangle_{ℋ}}{\langle p_i, Ap_i \rangle_{ℋ}}.   (1.5b)

Step 4: Compute

g_{i+1} = g_i + \alpha_i A p_i.   (1.5c)
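As a concrete illustration of Steps 1 through 4 above, the following Python sketch implements the classical CGM for the finite-dimensional problem (P1) of (1.1)-(1.2), where the gradient of f is g(x) = Ax - b. The direction-update and stopping rules belonging to Steps 5 and 6 are not reproduced in this excerpt; the sketch assumes the standard forms \beta_i = \langle g_{i+1}, g_{i+1} \rangle / \langle g_i, g_i \rangle and termination once \|g_i\| is sufficiently small. The function name cgm, the tolerance tol and the default iteration limit are illustrative choices, not part of the algorithm as stated in the paper.

```python
import numpy as np


def cgm(A, b, x0, tol=1e-10, max_iter=None):
    """Classical conjugate gradient method for minimizing
    f(x) = 0.5 x^T A x - b^T x (equivalently, solving A x = b),
    with A symmetric positive definite."""
    x = np.asarray(x0, dtype=float)
    if max_iter is None:
        max_iter = x.size          # at most n steps in exact arithmetic
    g = A @ x - b                  # gradient of f at the initial guess (Step 1)
    p = -g                         # Step 2: initial descent direction, cf. (1.5a)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:   # stopping test (assumed form of Step 6)
            break
        Ap = A @ p
        alpha = (g @ g) / (p @ Ap)     # step length, cf. (1.5b)
        x = x + alpha * p              # Step 3: update the iterate, cf. (1.5b)
        g_new = g + alpha * Ap         # Step 4: update the gradient, cf. (1.5c)
        beta = (g_new @ g_new) / (g @ g)   # assumed Step 5 (Fletcher-Reeves form)
        p = -g_new + beta * p              # next conjugate direction
        g = g_new
    return x


if __name__ == "__main__":
    # Small symmetric positive definite example system.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = cgm(A, b, x0=np.zeros(2))
    print(x)                      # approximately [0.0909, 0.6364]
    print(np.allclose(A @ x, b))  # True: the minimizer of (1.2) solves (1.1)
```

In exact arithmetic the iterates reach the minimizer of (1.2) in at most n steps for an n x n symmetric positive definite A, which is why the sketch defaults the iteration limit to the dimension of the system.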