Journal of Mathematics Research; Vol. 13, No. 2; April 2021
ISSN 1916-9795   E-ISSN 1916-9809
Published by Canadian Center of Science and Education

Superlinear Convergence of a Modified Newton's Method for Convex Optimization Problems With Constraints

Bouchta RHANIZAR

Correspondence: Department of Mathematics, École Normale Supérieure, University Mohammed V of Rabat, Morocco.

Received: January 14, 2021   Accepted: March 16, 2021   Online Published: March 24, 2021
doi:10.5539/jmr.v13n2p90   URL: https://doi.org/10.5539/jmr.v13n2p90

Abstract

We consider the constrained optimization problem defined by:

    f(x^*) = \min_{x \in X} f(x)    (1)

where the function f : \mathbb{R}^n \to \mathbb{R} is convex on a closed bounded convex set X. To solve problem (1), most methods transform it into an unconstrained problem, either by introducing Lagrange multipliers or by a projection method. The purpose of this paper is to give a new method for solving some constrained optimization problems, based on the definition of a descent direction and a step size while remaining in the convex domain X. A convergence theorem is proved. The paper ends with some numerical examples.

Keywords: nonlinear optimization, modified Newton's method

1. Introduction

In applied mathematics, as in many scientific fields, we are often led to solve nonlinear optimization problems with constraints. Several authors have studied the solution of nonlinear optimization problems with constraints, such as (Dennis & Schnabel, 1983; Ortega & Rheinboldt, 1970; Laurent, 1972; Culioli, 1994; Rhanizar, 2002; Rhanizar, 2020). Among the methods used to solve problem (1) by transforming it into an unconstrained problem, we can cite the projection methods defined by:

    x^{k+1} = P_X\big(x^k - \alpha_k \nabla f(x^k)\big), \quad \text{where } \|x - P_X(x)\| = \min_{y \in X} \|x - y\|.

This method is applicable only if one can easily compute the projection P_X, for example if X = \prod_{i=1}^{m} [a_i, b_i] is a block (box) of \mathbb{R}^n.
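As an illustration of the projection method on a box (a sketch, not the paper's own method: the objective, step size, and iteration count below are assumptions), projecting onto X = \prod_i [a_i, b_i] reduces to componentwise clipping, so the iteration x^{k+1} = P_X(x^k - \alpha_k \nabla f(x^k)) can be written as:

```python
import numpy as np

def project_box(x, a, b):
    """Projection P_X onto the box X = prod_i [a_i, b_i]: componentwise clipping."""
    return np.clip(x, a, b)

def projected_gradient(grad_f, x0, a, b, alpha=0.1, iters=500):
    """Iterate x^{k+1} = P_X(x^k - alpha * grad_f(x^k)) with a fixed step alpha."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_box(x - alpha * grad_f(x), a, b)
    return x

# Illustrative objective (an assumption): f(x) = ||x - c||^2 with c = (2, -1),
# minimized over the box [0, 1]^2.  The constrained minimizer is the projection
# of c onto the box, namely (1, 0).
c = np.array([2.0, -1.0])
grad = lambda x: 2.0 * (x - c)
x_star = projected_gradient(grad, x0=[0.5, 0.5], a=0.0, b=1.0)
```

With step 0.1 the unprojected map is a contraction (factor 0.8), so the iterates converge to the constrained minimizer (1, 0).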
But if X is defined by inequality constraints, this method is in general not easy to use. We also find the external penalization method, which introduces a function \psi : \mathbb{R}^n \to \mathbb{R} having the following properties:

    \psi is continuous and convex;
    \psi(x) \geq 0 \quad \forall x \in \mathbb{R}^n;
    \psi(x) = 0 \iff x \in X.

For \varepsilon > 0, the method considers a function f_\varepsilon : \mathbb{R}^n \to \mathbb{R} defined by:

    f_\varepsilon(x) = f(x) + \frac{1}{\varepsilon}\, \psi(x)

and consists in minimizing f_\varepsilon(x) on \mathbb{R}^n as \varepsilon tends to 0. This method is applicable if it is easy to build a function \psi with these properties.
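A minimal one-dimensional sketch of external penalization (an illustrative example, not from the paper: the objective, the set X, and the optimizer settings are assumptions): take f(x) = (x - 2)^2 and X = [0, 1], with \psi(x) = \max(0, x-1)^2 + \max(0, -x)^2, which is continuous, convex, nonnegative, and vanishes exactly on X. For this example the unconstrained minimizer of f_\varepsilon is (2\varepsilon + 1)/(\varepsilon + 1), which tends to the constrained solution x^* = 1 as \varepsilon \to 0.

```python
def grad_feps(x, eps):
    """Derivative of f_eps(x) = (x - 2)^2 + (1/eps) * psi(x),
    with psi(x) = max(0, x - 1)^2 + max(0, -x)^2 penalizing exit from X = [0, 1]."""
    return 2.0 * (x - 2.0) + (2.0 / eps) * (max(x - 1.0, 0.0) - max(-x, 0.0))

def minimize_penalized(eps, x0=0.0, iters=20000):
    """Gradient descent on f_eps; the step shrinks with eps since the
    penalty term stiffens the curvature (roughly 2 + 2/eps near X's boundary)."""
    lr = eps / 4.0
    x = x0
    for _ in range(iters):
        x = x - lr * grad_feps(x, eps)
    return x

# As eps decreases, the unconstrained minimizer approaches x* = 1.
x_eps = minimize_penalized(eps=1e-3)
```

For eps = 1e-3 the computed minimizer is about 1.001, within O(\varepsilon) of the constrained solution, which is the expected behavior of external penalties: the iterates approach X from outside.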