New quasi-Newton methods for unconstrained optimization problems

Zengxin Wei a,1, Guoyin Li a,*, Liqun Qi b,2

a Department of Mathematics and Information Science, Guangxi University, Nanning, Guangxi, PR China
b Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong

Abstract

Many methods for solving minimization problems are variants of Newton's method, which requires the specification of the Hessian matrix of second derivatives. Quasi-Newton methods are intended for situations where the Hessian is expensive or difficult to calculate. They use only first derivatives to build an approximate Hessian over a number of iterations, and this approximation is updated at each iteration by a matrix of low rank. In unconstrained minimization, the original quasi-Newton equation is $B_{k+1} s_k = y_k$, where $y_k$ is the difference of the gradients at the last two iterates. In this paper, we first propose a new quasi-Newton equation $B_{k+1} s_k = y_k^*$, in which $y_k^*$ is given by the sum of $y_k$ and $A_k s_k$, where $A_k$ is some matrix. We then give two choices of $A_k$ which carry some second-order information from the Hessian of the objective function.

* Corresponding author.
E-mail addresses: zxwei@gxu.edu.cn (Z. Wei), gyli@math.cuhk.edu.hk (G. Li), maqilq@polyu.edu.hk (L. Qi).
1 The work of this author was done during his visit to the Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong. His work is supported by the Croucher Foundation of Hong Kong, Chinese NSF grant 10161002 and Guangxi NSF grant 9811020.
2 The work of this author is supported by the Research Grants Council of Hong Kong.

Applied Mathematics and Computation 175 (2006) 1156–1188
doi:10.1016/j.amc.2005.08.027
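To make the equation stated in the abstract concrete, the following display is a sketch (not taken verbatim from the paper): it recalls the standard BFGS update built on the classical secant equation and shows how such an update would be modified by replacing $y_k$ with the new difference vector $y_k^* = y_k + A_k s_k$; the concrete choices of the matrix $A_k$ are specified later in the paper.

% Standard BFGS update satisfying the classical secant equation B_{k+1} s_k = y_k:
\[
  B_{k+1} = B_k - \frac{B_k s_k s_k^{\mathsf T} B_k}{s_k^{\mathsf T} B_k s_k}
            + \frac{y_k y_k^{\mathsf T}}{y_k^{\mathsf T} s_k},
  \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k .
\]
% A BFGS-type update satisfying the new equation B_{k+1} s_k = y_k^* is obtained by
% using the modified difference vector in place of y_k (A_k carries second-order
% information from the Hessian of the objective function):
\[
  y_k^* = y_k + A_k s_k, \qquad
  B_{k+1} = B_k - \frac{B_k s_k s_k^{\mathsf T} B_k}{s_k^{\mathsf T} B_k s_k}
            + \frac{y_k^* (y_k^*)^{\mathsf T}}{(y_k^*)^{\mathsf T} s_k}.
\]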