Journal of Mathematics and Statistics 6 (3): 246-252, 2010
ISSN 1549-3644
© 2010 Science Publications
Corresponding Author: M.Y. Waziri, Department of Mathematics, Faculty of Science,
University Putra Malaysia 43400 Serdang, Malaysia
A New Newton’s Method with Diagonal Jacobian Approximation
for Systems of Nonlinear Equations
M.Y. Waziri, W.J. Leong, M.A. Hassan and M. Monsi
Department of Mathematics, Faculty of Science,
University Putra Malaysia 43400 Serdang, Malaysia
Abstract: Problem statement: The major weaknesses of Newton's method for nonlinear equations are the computation of the Jacobian matrix and the solution of a system of n linear equations at each iteration. Approach: Function derivatives can be quite costly, and the Jacobian is computationally expensive, requiring the evaluation (and storage) of an n×n matrix at every iteration. Results: This storage requirement becomes unrealistic when n is large. We propose a new method that approximates the Jacobian by a diagonal matrix, aiming to reduce the storage requirement, computational cost and CPU time, as well as to avoid solving n linear equations at each iteration. Conclusion/Recommendations: The proposed method is significantly cheaper than Newton's method, much faster than the fixed Newton method, and suitable for small, medium and large scale nonlinear systems with dense or sparse Jacobians. Numerical experiments were carried out which show that the proposed method is very encouraging.
Key words: Nonlinear equations, large scale systems, Newton’s method, diagonal updating, Jacobian
approximation
INTRODUCTION
Consider the system of nonlinear equations:

F(x) = 0 (1)

where F : R^n → R^n has the following properties:

• There exists x* with F(x*) = 0
• F is continuously differentiable in a neighbourhood of x*
• F'(x*) = J_F(x*) ≠ 0 (i.e., the Jacobian at x* is nonsingular)
The most well-known method for solving (1) is the classical Newton's method, which has the following general form: given an initial point x_0, we compute a sequence of corrections {s_k} and iterates {x_k} as follows:

Algorithm CN (Newton's method): For k = 0, 1, 2,..., where J_F(x_k) is the Jacobian matrix of F at x_k:

Stage 1: Solve J_F(x_k) s_k = -F(x_k)
Stage 2: Update x_{k+1} = x_k + s_k
Stage 3: Repeat Stages 1-2 until convergence
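Algorithm CN can be sketched in a few lines of numpy. The test system below (root at (1, 1)) is an illustrative assumption chosen for demonstration, not a problem from the paper:

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 by Algorithm CN: J_F(x_k) s_k = -F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(J(x), -Fx)  # Stage 1: solve n linear equations
        x = x + s                       # Stage 2: update the iterate
    return x

# Illustrative system: x1^2 + x2^2 - 2 = 0, x1 - x2 = 0, root at (1, 1)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = newton(F, J, [2.0, 0.5])
```

Note that each pass through the loop evaluates and factorizes a fresh n×n Jacobian, which is precisely the cost the paper aims to avoid.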
The convergence of Algorithm CN is attractive. However, the method depends on a good starting point (Dennis, 1983). Newton's method converges to x* provided the initial guess x_0 is sufficiently close to x*, J_F(x*) ≠ 0 and J_F(x) is Lipschitz continuous; the rate is quadratic (Dennis, 1983), i.e.:

||x_{k+1} - x*|| ≤ h ||x_k - x*||^2 (2)

for some constant h.
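The quadratic rate in (2) is easy to observe numerically: each error is roughly h times the square of the previous one. A minimal sketch on the scalar equation x^2 - 2 = 0 (an assumed illustrative example, where h tends to 1/(2*sqrt(2)) ≈ 0.3536):

```python
import numpy as np

# Scalar Newton iteration for f(x) = x^2 - 2, with root x* = sqrt(2)
x, xs = 2.0, np.sqrt(2.0)
errors = []
for _ in range(5):
    errors.append(abs(x - xs))          # record ||x_k - x*||
    x = x - (x * x - 2.0) / (2.0 * x)   # Newton step: x - f(x)/f'(x)

# Quadratic convergence: e_{k+1} / e_k^2 approaches a constant h
ratios = [errors[k + 1] / errors[k]**2 for k in range(len(errors) - 1)]
```

The recorded errors drop from about 5.9e-1 to roughly 1.6e-12 in four steps, with the ratio e_{k+1}/e_k^2 settling near 0.3536.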
Even though it has good qualities, the CN method has some major shortfalls as the dimension of the system increases (see Dennis, 1983 for details):

• Computation and storage of the Jacobian at each iteration
• Solving a system of n linear equations at each iteration
• Increasing CPU time consumption as the dimension of the system grows
There are several strategies to overcome the above drawbacks. The first is the fixed Newton method, i.e., setting J_F(x_k) ≡ J_F(x_0) for k > 0. Fixed Newton is the easiest and simplest strategy to overcome the shortfalls
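The fixed Newton strategy amounts to reusing the Jacobian from the starting point in every iteration, which trades the quadratic rate for linear convergence. A sketch, on the same kind of illustrative test system assumed above (not a problem from the paper):

```python
import numpy as np

def fixed_newton(F, J, x0, tol=1e-10, max_iter=200):
    """Fixed Newton: J_F(x_k) ≡ J_F(x_0) for all k > 0."""
    x = np.asarray(x0, dtype=float)
    J0 = J(x)  # Jacobian evaluated once; in practice one would also factorize it once
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + np.linalg.solve(J0, -Fx)  # reuse the frozen Jacobian
    return x

# Illustrative system: x1^2 + x2^2 - 2 = 0, x1 - x2 = 0, root at (1, 1)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = fixed_newton(F, J, [2.0, 0.5])
```

Only one Jacobian evaluation is needed, but more iterations are typically required than with Algorithm CN, which is why max_iter is larger here.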