Applied Mathematics and Computation 341 (2019) 15–30
A reduced-space line-search method for unconstrained
optimization via random descent directions
Elias D. Nino-Ruiz∗, Carlos Ardila, Jesus Estrada, Jose Capacho
Applied Math and Computer Science Laboratory, Department of Computer Science, Universidad del Norte, Barranquilla 080001, Colombia
Article info
MSC:
49K10
49M05
49M15
Keywords:
Reduced-space optimization
Line search
Random descent directions
Abstract
In this paper, we propose an iterative method based on reduced-space approximations for unconstrained optimization problems. The method works as follows: at each iteration, samples are taken about the current solution by using, for instance, a Normal distribution; for all samples, gradients are computed (or approximated) in order to build reduced spaces onto which descent directions of the cost function are estimated. Intermediate solutions are then updated along such directions. The overall process is repeated until some stopping criterion is satisfied. The convergence of the proposed method is proven theoretically under classic line-search assumptions. Experimental tests are performed on well-known benchmark optimization problems and a non-linear data assimilation problem. The results reveal that, as the number of sample points increases, gradient norms converge to zero faster; moreover, in the data assimilation context, error norms decrease by several orders of magnitude with respect to prior errors when the assimilation step is performed by means of the proposed formulation.
© 2018 Elsevier Inc. All rights reserved.
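As a rough illustration of the loop described in the abstract, the following Python sketch samples about the current iterate, builds a reduced space from the gradients at the samples, and performs a backtracking line search along the estimated descent direction. All specifics here (sampling scale, least-squares projection onto the sampled gradients, Armijo constants) are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def reduced_space_descent(f, grad, x0, n_samples=10, sigma=1e-2,
                          max_iter=100, tol=1e-6, seed=None):
    """Illustrative sketch of the sampling-based reduced-space loop."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:  # stopping criterion on the gradient norm
            return x
        # Draw samples about the current solution from a Normal distribution.
        samples = x + sigma * rng.standard_normal((n_samples, x.size))
        # Gradients at the samples span the reduced space.
        G = np.stack([grad(s) for s in samples], axis=1)  # shape (n, N)
        # Estimate a descent direction: project -grad f(x) onto span(G).
        coeffs, *_ = np.linalg.lstsq(G, -g, rcond=None)
        d = G @ coeffs
        if d @ g >= 0:  # safeguard: fall back to steepest descent
            d = -g
        # Backtracking (Armijo) line search along d.
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
    return x

# Example usage on a simple convex quadratic.
x_min = reduced_space_descent(lambda x: x @ x, lambda x: 2.0 * x, np.ones(5))
```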
1. Introduction
Many real-life problems in different sciences and fields can be reduced to optimization problems of the form:
\[ x^{*} = \arg\min_{x} f(x), \tag{1} \]
where $x \in \mathbb{R}^{n \times 1}$ is a vector of state variables in an $n$-dimensional Euclidean space, and $f : \mathbb{R}^{n \times 1} \rightarrow \mathbb{R}$ is a differentiable function. For instance, in data assimilation [1], posterior modes of error distributions are obtained by the maximization of posterior error distributions [2]:
\[ x^{a} = \arg\min_{x \in \mathbb{R}^{n \times 1}} \, -P(x \mid y), \]
where $x^{a} \in \mathbb{R}^{n \times 1}$ is known as the analysis state, $n$ is the number of variables (in this context, the model resolution), $y \in \mathbb{R}^{r \times 1}$ is the observation, $r$ is the number of observed components from the numerical grid, and $P(x \mid y)$ stands for the kernel of the analysis distribution.
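As a concrete (illustrative) instance: under the common assumption of Gaussian background and observation errors, the kernel $P(x \mid y)$ leads to the well-known 3D-Var cost function. The sketch below assumes the standard ingredients of that setting, namely a background state $x_b$, error covariance matrices $B$ and $R$, and an observation operator $H$; these symbols are not defined in this paper and are used here only for illustration.

```python
import numpy as np

def neg_log_posterior(x, x_b, B_inv, y, R_inv, H):
    """-log P(x|y) up to an additive constant (the 3D-Var cost function)."""
    dx = x - x_b   # departure from the background (prior) state
    dy = y - H(x)  # innovation: observation minus mapped state
    return 0.5 * dx @ B_inv @ dx + 0.5 * dy @ R_inv @ dy
```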
Also, in the context of inverse problems, non-linear models [3]

\[ y = g(x, \theta) + \kappa, \]
∗ Corresponding author.
E-mail addresses: enino@uninorte.edu.co (E.D. Nino-Ruiz), cardila@uninorte.edu.co (C. Ardila), jesusdavide@uninorte.edu.co (J. Estrada),
jcapacho@uninorte.edu.co (J. Capacho).
URL: https://sites.google.com/a/vt.edu/eliasnino/ (E.D. Nino-Ruiz)
https://doi.org/10.1016/j.amc.2018.08.020