Journal of Global Optimization 21: 223–237, 2001.
© 2001 Kluwer Academic Publishers. Printed in the Netherlands.
Finding GM-estimators with global
optimization techniques
RAFAEL BLANQUERO, EMILIO CARRIZOSA and EDUARDO CONDE
Departamento de Estadística e Investigación Operativa, Universidad de Sevilla, Sevilla, Spain
(e-mail: rblanque@cica.es; ecarriz@cica.es; educon@cica.es)
Abstract. In this note we address the problem of finding the GM-estimator for the location parameter
of a univariate random variable. When this problem is non-convex but d.c., one can use a standard covering method, which, in the one-dimensional case, has a simple form. In this paper we exploit the structure of the problem in order to obtain d.c. decompositions with certain optimality properties in the application of the algorithm. Numerical results show that this general-purpose algorithm
outperforms previous ad-hoc methods for this problem.
Key words: GM-estimators, Robust estimation, D.C. optimization, Covering methods
1. The model
Given a sample of $n$ observations $y_1, y_2, \ldots, y_n$, the determination of a parameter that, in some sense, represents the data is a classical problem in Statistics. In recent decades the traditional least-squares method has been more and more frequently replaced by other approaches with better robustness properties [19].
In particular, an M-estimator [14, 15] is an optimal solution of an optimization program of the form

$$\inf_{\theta \in \mathbb{R}} \sum_{1 \leq j \leq n} \rho(r_j(\theta)), \qquad (1)$$
where $(r_1(\theta), \ldots, r_n(\theta))$ is the vector of residuals,

$$r_j(\theta) = y_j - \theta, \qquad j = 1, 2, \ldots, n,$$

and $\rho : \mathbb{R} \longrightarrow \mathbb{R}$ is some continuous, even function, nondecreasing in $\mathbb{R}_{+}$.
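To fix ideas, the sketch below evaluates the objective of (1) and locates its minimizer by a naive grid search. Huber's function is used only as an illustrative choice of $\rho$ (the text merely requires $\rho$ to be continuous, even and nondecreasing in $\mathbb{R}_{+}$), and the names rho_huber and m_objective are hypothetical.

import numpy as np

# Minimal sketch of problem (1), assuming Huber's function as a concrete
# choice of rho; rho_huber and m_objective are illustrative names only.

def rho_huber(r, k=1.345):
    # Huber's rho: quadratic near the origin, linear in the tails.
    a = np.abs(r)
    return np.where(a <= k, 0.5 * a ** 2, k * a - 0.5 * k ** 2)

def m_objective(theta, y, rho=rho_huber):
    # Objective of (1): sum of rho over the residuals r_j(theta) = y_j - theta.
    return np.sum(rho(y - theta))

# Crude illustration: locate the minimizer by a grid search over [min y, max y].
y = np.array([1.1, 0.9, 1.3, 8.0])  # sample with one outlying observation
grid = np.linspace(y.min(), y.max(), 1001)
theta_hat = grid[np.argmin([m_objective(t, y) for t in grid])]
print(theta_hat)

For a convex $\rho$ such as Huber's, the objective of (1) is convex in $\theta$ and any local method suffices; it is the non-convex (but d.c.) choices of $\rho$ that motivate the covering method studied in this paper.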
The class of M-estimators has been further enlarged to the class of so-called GM-estimators (generalized M-estimators), in which the influence of each residual is made dependent on the observation $y_j$. In other words, $\theta^{*}$ is said to be a GM-estimator if it solves an optimization problem of the form

$$\inf_{\theta \in \mathbb{R}} \sigma(\theta), \qquad (2)$$
with

$$\sigma(\theta) = \sum_{1 \leq j \leq n} \rho_j(\theta),$$