Statistics and Probability Letters 78 (2008) 2685–2691
Approximate predictive pivots for autoregressive processes
José M. Corcuera
Universitat de Barcelona, Facultat de Matemàtiques, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain
Article history:
Received 7 December 2006
Received in revised form 2 February 2008
Accepted 14 March 2008
Available online 23 March 2008
MSC: 62M10; 62M20; 62G15; 62E20
Abstract

In this paper we consider an autoregressive process whose parameters are unknown and try to obtain pivots for predicting future observations. If we make a probabilistic prediction with the estimated model, where the parameters are estimated from a sample of size n, we introduce an error of order n^{-1} in the coverage probabilities of the prediction intervals. However, we can reduce the order of the error by adequately calibrating the estimated prediction bounds. The solution obtained can be expressed in terms of an approximate predictive pivot.
© 2008 Elsevier B.V. All rights reserved.
1. Introduction
The general setting is that of prediction of an absolutely continuous (a.c.) random variable Z based on the observation y = (y_1, y_2, ..., y_n) corresponding to a random vector Y = (Y_1, Y_2, ..., Y_n), where the laws of Y and Z depend on a common and unknown parameter θ ∈ Θ ⊂ R^d. A prediction statement about Z is often given by prediction limits, i.e. real functions K_α(·) such that

P_θ{Z ≤ K_α(Y)} = α,

for every θ ∈ Θ and for any fixed α ∈ (0, 1). The above probability is usually called the coverage probability and is calculated with respect to the joint density of Z and Y. Sometimes the existence of exact (predictive) pivotal quantities, that is, of functions of Z and Y whose distribution does not depend on θ, permits us to find an exact solution. But this is the exception.
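For instance (a standard textbook example, not taken from the paper): if Y_1, ..., Y_n and Z are i.i.d. N(θ, σ²) with σ known, then

```latex
\[
  \frac{Z - \bar{Y}}{\sigma \sqrt{1 + 1/n}} \sim N(0, 1)
\]
```

whatever the value of θ, so K_α(Y) = Ȳ + z_α σ√(1 + 1/n), with z_α the standard normal α-quantile, is an exact prediction limit. Exact pivots of this kind rarely exist outside such simple models.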
Here we look for approximate prediction limits and predictive pivots. An approximate solution is to take K_α(Y) = q_α(θ̂), where q_α(θ) is the α-quantile of the conditional distribution of Z given Y = y, which we also assume to be absolutely continuous, and θ̂ is an efficient estimator of θ. Note that, if we denote the conditional density of Z given Y = y by g(z; θ|y), then q_α(θ̂) will be the α-quantile of the so-called estimative density g(z; θ̂|y). However, these prediction limits are usually imprecise, having a coverage error of order O(n^{-1}), that is,

P_θ{Z ≤ q_α(θ̂)} = α + O(n^{-1}).
This is a well-known result; indeed, Barndorff-Nielsen and Cox (1996) suggest a way to correct these quantiles, obtaining prediction limits with a coverage error of order o(n^{-1}). The solution can be expressed in terms of a predictive density whose quantiles are precisely these prediction bounds. We will apply this method to the case where Y = (Y_1, Y_2, ..., Y_n) is such that

Y_{k+1} − μ = ∑_{j=1}^{p} φ_j (Y_{k−j+1} − μ) + ε_{k+1},   k ∈ Z,
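As a concrete sketch of this model and of the estimative one-step prediction limit it leads to, the following simulates a stationary Gaussian AR(2) (the coefficients, sample size, and least-squares fit are illustrative assumptions, not taken from the paper), estimates the parameters, and computes q_α(θ̂) for Y_{n+1}:

```python
# Simulate Y_{k+1} - mu = sum_j phi_j (Y_{k-j+1} - mu) + eps_{k+1}, then form
# the estimative one-step prediction limit from least-squares estimates.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
mu, phi, p = 0.5, np.array([0.6, -0.3]), 2     # assumed stationary AR(2) values
n, burn = 500, 100

y = np.zeros(n + burn)
for t in range(p, n + burn):                   # recursion, with burn-in discarded
    y[t] = mu + phi @ (y[t - p:t][::-1] - mu) + rng.normal()
y = y[burn:]

# Least-squares regression of y_t on (1, y_{t-1}, ..., y_{t-p})
X = np.column_stack([np.ones(n - p)] + [y[p - j:n - j] for j in range(1, p + 1)])
beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
c_hat, phi_hat = beta[0], beta[1:]             # intercept estimates mu*(1 - sum(phi))
resid = y[p:] - X @ beta
sigma_hat = np.sqrt(resid @ resid / (n - p - beta.size))

# Estimative alpha-quantile for Y_{n+1} given the observed path
alpha = 0.95
y_hat = c_hat + phi_hat @ y[-1:-p - 1:-1]      # point forecast from last p values
q_alpha = y_hat + NormalDist().inv_cdf(alpha) * sigma_hat
print(f"phi_hat = {phi_hat.round(3)}, one-step 95% limit = {q_alpha:.3f}")
```

This q_alpha is exactly the estimative limit whose coverage error is O(n^{-1}); the calibration developed in the paper corrects it to order o(n^{-1}).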
E-mail address: jmcorcuera@ub.edu.
doi:10.1016/j.spl.2008.03.010