Bayesian Analysis of Fractionally Integrated ARMA with Additive Noise
NAN-JUNG HSU¹* AND F. JAY BREIDT²
¹ Institute of Statistics, National Tsing-Hua University, Taiwan
² Department of Statistics, Colorado State University, USA
ABSTRACT
A new sampling-based Bayesian approach for fractionally integrated autore-
gressive moving average (ARFIMA) processes is presented. A particular
type of ARMA process is used as an approximation for the ARFIMA in a
Metropolis–Hastings algorithm, and then importance sampling is used to adjust
for the approximation error. This algorithm is relatively time-efficient because
of fast convergence in the sampling procedures and fewer computations than
competitors. Its frequentist properties are investigated through a simulation
study. The performance of the posterior means is quite comparable to that of
the maximum likelihood estimators for small samples, but the algorithm can
be extended easily to a variety of related processes, including ARFIMA plus
short-memory noise. The methodology is illustrated using the Nile River data.
Copyright © 2003 John Wiley & Sons, Ltd.
KEY WORDS importance sampling; Kalman smoothing algorithm; long memory; Markov chain Monte Carlo
INTRODUCTION
The phenomenon of long memory has been found in economics, hydrology, geophysics, and
many other fields. For time series of this type, the correlations between distant observations decay
at a rate slower than the geometric decay of autoregressive moving average (ARMA) processes.
Among all parametric models, the class of fractionally integrated autoregressive moving average
(ARFIMA) processes is used most widely in applications to characterize long-memory time series
(e.g. Geweke and Porter-Hudak, 1983; Hosking, 1984; Diebold and Rudebusch, 1989; Haslett and
Raftery, 1989).
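The contrast between hyperbolic and geometric decay can be made concrete with a small numerical sketch (not part of the original article). It uses the standard closed-form autocorrelation of an ARFIMA(0, d, 0) process, ρ(k) = Γ(1−d)Γ(k+d)/[Γ(d)Γ(k+1−d)], which behaves like k^(2d−1) at large lags, and compares it with the geometric autocorrelation φ^k of an AR(1). The parameter values d = 0.3 and φ = 0.5 are illustrative choices, not values from the paper.

```python
import math

def arfima_acf(k, d):
    """Autocorrelation of an ARFIMA(0, d, 0) process at lag k:
    rho(k) = Gamma(1-d) * Gamma(k+d) / (Gamma(d) * Gamma(k+1-d)).
    Evaluated on the log scale (lgamma) to avoid overflow at large lags."""
    if k == 0:
        return 1.0
    return math.exp(math.lgamma(1.0 - d) + math.lgamma(k + d)
                    - math.lgamma(d) - math.lgamma(k + 1.0 - d))

d = 0.3    # long-memory parameter, 0 < d < 1/2 (illustrative)
phi = 0.5  # AR(1) coefficient for comparison (illustrative)

for k in (1, 10, 100):
    long_mem = arfima_acf(k, d)  # decays hyperbolically, like k^(2d-1)
    short_mem = phi ** k         # decays geometrically
    print(f"lag {k:3d}: ARFIMA rho = {long_mem:.4f}, AR(1) rho = {short_mem:.2e}")
```

At lag 1 the formula reduces exactly to ρ(1) = d/(1−d); by lag 100 the AR(1) correlation is numerically negligible while the ARFIMA correlation remains visibly positive, which is precisely the slow decay that motivates long-memory models.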
For ARFIMA models, maximum likelihood estimators (MLEs) have been shown to be asymptoti-
cally efficient and normal under mild regularity conditions (Yajima, 1985; Dahlhaus, 1989). Sowell
(1992) derived an algorithm to evaluate the exact likelihood function recursively. The algorithm
involves the computation of a complicated autocovariance matrix and its inverse and suffers from
convergence problems, which are particularly serious for parameters near the boundary. An alterna-
tive to obtaining exact MLEs is to maximize an approximation of the likelihood function, such as
Journal of Forecasting, J. Forecast. 22, 491–514 (2003). Published online 19 September 2003 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/for.870
*Correspondence to: Nan-Jung Hsu, Institute of Statistics, National Tsing-Hua University, Hsin-Chu, Taiwan 30043. E-mail: njhsu@stat.nthu.edu.tw