International Research Journal of Applied and Basic Sciences
© 2013 Available online at www.irjabs.com
ISSN 2251-838X / Vol. 7 (9): 522-527
Science Explorer Publications
Surveying the Characteristics of Population Monte Carlo
Ehsan Fayyazi 1*, Gholamhossein Gholami 2
1. Department of Statistics, Science and Research Branch, Islamic Azad University, Fars, Iran.
2. Department of Mathematics, Faculty of sciences, Urmia University, Urmia, Iran.
Corresponding Author email: e.fayyazi@fsriau.ac.ir
ABSTRACT: The importance sampling method, like Markov chain Monte Carlo (MCMC) algorithms, is iterative, but unlike MCMC it does not depend on a starting point. The Population Monte Carlo (PMC) method consists of repeated rounds of importance sampling in which the importance functions depend on the previously produced importance samples. The advantage of this method over MCMC algorithms is that the estimator produced at each iteration is unbiased, so the algorithm can be stopped at any time. The role of the iterations is to improve the importance function (i.e. the proposal distribution) and hence the quality of the importance sampling. In this study, we survey this method through diverse examples.
Keywords: Population Monte Carlo, Importance sampling, Markov chain Monte Carlo (MCMC), mixed models, Metropolis-Hastings algorithms.
INTRODUCTION
This study presents a method named Population Monte Carlo (PMC), which combines Markov chain Monte Carlo methods, importance sampling, and importance resampling, and takes advantage of each of these techniques. To this end, we first describe an extension of importance sampling and then introduce the Population Monte Carlo method.
Population Monte Carlo
The Population Monte Carlo (PMC) algorithm is an iterated importance sampling method which, at each iteration, produces a sample approximately simulated from the target distribution, together with an adaptive scheme that calibrates the proposal distribution against the target distribution over the iterations. The theoretical basis of this method therefore lies in importance sampling rather than in MCMC and, despite its iterative nature, the estimator of the target distribution is valid (unbiased, at least to order O(1/n)) at each iteration and does not require convergence times or stopping rules.
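As a reference point, the self-normalised importance sampling estimator that PMC iterates on can be sketched as follows. This is a minimal illustration, not the paper's own code: the standard normal target and the wider normal proposal are hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: standard normal density; an unnormalised form suffices,
# since the self-normalised estimator divides by the sum of the weights.
def target(x):
    return np.exp(-0.5 * x**2)

prop_scale = 3.0          # illustrative over-dispersed normal proposal
n = 10_000
x = rng.normal(0.0, prop_scale, size=n)               # draw from the proposal q
q = np.exp(-0.5 * (x / prop_scale) ** 2) / prop_scale  # proposal density, up to a constant
w = target(x) / q                                      # importance weights pi/q
w /= w.sum()                                           # self-normalise

# Estimate E_pi[X^2] (equal to 1 for the standard normal target).
est = np.sum(w * x**2)
```

Note that the normalising constants of both densities cancel in the self-normalised estimator, which is why unnormalised forms are enough.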
Simulating the sample
In MCMC, the stationary distribution π is regarded as the limiting distribution of a Markov sequence {X_t}, with the practical consequence that X_t is approximately distributed from π once t is large enough. A rather simple extension of this perspective is, instead of simulating a single point from π, to simulate an n-sample distributed from π. In other words, one simulates from the product distribution

π^⊗n(x_1, …, x_n) = ∏_{i=1}^{n} π(x_i).
This extension is discussed in [4] and [5], together with implementations based on running n MCMC chains in parallel. In fact, the whole sample at iteration t can be used to devise the proposal distribution at iteration t + 1.
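The idea of running n chains in parallel and letting the population at iteration t tune the proposal for iteration t + 1 can be sketched as follows. This is an illustrative adaptation scheme, not one prescribed by the text: the standard normal target and the population-spread scaling rule are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Hypothetical target: standard normal, up to an additive constant.
    return -0.5 * x**2

n, n_iter = 500, 100
x = rng.normal(0.0, 5.0, size=n)   # n chains started with a wide spread

for t in range(n_iter):
    # Use the whole population at iteration t to devise the proposal for
    # iteration t + 1: here a random-walk scale based on the current spread.
    scale = 2.38 * x.std()
    prop = x + rng.normal(0.0, scale, size=n)
    # Metropolis-Hastings accept/reject for all n chains at once.
    log_alpha = log_target(prop) - log_target(x)
    accept = np.log(rng.uniform(size=n)) < log_alpha
    x = np.where(accept, prop, x)
```

After enough iterations the population is approximately distributed from the target, and its spread stabilises the adaptive scale.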
General importance sampling
The PMC algorithm can be set in a more general framework: one may use a different proposal distribution at each iteration and for each particle of the algorithm. In other words, if i denotes the sample index and t the iteration index, then X_i^(t) can be simulated from a distribution q_it which may depend on the previously generated samples while, conditionally on them, being independent of the other particles:

X_i^(t) ~ q_it(x).
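A single PMC iteration under this general scheme, with a per-particle proposal q_it, importance weighting, and multinomial resampling, can be sketched as follows. The standard normal target and the choice of q_it as a normal centred at each particle's previous value are assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def target(x):
    # Hypothetical target: standard normal density, up to a constant.
    return np.exp(-0.5 * x**2)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

n, n_iter = 2000, 10
x = rng.normal(0.0, 5.0, size=n)   # initial population
sigmas = np.full(n, 1.0)           # per-particle proposal scales for q_it

for t in range(n_iter):
    # Each particle i has its own proposal q_it: a normal centred at its
    # previous value, so q_it depends on the previously generated samples.
    x_new = rng.normal(x, sigmas)
    w = target(x_new) / normal_pdf(x_new, x, sigmas)   # weights pi / q_it
    w /= w.sum()
    # Multinomial resampling turns the weighted sample into an unweighted one.
    idx = rng.choice(n, size=n, p=w)
    x = x_new[idx]

est_mean = x.mean()
```

Because each iteration produces a properly weighted sample, the loop can be stopped at any iteration and still yield a valid estimate, which is the property emphasised above.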