Convergence times for parallel Markov chains

Béatrice Lachaud 1 and Bernard Ycart 2

1 Departament de Matemàtiques, Universitat Autònoma de Barcelona, lachaud@mat.uab.es
2 IMAG, Université Grenoble 1, bernard.ycart@imag.fr

Summary. For the numerous applications of Markov chains (in particular MCMC methods), the problem of detecting an instant at which convergence takes place is crucial. The 'cut-off phenomenon', or abrupt convergence, provides an answer to this problem. When a sample of Markov chains, or more generally of exponentially converging processes, is simulated in parallel, it remains far from its stationary distribution until a deterministic instant, and approaches it exponentially fast afterwards. The cut-off instant is explicitly known, and can be detected algorithmically using appropriate stopping times. The technique is illustrated on the Ornstein-Uhlenbeck diffusion.

1 Introduction

Markov chain Monte Carlo (MCMC) methods are now widely used across a vast range of application fields (see [6, 11] for general references). They are suited to sampling from a given probability distribution ν when the state space is too large for direct simulation, even when ν is known only up to a proportionality constant. They consist in expressing the target distribution ν as the asymptotic distribution of a certain Markov chain, which is then run for long enough that it 'reaches its equilibrium'. The crucial question is how to define a stopping time. Among the different possibilities, we will focus on the parallelization method. It consists in running n independent copies of the chain until the sample has reached stationarity. Indeed, contrary to a single copy, for n large enough the sample converges in a very abrupt way: before a certain deterministic time, the so-called 'cut-off instant', the sample remains far from its equilibrium; afterwards it converges exponentially fast.
Thus it is natural to stop the algorithm at the cut-off instant, provided this instant can be detected algorithmically. It turns out that, in many cases of practical interest, the cut-off instant is asymptotically equivalent to the 'hitting time', defined as the first instant at which an empirical mean of the sample reaches its expected value under the stationary distribution. The hitting time can be computed online, as the simulation runs.
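The stopping rule described above can be sketched numerically on the Ornstein-Uhlenbeck example. The snippet below is a minimal illustration, not the authors' implementation: it simulates n independent paths of the diffusion dX = −ρX dt + σ dW by an Euler scheme, all started from the same point x0, and returns the first instant at which the empirical mean of the sample crosses its stationary expectation (which is 0 for this process). The parameter names and default values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_hitting_time(n=1000, x0=5.0, rho=1.0, sigma=1.0, dt=0.01, t_max=50.0):
    """Run n independent Ornstein-Uhlenbeck paths dX = -rho*X dt + sigma dW
    (Euler scheme, step dt), all started at x0 > 0, and return the first
    time the empirical mean of the sample crosses its stationary mean 0.
    Illustrative sketch only; parameter values are assumptions."""
    x = np.full(n, x0)          # n parallel copies of the chain
    t = 0.0
    noise_scale = sigma * np.sqrt(dt)
    while t < t_max:
        # one Euler step for all n copies at once
        x += -rho * x * dt + noise_scale * rng.standard_normal(n)
        t += dt
        # hitting time: empirical mean reaches its stationary value
        if x.mean() <= 0.0:
            return t
    return t_max  # no crossing observed before t_max
```

Since each path's mean decays like x0·e^(−ρt) while the fluctuation of the empirical mean is of order σ/√(2ρn), the crossing occurs near (1/ρ)·log(x0·√(2ρn)/σ), growing only logarithmically in n; this is the deterministic cut-off behaviour the text describes.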