Adaptive Markov Chain Monte Carlo through Regeneration
Walter R. Gilks, Gareth O. Roberts and Sujit K. Sahu
Journal of the American Statistical Association
Vol. 93, No. 443 (Sep., 1998), pp. 1045-1054
Stable URL: http://www.jstor.org/stable/2669848
Page Count: 10
Markov chain Monte Carlo (MCMC) is used for evaluating expectations of functions of interest under a target distribution π. This is done by calculating averages over the sample path of a Markov chain having π as its stationary distribution. For computational efficiency, the Markov chain should be rapidly mixing. This sometimes can be achieved only by careful design of the transition kernel of the chain, on the basis of a detailed preliminary exploratory analysis of π. An alternative approach might be to allow the transition kernel to adapt whenever new features of π are encountered during the MCMC run. However, if such adaptation occurs infinitely often, then the stationary distribution of the chain may be disturbed. We describe a framework, based on the concept of Markov chain regeneration, which allows adaptation to occur infinitely often but does not disturb the stationary distribution of the chain or the consistency of sample path averages.
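The idea in the abstract can be illustrated with a toy sketch. This is not the paper's construction (which uses general Markov chain regeneration via splitting); it is a hypothetical minimal analogue: a Metropolis chain on a small discrete state space, where every return to a fixed state (an atom) is a regeneration time, and the proposal step size is adapted only at those regenerations. Each tour between regenerations therefore runs under a fixed kernel with π as its stationary distribution, so the adaptation does not disturb π. All names and parameters here (the target weights, the step-size rule) are invented for illustration.

```python
import random

# Toy illustration (NOT the paper's construction): a Metropolis chain on
# {0, ..., 4} whose returns to state 0 are regeneration times. The proposal
# step size is adapted ONLY at regenerations, so each tour between
# regenerations uses a fixed kernel and the stationary distribution pi
# (proportional to `weights`) is not disturbed.

random.seed(0)

weights = [5, 4, 3, 2, 1]   # unnormalised target pi on {0, ..., 4}
K = len(weights)

def metropolis_step(x, step):
    """One Metropolis update with a symmetric +/- step random-walk proposal."""
    y = x + random.choice([-step, step])
    if 0 <= y < K and random.random() < weights[y] / weights[x]:
        return y, True
    return x, False   # off the state space, or Metropolis rejection

x = 0                 # start at the atom (state 0)
step = 1              # current, adaptable proposal step size
accepts = tour_len = 0
total_f = n = 0

for _ in range(200_000):
    x, accepted = metropolis_step(x, step)
    accepts += accepted
    tour_len += 1
    total_f += x      # accumulate f(x) = x to estimate E_pi[X]
    n += 1
    if x == 0:        # regeneration: the chain is back at the atom
        # Adapt between tours only: widen the proposal if the last tour's
        # acceptance rate was high, narrow it otherwise (an arbitrary rule,
        # chosen just to show adaptation confined to regeneration times).
        rate = accepts / tour_len
        step = 2 if rate > 0.5 else 1
        accepts = tour_len = 0

print(total_f / n)    # sample-path average; approaches E_pi[X] = 4/3
```

Because the kernel is frozen within each tour, the tours are conditionally well-behaved and the overall sample-path average remains consistent for E_π[f], which is the property the regeneration framework in the paper establishes in full generality.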
Journal of the American Statistical Association © 1998 American Statistical Association