General state space Markov chains and MCMC algorithms
Probability Surveys, 2004
Abstract

Cited by 177 (35 self)
This paper surveys various results about Markov chains on general (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the motivation and context for the theory that follows. Then, sufficient conditions for geometric and uniform ergodicity are presented, along with quantitative bounds on the rate of convergence to stationarity. Many of these results are proved using direct coupling constructions based on minorisation and drift conditions. Necessary and sufficient conditions for Central Limit Theorems (CLTs) are also presented, in some cases proved via the Poisson Equation or direct regeneration constructions. Finally, optimal scaling and weak convergence results for Metropolis-Hastings algorithms are discussed. None of the results presented is new, though many of the proofs are. We also describe some open problems.
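As a rough, self-contained illustration of the Metropolis-Hastings construction that motivates this theory, here is a minimal random-walk sampler targeting a standard normal; the target, step size, and chain length are illustrative choices, not taken from the paper:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step^2),
    accept with probability min(1, pi(x') / pi(x))."""
    rng = rng or random.Random(0)
    x = x0
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        # Log acceptance ratio; the proposal is symmetric, so its terms cancel.
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

# Target: standard normal, log pi(x) = -x^2/2 up to an additive constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
```

For this target the chain is in fact geometrically ergodic, which is the kind of property the survey's drift and minorisation conditions are designed to verify.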
Rates of Convergence for Gibbs Sampling for Variance Component Models
Ann. Stat., 1991
Abstract

Cited by 40 (10 self)
This paper analyzes the Gibbs sampler applied to a standard variance component model, and considers the question of how many iterations are required for convergence. It is proved that for K location parameters, with J observations each, the number of iterations required for convergence (for large K and J) is a constant times …
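The Gibbs sampler analyzed here alternates draws from full conditional distributions. A minimal sketch of that mechanism on a toy bivariate normal with correlation rho (not the variance component model of the paper; all tuning choices are illustrative):

```python
import random

def gibbs_bivariate_normal(rho, n_steps, rng=None):
    """Gibbs sampler alternating the two full conditionals of a bivariate
    normal with unit marginal variances and correlation rho:
    x | y ~ N(rho * y, 1 - rho^2) and y | x ~ N(rho * x, 1 - rho^2)."""
    rng = rng or random.Random(1)
    x, y = 0.0, 0.0
    sd = (1.0 - rho * rho) ** 0.5
    samples = []
    for _ in range(n_steps):
        x = rng.gauss(rho * y, sd)   # draw x | y
        y = rng.gauss(rho * x, sd)   # draw y | x
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.9, n_steps=20000)
```

The closer rho is to 1, the slower the chain mixes, which mirrors the paper's question of how the number of iterations needed scales with the model's structure.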
Parallel computing and Monte Carlo algorithms
1999
Abstract

Cited by 38 (0 self)
We argue that Monte Carlo algorithms are ideally suited to parallel computing, and that "parallel Monte Carlo" should be more widely used. We consider a number of issues that arise, including dealing with slow or unreliable computers. We also discuss the possibilities of parallel Markov chain Monte Carlo. We illustrate our results with actual computer experiments.
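Monte Carlo is "embarrassingly parallel" because independent workers can each produce an estimate and the results can simply be averaged. A minimal sketch of that idea, simulating independent seeded workers estimating pi (a toy task; the worker function and seeds are illustrative, not from the paper):

```python
import random

def worker_estimate(seed, n):
    """One independent Monte Carlo worker: estimate pi from the fraction
    of uniform points in the unit square that land inside the quarter disc."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * hits / n

# Each "machine" gets its own independent random stream; results are
# averaged at the end, so a slow or failed worker could simply be
# dropped from the average, as the paper's discussion of unreliable
# computers suggests.
estimates = [worker_estimate(seed, 50_000) for seed in range(8)]
combined = sum(estimates) / len(estimates)
```

In a real deployment each call would run on a separate process or machine; the combining step is identical.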
Possible biases induced by MCMC convergence diagnostics
1997
Abstract

Cited by 22 (2 self)
This paper is organised as follows. In Section 2, we present an oversimplified version of a convergence diagnostic, and study analytically its performance on certain simple Markov chains. We restrict ourselves primarily to chains which in fact produce i.i.d. samples from …
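To see why diagnostics can mislead even on i.i.d. output, consider a deliberately oversimplified stopping rule (a hypothetical stand-in, not the diagnostic the paper analyzes): declare convergence as soon as two consecutive batch means agree to within a tolerance. Even on a stream that is already i.i.d., the stopping time is random, so conditioning on it can bias the retained samples:

```python
import random

def naive_diagnostic(stream, batch=100, tol=0.05):
    """Oversimplified diagnostic: declare 'converged' as soon as two
    consecutive batch means agree to within tol.  Returns the number
    of batches consumed before stopping, or None if it never fires."""
    prev = None
    for k in range(1, 1000):
        mean = sum(next(stream) for _ in range(batch)) / batch
        if prev is not None and abs(mean - prev) < tol:
            return k
        prev = mean
    return None

def iid_normal(seed):
    rng = random.Random(seed)
    while True:
        yield rng.gauss(0.0, 1.0)

# Even for an i.i.d. (already "converged") stream, the diagnostic fires
# at a random time that varies from run to run purely by chance.
stops = [naive_diagnostic(iid_normal(s)) for s in range(20)]
```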
On the use of auxiliary variables in Markov chain Monte Carlo sampling
Scandinavian Journal of Statistics, 1997
Abstract

Cited by 19 (1 self)
We study the slice sampler, a method of constructing a reversible Markov chain with a specified invariant distribution. Given an independence Metropolis-Hastings algorithm, it is always possible to construct a slice sampler that dominates it in the Peskun sense. This means that the resulting Markov chain produces estimates with a smaller asymptotic variance. Furthermore, the slice sampler has a smaller second-largest eigenvalue than the corresponding independence Metropolis-Hastings algorithm. This ensures faster convergence to the distribution of interest. A sufficient condition for uniform ergodicity of the slice sampler is given, and an upper bound for the rate of convergence to stationarity is provided.
Keywords: Auxiliary variables, Slice sampler, Peskun ordering, Metropolis-Hastings algorithm, Uniform ergodicity.
1. Introduction. The slice sampler is a method of constructing a reversible Markov transition kernel with a given invariant distribution. Auxiliary variables ar...
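A minimal sketch of the slice sampler's auxiliary-variable construction, for a target where the slice can be computed in closed form (the standard normal is an illustrative choice, not an example from the paper): given x, draw a height u uniformly under f(x), then draw the new x uniformly on the horizontal slice {x : f(x) >= u}.

```python
import math
import random

def slice_sampler_normal(n_steps, rng=None):
    """Slice sampler for the unnormalised standard normal density
    f(x) = exp(-x^2 / 2).  Here the slice {x : f(x) >= u} is exactly
    the interval [-sqrt(-2 log u), sqrt(-2 log u)], so it can be
    sampled directly rather than by stepping-out procedures."""
    rng = rng or random.Random(2)
    x = 0.0
    samples = []
    for _ in range(n_steps):
        u = rng.uniform(0.0, math.exp(-0.5 * x * x))  # auxiliary height
        w = math.sqrt(-2.0 * math.log(u))             # slice half-width
        x = rng.uniform(-w, w)                        # uniform on the slice
        samples.append(x)
    return samples

samples = slice_sampler_normal(20000)
```

The transition is reversible with respect to f by construction, which is the property the paper's Peskun comparison with independence Metropolis-Hastings relies on.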
Sufficient Conditions for Torpid Mixing of Parallel and Simulated Tempering
Abstract

Cited by 10 (3 self)
We obtain upper bounds on the spectral gap of Markov chains constructed by parallel and simulated tempering, and provide a set of sufficient conditions for torpid mixing of both techniques. Combined with the results of [22], these results yield a two-sided bound on the spectral gap of these algorithms. We identify a persistence property of the target distribution, and show that it can unexpectedly lead to slow mixing that commonly used convergence diagnostics will fail to detect. For a multimodal distribution, the persistence is a measure of how “spiky”, or tall and narrow, one peak is relative to the other peaks of the distribution. We show that this persistence phenomenon can be used to explain the torpid mixing of parallel and simulated tempering on the ferromagnetic mean-field Potts model shown previously. We also illustrate how it causes torpid mixing of tempering on a mixture of normal distributions with unequal covariances in R^M, a previously unknown result with relevance to statistical inference problems. More generally, anytime a multimodal distribution includes both very narrow and very wide peaks of comparable probability mass, parallel and simulated tempering are shown to mix slowly.
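For readers unfamiliar with the algorithm being analyzed, here is a minimal parallel tempering sketch on a toy one-dimensional bimodal target (not the Potts model or unequal-covariance mixture of the paper; the temperature ladder, step sizes, and target are all illustrative): each chain targets pi^(1/T), and adjacent chains occasionally attempt a state swap.

```python
import math
import random

def log_target(x):
    # Bimodal toy target: equal-weight mixture of N(-3, 1) and N(3, 1),
    # up to an additive constant, computed via log-sum-exp for stability.
    a = -0.5 * (x - 3.0) ** 2
    b = -0.5 * (x + 3.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def parallel_tempering(n_steps, temps=(1.0, 4.0, 16.0), step=1.0, rng=None):
    """Minimal parallel tempering: one random-walk Metropolis update per
    chain at each temperature, then one attempted swap between a random
    adjacent pair, accepted with the standard tempered-exchange probability."""
    rng = rng or random.Random(3)
    xs = [0.0] * len(temps)
    cold = []
    for _ in range(n_steps):
        for i, T in enumerate(temps):
            prop = xs[i] + rng.gauss(0.0, step * math.sqrt(T))
            if math.log(rng.random()) < (log_target(prop) - log_target(xs[i])) / T:
                xs[i] = prop
        j = rng.randrange(len(temps) - 1)  # propose swapping chains j, j+1
        a = ((log_target(xs[j]) - log_target(xs[j + 1]))
             * (1.0 / temps[j + 1] - 1.0 / temps[j]))
        if math.log(rng.random()) < a:
            xs[j], xs[j + 1] = xs[j + 1], xs[j]
        cold.append(xs[0])
    return cold

cold = parallel_tempering(20000)
```

On this easy symmetric target the cold chain hops between modes via the hot chains; the paper's point is that when one mode is much spikier than the others, exactly this swap mechanism breaks down and mixing becomes torpid.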
Ordering, Slicing And Splitting Monte Carlo Markov Chains
1998
Abstract

Cited by 8 (3 self)
Markov chain Monte Carlo is a method of approximating the integral of a function f with respect to a distribution π. A Markov chain that has π as its stationary distribution is simulated, producing samples X_1, X_2, …. The integral is approximated by taking the average of f(X_n) over the sample path. The standard way to construct such Markov chains is the Metropolis-Hastings algorithm. The class P of all Markov chains having π as their unique stationary distribution is very large, so it is important to have criteria telling when one chain performs better than another. The Peskun ordering is a partial ordering on P. If two Markov chains are Peskun ordered, then the better chain has smaller variance in the central limit theorem for every function f that has a variance. Peskun ordering is sufficient for this but not necessary. We study the implications of the Peskun ordering in both finite and general state spaces. Unfortunately there are many Metropolis-Hastings samplers that are...
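On a finite state space the Peskun ordering reduces to an elementwise comparison of off-diagonal transition probabilities. A minimal check of that criterion, with two hypothetical 2-state kernels (both reversible with respect to the uniform distribution; the matrices are invented for illustration):

```python
def peskun_dominates(P1, P2):
    """Check the Peskun ordering on a finite state space: P1 dominates P2
    if every off-diagonal transition probability of P1 is at least as
    large (both chains assumed reversible with the same stationary pi)."""
    n = len(P1)
    return all(P1[i][j] >= P2[i][j]
               for i in range(n) for j in range(n) if i != j)

# Hypothetical 2-state kernels, both with stationary pi = (0.5, 0.5).
P_fast = [[0.1, 0.9],
          [0.9, 0.1]]   # leaves the current state more often
P_slow = [[0.6, 0.4],
          [0.4, 0.6]]

# P_fast dominates P_slow in the Peskun sense, so its ergodic averages
# have no larger asymptotic variance for any f with a variance.
```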
Theoretical rates of convergence for Markov chain Monte Carlo
Proceedings of Interface '94, 1994
Abstract

Cited by 4 (0 self)
We present a general method for proving rigorous, a priori bounds on the number of iterations required to achieve convergence of Markov chain Monte Carlo. We describe bounds for specific models of the Gibbs sampler, which have been obtained from the general method. We discuss possibilities for obtaining bounds more generally.
1. Introduction. Markov chain Monte Carlo techniques, including the Metropolis-Hastings algorithm (Metropolis et al., 1953; Hastings, 1970), data augmentation (Tanner and Wong, 1986), and the Gibbs sampler (Geman and Geman, 1984; Gelfand and Smith, 1990) have become very popular in recent years as a way of generating a sample from complicated probability distributions (such as posterior distributions in Bayesian inference problems). A fundamental issue regarding such techniques is their convergence properties, specifically whether or not the algorithm will converge to the correct distribution, and if so how quickly. Many general convergence results (e.g. Tierne...
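The simplest a priori bound of this kind comes from a one-step minorisation condition holding on the whole space: if P(x, ·) ≥ ε ν(·) for all x, a coupling argument gives ||P^n(x, ·) − π||_TV ≤ (1 − ε)^n, and the required iteration count follows by arithmetic. A small sketch of that computation (the specific ε and tolerance are illustrative, not values from the paper):

```python
import math

def iterations_for_tv_bound(eps, delta):
    """Under a one-step minorisation P(x, .) >= eps * nu(.) holding on the
    whole state space, coupling gives ||P^n(x, .) - pi||_TV <= (1 - eps)^n,
    so n = ceil(log(delta) / log(1 - eps)) iterations suffice for
    total-variation distance at most delta."""
    return math.ceil(math.log(delta) / math.log(1.0 - eps))

# Example: minorisation constant eps = 0.1, target accuracy delta = 0.01.
n = iterations_for_tv_bound(eps=0.1, delta=0.01)
```

Bounds for realistic Gibbs samplers, as the paper describes, require the sharper machinery of local minorisation plus drift conditions, but the arithmetic at the end has this same shape.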