Results 1–10 of 15
General state space Markov chains and MCMC algorithms
 PROBABILITY SURVEYS
, 2004
"... This paper surveys various results about Markov chains on general (noncountable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the motivation and context for the theory which follows. Then, sufficient conditions for geometric and uniform e ..."
Abstract

Cited by 177 (35 self)
This paper surveys various results about Markov chains on general (noncountable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the motivation and context for the theory which follows. Then, sufficient conditions for geometric and uniform ergodicity are presented, along with quantitative bounds on the rate of convergence to stationarity. Many of these results are proved using direct coupling constructions based on minorisation and drift conditions. Necessary and sufficient conditions for Central Limit Theorems (CLTs) are also presented, in some cases proved via the Poisson Equation or direct regeneration constructions. Finally, optimal scaling and weak convergence results for Metropolis-Hastings algorithms are discussed. None of the results presented is new, though many of the proofs are. We also describe some Open Problems.
Metropolized Independent Sampling with Comparisons to Rejection Sampling and Importance Sampling
, 1996
"... this paper, a special MetropolisHastings type algorithm, Metropolized independent sampling, proposed firstly in Hastings (1970), is studied in full detail. The eigenvalues and eigenvectors of the corresponding Markov chain, as well as a sharp bound for the total variation distance between the nth ..."
Abstract

Cited by 149 (4 self)
In this paper, a special Metropolis-Hastings type algorithm, Metropolized independent sampling, first proposed in Hastings (1970), is studied in full detail. The eigenvalues and eigenvectors of the corresponding Markov chain, as well as a sharp bound for the total variation distance between the n-th updated distribution and the target distribution, are provided. Furthermore, the relationships between this scheme, rejection sampling, and importance sampling are studied, with emphasis on their relative efficiencies. It is shown that Metropolized independent sampling is superior to rejection sampling in two aspects: asymptotic efficiency and ease of computation. Key Words: Coupling, Delta method, Eigen analysis, Importance ratio.
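The Metropolized independent sampler summarized in this abstract is simple to state: propose from a fixed distribution q, independently of the current state, and accept via the Metropolis ratio of importance weights w = pi/q. A minimal sketch (the Exp(1) target and Exp(1/2) proposal here are illustrative choices, not from the paper):

```python
import math
import random

def metropolized_independence_sampler(log_target, log_proposal, sample_proposal, n_iter, x0):
    """Metropolized independent sampling: propose y ~ q independently of the
    current state x, accept with probability min(1, w(y)/w(x)), w = pi/q."""
    x = x0
    log_w_x = log_target(x) - log_proposal(x)
    chain = []
    for _ in range(n_iter):
        y = sample_proposal()
        log_w_y = log_target(y) - log_proposal(y)
        # Accept with probability min(1, w(y)/w(x)), computed on the log scale.
        if random.random() < math.exp(min(0.0, log_w_y - log_w_x)):
            x, log_w_x = y, log_w_y
        chain.append(x)
    return chain

# Illustrative run: target Exp(1), heavier-tailed proposal Exp(1/2), so the
# importance ratio w(x) = 2 * exp(-x/2) is bounded and the chain mixes well.
random.seed(0)
chain = metropolized_independence_sampler(
    log_target=lambda x: -x,
    log_proposal=lambda x: math.log(0.5) - 0.5 * x,
    sample_proposal=lambda: random.expovariate(0.5),
    n_iter=20000,
    x0=1.0,
)
est_mean = sum(chain) / len(chain)  # should be close to E[X] = 1
```

A bounded importance ratio, as in this toy example, is exactly the condition under which the paper's sharp total variation bounds give geometric convergence.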
Analysis of the Gibbs sampler for a model related to James-Stein estimators
, 1995
"... this paper we investigate the convergence properties of the Gibbs sampler as applied to a particular hierarchical Bayes model. The model is related to JamesStein estimators (James and Stein, 1961; Efron and Morris, 1973, 1975; Morris, 1983). Briefly, JamesStein estimators may be defined as the mea ..."
Abstract

Cited by 39 (16 self)
In this paper we investigate the convergence properties of the Gibbs sampler as applied to a particular hierarchical Bayes model. The model is related to James-Stein estimators (James and Stein, 1961; Efron and Morris, 1973, 1975; Morris, 1983). Briefly, James-Stein estimators may be defined as the mean of a certain empirical Bayes posterior distribution (as discussed in the next section). We consider the problem of using the Gibbs sampler as a way of sampling from a richer posterior distribution, as suggested by Jun Liu (personal communication). Such a technique would eliminate the need to estimate a certain parameter empirically and to provide a "guess" at another one, and would give additional information about the distribution of the parameters involved. We consider, in particular, the convergence properties of this Gibbs sampler. For a certain range of prior distributions, we establish (Section 3) rigorous, numerical, reasonable rates of convergence. The bounds are obtained using the methods of Rosenthal (1995b). We thus rigorously bound the running time for this Gibbs sampler to converge to the posterior distribution, within a specified accuracy (as measured by total variation distance). We provide a general formula for this bound, which is of reasonable size, in terms of the prior distribution and the data. This Gibbs sampler is perhaps the most complicated example to date for which reasonable quantitative convergence rates have been obtained. We apply our bounds to the numerical baseball data of Efron and Morris (1975) and Morris (1983), based on batting averages of baseball players, and show that approximately 140 iterations are sufficient to achieve convergence in this case. For a different range of prior distributions, we use the Submartingale Convergence Theo...
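The Gibbs sampler analyzed in this abstract alternates draws from full conditional distributions. The hierarchical baseball model itself is involved, but the mechanism can be sketched on a toy target where the conditionals are known in closed form (a bivariate standard normal with correlation rho, an assumption of this sketch, not the paper's model):

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter, seed=0):
    """Two-component Gibbs sampler for a standard bivariate normal with
    correlation rho: each full conditional is N(rho * other, 1 - rho^2)."""
    rng = random.Random(seed)
    cond_sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    xs = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, cond_sd)  # draw x | y
        y = rng.gauss(rho * x, cond_sd)  # draw y | x
        xs.append(x)
    return xs

draws = gibbs_bivariate_normal(rho=0.5, n_iter=20000)
mean_x = sum(d for d in draws) / len(draws)        # marginal mean, near 0
var_x = sum(d * d for d in draws) / len(draws)     # marginal variance, near 1
```

The quantitative bounds in the paper answer, rigorously, the question this sketch only illustrates empirically: how many such alternating updates are needed before the chain's distribution is within a given total variation distance of the posterior.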
Applications of geometric bounds to the convergence rate of Markov chains on R^n
, 2001
"... Quantitative geometric rates of convergence for reversible Markov chains are closely related to the spectral gap of the corresponding operator, which is hard to calculate for general state spaces. This thesis describes a geometric argument to give different types of bounds for spectral gaps of Marko ..."
Abstract

Cited by 10 (1 self)
Quantitative geometric rates of convergence for reversible Markov chains are closely related to the spectral gap of the corresponding operator, which is hard to calculate for general state spaces. This thesis describes a geometric argument to give different types of bounds for spectral gaps of Markov chains on bounded subsets of R^n and to compare the rates of convergence of different Markov chains. We also extend the discrete-time results to homogeneous continuous-time reversible Markov processes. The limit path bounds and the limit Cheeger's bounds are introduced. Two quantitative examples of 1-dimensional diffusions are studied for the limit Cheeger's bounds and an n-dimensional diffusion is studied for the limit path bounds.
Theoretical rates of convergence for Markov chain Monte Carlo
 In Proceedings of Interface '94
, 1994
"... . We present a general method for proving rigorous, a priori bounds on the number of iterations required to achieve convergence of Markov chain Monte Carlo. We describe bounds for specific models of the Gibbs sampler, which have been obtained from the general method. We discuss possibilities for obt ..."
Abstract

Cited by 4 (0 self)
We present a general method for proving rigorous, a priori bounds on the number of iterations required to achieve convergence of Markov chain Monte Carlo. We describe bounds for specific models of the Gibbs sampler, which have been obtained from the general method. We discuss possibilities for obtaining bounds more generally. 1. Introduction. Markov chain Monte Carlo techniques, including the Metropolis-Hastings algorithm (Metropolis et al., 1953; Hastings, 1970), data augmentation (Tanner and Wong, 1986), and the Gibbs sampler (Geman and Geman, 1984; Gelfand and Smith, 1990) have become very popular in recent years as a way of generating a sample from complicated probability distributions (such as posterior distributions in Bayesian inference problems). A fundamental issue regarding such techniques is their convergence properties, specifically whether or not the algorithm will converge to the correct distribution, and if so how quickly. Many general convergence results (e.g. Tierne...
Complexity Bounds for MCMC via Diffusion Limits
, 2014
"... Abstract. We connect known results about diffusion limits of Markov chain Monte Carlo (MCMC) algorithms to the Computer Science notion of algorithm complexity. Our main result states that any diffusion limit of a Markov process implies a corresponding complexity bound (in an appropriate metric). We ..."
Abstract

Cited by 1 (1 self)
We connect known results about diffusion limits of Markov chain Monte Carlo (MCMC) algorithms to the Computer Science notion of algorithm complexity. Our main result states that any diffusion limit of a Markov process implies a corresponding complexity bound (in an appropriate metric). We then combine this result with previously-known MCMC diffusion limit results to prove that under appropriate assumptions, the Random-Walk Metropolis (RWM) algorithm in d dimensions takes O(d) iterations to converge to stationarity, while the Metropolis-Adjusted Langevin Algorithm (MALA) takes O(d^{1/3}) iterations to converge to stationarity. 1. Introduction. In the computer science literature, algorithms are often analysed in terms of "complexity" bounds. In the Markov chain Monte Carlo (MCMC) literature, algorithms are sometimes understood in terms of diffusion limits. The purpose of this note is to connect these two approaches, and in particular to show that diffusion limits sometimes imply complexity
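The diffusion-limit theory behind this abstract also prescribes the optimal RWM proposal scale, 2.38/sqrt(d), at which the asymptotic acceptance rate is 0.234. A minimal empirical sketch on a d-dimensional standard normal target (an illustrative target chosen here, not the paper's setting):

```python
import math
import random

def rwm_acceptance_rate(d, n_iter, seed=0):
    """Random-Walk Metropolis on a d-dimensional standard normal target,
    using the diffusion-limit optimal proposal scale 2.38 / sqrt(d)."""
    rng = random.Random(seed)
    sigma = 2.38 / math.sqrt(d)
    x = [0.0] * d
    log_pi = 0.0  # log-density (up to a constant) is -0.5 * ||x||^2; 0 at the origin
    accepted = 0
    for _ in range(n_iter):
        # Propose a Gaussian random-walk step in every coordinate.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        log_pi_y = -0.5 * sum(yi * yi for yi in y)
        if rng.random() < math.exp(min(0.0, log_pi_y - log_pi)):
            x, log_pi = y, log_pi_y
            accepted += 1
    return accepted / n_iter

# Roughly 0.2-0.3 for moderate d; 0.234 in the d -> infinity diffusion limit.
rate = rwm_acceptance_rate(d=20, n_iter=20000)
```

The O(d) versus O(d^{1/3}) complexity gap stated in the abstract comes from how fast these optimally scaled steps must shrink with dimension: RWM steps scale like d^{-1/2}, while MALA's gradient-informed steps scale like d^{-1/6}.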
An Introduction to Markov Chain Monte Carlo
, 2005
"... Theoretical rates of convergence for ..."