Results 1 – 8 of 8
Componentwise Markov chain Monte Carlo: Uniform and geometric ergodicity under mixing and composition
, 2011
"... Abstract. It is common practice in Markov chain Monte Carlo to update the simulation one variable (or subblock of variables) at a time, rather than conduct a single fulldimensional update. When it is possible to draw from each fullconditional distribution associated with the target this is just a ..."
Abstract

Cited by 11 (7 self)
 Add to MetaCart
Abstract. It is common practice in Markov chain Monte Carlo to update the simulation one variable (or sub-block of variables) at a time, rather than conduct a single full-dimensional update. When it is possible to draw from each full-conditional distribution associated with the target, this is just a Gibbs sampler. Often at least one of the Gibbs updates is replaced with a Metropolis–Hastings step, yielding a Metropolis–Hastings-within-Gibbs algorithm. Strategies for combining componentwise updates include composition, random sequence and random scans. While these strategies can ease MCMC implementation and produce superior empirical performance compared to full-dimensional updates, the theoretical convergence properties of the associated Markov chains have received limited attention. We present conditions under which some componentwise Markov chains converge to the stationary distribution at a geometric rate. We pay particular attention to the connections between the convergence rates of the various componentwise strategies. This is important since it ensures the existence of tools that an MCMC practitioner can use to be as confident in the simulation results as if they were based on independent and identically distributed samples. We illustrate our results in two examples: a hierarchical linear mixed model and one involving maximum likelihood estimation for mixed models.
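The componentwise updating described in this abstract can be illustrated with a minimal sketch, not taken from the paper itself: a deterministic-scan (composition) Gibbs sampler for a bivariate normal target with correlation rho, where both full conditionals are available in closed form. All names here are illustrative.

```python
# Minimal sketch of a deterministic-scan (composition) Gibbs sampler for a
# bivariate normal target with correlation rho. The full conditionals are
#   X | Y = y ~ N(rho*y, 1 - rho^2),   Y | X = x ~ N(rho*x, 1 - rho^2),
# so each sweep updates one coordinate at a time from its full conditional,
# rather than performing a single full-dimensional update.
import math
import random

def gibbs_bivariate_normal(rho, n_iter, seed=0):
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)   # conditional standard deviation
    x, y = 0.0, 0.0                   # arbitrary starting point
    draws = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, sd)    # update x given current y
        y = rng.gauss(rho * x, sd)    # then y given the new x: one sweep
        draws.append((x, y))
    return draws
```

In this toy target the chain is geometrically ergodic, which is the kind of property the paper's conditions are designed to verify in far less tractable models.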
On reparametrization and the Gibbs sampler
, 2013
"... Gibbs samplers derived under different parametrizations of the target density can have radically different rates of convergence. In this article, we specify conditions under which reparametrization leaves the convergence rate of a Gibbs chain unchanged. An example illustrates how these results can ..."
Abstract

Cited by 3 (3 self)
 Add to MetaCart
Gibbs samplers derived under different parametrizations of the target density can have radically different rates of convergence. In this article, we specify conditions under which reparametrization leaves the convergence rate of a Gibbs chain unchanged. An example illustrates how these results can be exploited in convergence rate analyses.
Geometric Ergodicity of Gibbs Samplers for Bayesian General Linear Mixed Models with Proper Priors
, 2013
"... When a Bayesian version of the general linear mixed model is created by adopting a conditionally conjugate prior distribution, a simple block Gibbs sampler can be employed to explore the resulting intractable posterior density. In this article it is shown that, under mild conditions that nearly alw ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
When a Bayesian version of the general linear mixed model is created by adopting a conditionally conjugate prior distribution, a simple block Gibbs sampler can be employed to explore the resulting intractable posterior density. In this article it is shown that, under mild conditions that nearly always hold in practice, the block Gibbs Markov chain is geometrically ergodic.
On the Geometric Ergodicity of Two-variable Gibbs Samplers
"... Abstract A Markov chain is geometrically ergodic if it converges to its invariant distribution at a geometric rate in total variation norm. We study geometric ergodicity of deterministic and random scan versions of the twovariable Gibbs sampler. We give a sufficient condition which simultaneously ..."
Abstract
 Add to MetaCart
Abstract. A Markov chain is geometrically ergodic if it converges to its invariant distribution at a geometric rate in total variation norm. We study geometric ergodicity of deterministic and random scan versions of the two-variable Gibbs sampler. We give a sufficient condition which simultaneously guarantees both versions are geometrically ergodic. We also develop a method for simultaneously establishing that both versions are subgeometrically ergodic. These general results allow us to characterize the convergence rate of two-variable Gibbs samplers in a particular family of discrete bivariate distributions.
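The random-scan version studied in this abstract differs from the deterministic scan only in how coordinates are selected. A minimal sketch, again for an illustrative bivariate normal target rather than the paper's discrete family:

```python
# Minimal sketch of a random-scan two-variable Gibbs sampler. Unlike the
# deterministic scan, each step updates exactly one coordinate, chosen
# uniformly at random. Target: bivariate normal with correlation rho, with
# full conditionals X | Y = y ~ N(rho*y, 1 - rho^2) and symmetrically for Y.
import math
import random

def random_scan_gibbs(rho, n_iter, seed=0):
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x, y = 0.0, 0.0
    draws = []
    for _ in range(n_iter):
        if rng.random() < 0.5:            # pick a coordinate uniformly
            x = rng.gauss(rho * y, sd)
        else:
            y = rng.gauss(rho * x, sd)
        draws.append((x, y))
    return draws
```

Results like the paper's sufficient condition are useful precisely because they certify both scan orders at once, instead of requiring a separate drift analysis for each.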
Convergence Analysis of the Data Augmentation Algorithm for Bayesian Linear Regression with Non-Gaussian Errors
, 2015
"... The errors in a standard linear regression model are iid with common density 1σφ ε ..."
Abstract
 Add to MetaCart
The errors in a standard linear regression model are iid with common density (1/σ)φ(ε/σ).
Convergence Analysis of Block Gibbs Samplers for Bayesian Linear Mixed Models with p > N
, 2015
"... Exploration of the intractable posterior distributions associated with Bayesian versions of the general linear mixed model is often performed using Markov chain Monte Carlo. In particular, if a conditionally conjugate prior is used, then there is a simple twoblock Gibbs sampler available. Roman &a ..."
Abstract
 Add to MetaCart
Exploration of the intractable posterior distributions associated with Bayesian versions of the general linear mixed model is often performed using Markov chain Monte Carlo. In particular, if a conditionally conjugate prior is used, then there is a simple two-block Gibbs sampler available. Roman & Hobert (2015) showed that, when the priors are proper and the X matrix has full column rank, the Markov chains underlying these Gibbs samplers are nearly always geometrically ergodic. In this paper, Roman & Hobert's (2015) result is extended by allowing improper priors on the variance components, and, more importantly, by removing all assumptions on the X matrix. So, not only is X allowed to be (column) rank deficient, which provides additional flexibility in parameterizing the fixed effects, it is also allowed to have more columns than rows, which is necessary in the increasingly important situation where p > N. The full rank assumption on X is at the heart of Roman & Hobert's (2015) proof. Consequently, the extension to unrestricted X requires a substantially different analysis. Key words and phrases: Conditionally conjugate prior; Convergence rate; Geometric drift condition; Markov chain;
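The two-block updating scheme described in this abstract can be sketched on a much simpler conjugate model than the paper's linear mixed model. This is a toy illustration under assumed priors (a normal prior on the mean and an inverse-gamma prior on the variance), not the paper's actual sampler:

```python
# Minimal two-block Gibbs sketch for a toy conjugate model (NOT the paper's
# linear mixed model): y_i ~ N(mu, sigma2), with priors mu ~ N(0, tau2) and
# sigma2 ~ Inverse-Gamma(a, b). Each sweep draws one whole block from its
# full conditional given the current value of the other block.
import random

def two_block_gibbs(y, tau2=100.0, a=2.0, b=2.0, n_iter=2000, seed=0):
    rng = random.Random(seed)
    n, ybar = len(y), sum(y) / len(y)
    mu, sigma2 = ybar, 1.0                 # starting values
    draws = []
    for _ in range(n_iter):
        # Block 1: mu | sigma2, y ~ N(m, v)
        v = 1.0 / (n / sigma2 + 1.0 / tau2)
        m = v * (n * ybar / sigma2)
        mu = rng.gauss(m, v ** 0.5)
        # Block 2: sigma2 | mu, y ~ Inverse-Gamma(a + n/2, b + SS/2);
        # drawn as the reciprocal of a Gamma variate (gammavariate takes a
        # scale parameter, so rate r becomes scale 1/r).
        ss = sum((yi - mu) ** 2 for yi in y)
        sigma2 = 1.0 / rng.gammavariate(a + n / 2.0, 1.0 / (b + ss / 2.0))
        draws.append((mu, sigma2))
    return draws
```

In the paper's setting the blocks are the fixed/random effects and the variance components, and the analytical difficulty lies in establishing a geometric drift condition for such chains when X is rank deficient.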