Results 1–9 of 9
Componentwise Markov chain Monte Carlo: Uniform and geometric ergodicity under mixing and composition
, 2011
Abstract

Cited by 11 (7 self)
Abstract. It is common practice in Markov chain Monte Carlo to update the simulation one variable (or sub-block of variables) at a time, rather than conduct a single full-dimensional update. When it is possible to draw from each full-conditional distribution associated with the target, this is just a Gibbs sampler. Often at least one of the Gibbs updates is replaced with a Metropolis–Hastings step, yielding a Metropolis–Hastings-within-Gibbs algorithm. Strategies for combining componentwise updates include composition, random sequences and random scans. While these strategies can ease MCMC implementation and produce superior empirical performance compared to full-dimensional updates, the theoretical convergence properties of the associated Markov chains have received limited attention. We present conditions under which some componentwise Markov chains converge to the stationary distribution at a geometric rate. We pay particular attention to the connections between the convergence rates of the various componentwise strategies. This is important since it ensures the existence of tools that an MCMC practitioner can use to be as confident in the simulation results as if they were based on independent and identically distributed samples. We illustrate our results in two examples: one involving a hierarchical linear mixed model and one involving maximum likelihood estimation for mixed models.
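The componentwise strategies described in this abstract can be illustrated with a generic sketch (not taken from the paper): a random-scan Metropolis-within-Gibbs sampler in which each iteration perturbs one randomly chosen coordinate with a normal random-walk proposal. The bivariate normal target and the proposal scale below are assumptions made purely for illustration.

```python
import numpy as np

def metropolis_within_gibbs(log_target, x0, n_iter, scale=0.5, rng=None):
    """Random-scan Metropolis-within-Gibbs: each iteration updates one
    randomly chosen coordinate via a normal random-walk Metropolis step."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    chain = np.empty((n_iter, x.size))
    lp = log_target(x)
    for t in range(n_iter):
        i = rng.integers(x.size)                 # random scan: pick a coordinate
        prop = x.copy()
        prop[i] += scale * rng.standard_normal() # perturb only that coordinate
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[t] = x
    return chain

# Toy target (an assumption): bivariate normal with correlation 0.8.
cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
log_target = lambda x: -0.5 * x @ cov_inv @ x
chain = metropolis_within_gibbs(log_target, x0=[0.0, 0.0], n_iter=5000, rng=1)
```

A composition (deterministic-scan) variant would instead loop `for i in range(x.size)` within each iteration; the paper's point is that the convergence rates of these strategies are closely connected.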
Exponential concentration inequalities for additive functionals of Markov chains
, 1201
CONVERGENCE ANALYSIS OF THE GIBBS SAMPLER FOR BAYESIAN GENERAL LINEAR MIXED MODELS WITH IMPROPER PRIORS
Abstract

Cited by 8 (6 self)
Bayesian analysis of data from the general linear mixed model is challenging because any nontrivial prior leads to an intractable posterior density. However, if a conditionally conjugate prior density is adopted, then there is a simple Gibbs sampler that can be employed to explore the posterior density. A popular default among the conditionally conjugate priors is an improper prior that takes a product form with a flat prior on the regression parameter, and so-called power priors on each of the variance components. In this paper, a convergence rate analysis of the corresponding Gibbs sampler is undertaken. The main result is a simple, easily checked sufficient condition for geometric ergodicity of the Gibbs–Markov chain. This result is close to the best possible result in the sense that the sufficient condition is only slightly stronger than what is required to ensure posterior propriety. The theory developed in this paper is extremely important from a practical standpoint because it guarantees the existence of central limit theorems that allow for the computation of valid asymptotic standard errors for the estimates computed using the Gibbs sampler.
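As a much-simplified stand-in for the mixed-model setting (not the paper's actual sampler), the sketch below runs a two-component Gibbs sampler for a normal model with a flat prior on the mean and a power prior p(σ²) ∝ σ⁻² on the variance; both full conditionals are then available in closed form, which is the situation the abstract describes. The simulated data are an assumption for illustration.

```python
import numpy as np

def gibbs_normal(y, n_iter, rng=None):
    """Gibbs sampler for y_i ~ N(mu, sigma2) with flat prior on mu and
    power prior p(sigma2) ∝ 1/sigma2.  Full conditionals:
      mu | sigma2, y    ~ N(ybar, sigma2 / n)
      sigma2 | mu, y    ~ Inverse-Gamma(n/2, sum((y - mu)^2) / 2)."""
    rng = np.random.default_rng(rng)
    n, ybar = len(y), np.mean(y)
    mu, sigma2 = ybar, np.var(y)          # initial values
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        mu = rng.normal(ybar, np.sqrt(sigma2 / n))   # draw mu full conditional
        rate = 0.5 * np.sum((y - mu) ** 2)
        sigma2 = rate / rng.gamma(n / 2.0)           # inverse-gamma via 1/gamma
        draws[t] = mu, sigma2
    return draws

y = np.random.default_rng(0).normal(2.0, 1.0, size=50)  # simulated data
draws = gibbs_normal(y, n_iter=2000, rng=1)
```

The papers listed here concern the harder question of *proving* that such chains are geometrically ergodic, which is what licenses Markov chain central limit theorems and valid asymptotic standard errors.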
Geometric Ergodicity of Gibbs Samplers for Bayesian General Linear Mixed Models with Proper Priors
, 2013
Abstract

Cited by 1 (1 self)
When a Bayesian version of the general linear mixed model is created by adopting a conditionally conjugate prior distribution, a simple block Gibbs sampler can be employed to explore the resulting intractable posterior density. In this article it is shown that, under mild conditions that nearly always hold in practice, the block Gibbs Markov chain is geometrically ergodic.
Convergence Analysis of Block Gibbs Samplers for Bayesian Linear Mixed Models with p > N
, 2015
Abstract
Exploration of the intractable posterior distributions associated with Bayesian versions of the general linear mixed model is often performed using Markov chain Monte Carlo. In particular, if a conditionally conjugate prior is used, then there is a simple two-block Gibbs sampler available. Roman & Hobert (2015) showed that, when the priors are proper and the X matrix has full column rank, the Markov chains underlying these Gibbs samplers are nearly always geometrically ergodic. In this paper, Roman & Hobert's (2015) result is extended by allowing improper priors on the variance components, and, more importantly, by removing all assumptions on the X matrix. So, not only is X allowed to be (column) rank deficient, which provides additional flexibility in parameterizing the fixed effects, it is also allowed to have more columns than rows, which is necessary in the increasingly important situation where p > N. The full rank assumption on X is at the heart of Roman & Hobert's (2015) proof. Consequently, the extension to unrestricted X requires a substantially different analysis. Key words and phrases. Conditionally conjugate prior; Convergence rate; Geometric drift condition; Markov chain;
CONVERGENCE OF CONDITIONAL METROPOLIS–HASTINGS SAMPLERS
© Applied Probability Trust 2014
Abstract
We consider Markov chain Monte Carlo algorithms which combine Gibbs updates with Metropolis–Hastings updates, resulting in a conditional Metropolis–Hastings sampler (CMH sampler). We develop conditions under which the CMH sampler will be geometrically or uniformly ergodic. We illustrate our results by analysing a CMH sampler used for drawing Bayesian inferences about the entire sample path of a diffusion process, based only upon discrete observations. Keywords: Markov chain Monte Carlo algorithm; independence sampler; Gibbs sampler; geometric ergodicity; convergence rate