Results 1 – 5 of 5
Variance Bounding Markov Chains
, 2008
Abstract

Cited by 13 (2 self)
We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all L² functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis–Hastings algorithms.
A Spectral Analytic Comparison of Trace-class Data Augmentation Algorithms and their Sandwich Variants
, 2010
Abstract

Cited by 12 (11 self)
Let f_X(x) be an intractable probability density. If f(x, y) is a joint density whose x-marginal is f_X(x), then f(x, y) can be used to build a data augmentation (DA) algorithm that simulates a Markov chain whose invariant density is f_X(x). The move from the current state of the chain, X_n = x, to the new state, X_{n+1}, involves two simulation steps: Draw Y ∼ f_{Y|X}(· | x), call the result y, and then draw X_{n+1} ∼ f_{X|Y}(· | y). The sandwich algorithm is a variant that involves an extra step "sandwiched" between the two conditional draws. Let R(y, dy′) be any Markov transition function that is reversible with respect to the y-marginal, f_Y(y). The extra step entails drawing Y′ ∼ R(y, ·), and then using this draw, call it y′, in place of y in the second step. In this paper, the DA and sandwich algorithms are compared in the case where the joint density, f(x, y), satisfies ∫_X ∫_Y f_{X|Y}(x | y) f_{Y|X}(y | x) dy dx < ∞. This condition implies that the (positive) Markov operator associated with the DA Markov chain is trace-class. It is shown that, without any further assumptions, the sandwich algorithm always converges at least …
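The two conditional draws, plus the optional sandwich step, can be sketched on a toy target that is not from the paper: a bivariate normal with correlation rho, whose x-marginal is N(0, 1). Here the sandwich step R is taken to be the sign flip y′ = −y, which is reversible with respect to the symmetric y-marginal N(0, 1); all of these choices are illustrative assumptions.

```python
import math
import random

def da_step(x, rho, sandwich=False):
    """One move of the DA chain whose invariant density is the N(0, 1)
    x-marginal of a bivariate normal with correlation rho (toy example).

    With sandwich=True, a sign flip -- reversible w.r.t. f_Y = N(0, 1) --
    is inserted between the two conditional draws.
    """
    s = math.sqrt(1.0 - rho * rho)
    y = random.gauss(rho * x, s)       # Step 1: Y ~ f_{Y|X}(. | x)
    if sandwich:
        y = -y                         # extra step: Y' ~ R(y, .)
    return random.gauss(rho * y, s)    # Step 2: X_{n+1} ~ f_{X|Y}(. | y)
```

Running either variant for many iterations and averaging the draws gives a sample mean near 0, the mean of the invariant x-marginal.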
On Monte Carlo methods for Bayesian multivariate regression models with heavy-tailed errors
 Journal of Multivariate Analysis
Abstract

Cited by 7 (3 self)
We consider Bayesian analysis of data from multivariate linear regression models whose errors have a distribution that is a scale mixture of normals. Such models are used to analyze data on financial returns, which are notoriously heavy-tailed. Let π denote the intractable posterior density that results when this regression model is combined with the standard non-informative prior on the unknown regression coefficients and scale matrix of the errors. Roughly speaking, the posterior is proper if and only if n ≥ d + k, where n is the sample size, d is the dimension of the response, and k is the number of covariates. We provide a method of making exact draws from π in the special case where n = d + k, and we study Markov chain Monte Carlo (MCMC) algorithms that can be used to explore π when n > d + k. In particular, we show how the Haar PX-DA technology studied in Hobert and Marchev (2008) can be used to improve upon Liu's (1996) data augmentation (DA) algorithm. Indeed, the new algorithm that we introduce is theoretically superior to the DA algorithm, yet equivalent to DA in terms of computational complexity. Moreover, we analyze the convergence rates of these MCMC algorithms in the important special case where the regression errors have a Student's t distribution. We prove that, under conditions on n, d, k, and the degrees of freedom of the t distribution, both algorithms converge at a geometric rate. These convergence rate results are important from a practical standpoint because geometric ergodicity guarantees the existence of central limit theorems, which are essential for the calculation of valid asymptotic standard errors for MCMC-based estimates.
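The scale-mixture-of-normals representation that drives such samplers can be illustrated with a minimal DA chain whose invariant density is a univariate Student's t. This is the standard construction (X | Q = q ∼ N(0, 1/q) with Q ∼ Gamma(ν/2, rate ν/2), so Q | X = x ∼ Gamma((ν+1)/2, rate (ν+x²)/2)), not the paper's multivariate regression algorithm; the chain below is an assumed toy sketch.

```python
import random

def t_da_step(x, nu):
    """One DA move targeting a Student's t density with nu degrees of
    freedom, via the normal scale-mixture representation (toy sketch).
    """
    # Latent precision: Q | X=x ~ Gamma((nu+1)/2, rate=(nu+x^2)/2).
    # random.gammavariate takes (shape, scale), so scale = 2/(nu + x^2).
    q = random.gammavariate((nu + 1) / 2.0, 2.0 / (nu + x * x))
    # New state: X' | Q=q ~ N(0, 1/q).
    return random.gauss(0.0, (1.0 / q) ** 0.5)
```

A long run with nu = 5 yields draws whose sample mean is near 0 and whose sample variance is near ν/(ν − 2) = 5/3, the variance of the t distribution with 5 degrees of freedom.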
Sandwich Algorithms for Bayesian Variable Selection
Abstract
NOTICE: This is the author's version of a work that was accepted for publication in Computational Statistics and Data Analysis. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright © 2014, Elsevier. This manuscript version is made available under the CC BY-NC-ND 4.0 license.
Discussion of Yu & Meng’s Paper
, 2010
Abstract
We begin by congratulating Professors Yu and Meng on an outstanding paper, and thanking Professor Levine for giving us the opportunity to discuss their work. Our discussion focuses mainly on the GIS & ASIS algorithms. Section 1 concerns the relationship between the GIS and sandwich algorithms. In Section 2, we consider a family of toy GIS algorithms based on the bivariate normal distribution, and show how this family is related to the toy example in Section 2 of Yu & Meng (2011) (hereafter Y&M). Finally, in Section 3, we provide a simple example of a non-reversible GIS algorithm.

1. {DA algorithms} ⊂ {sandwich algorithms} ⊂ {GIS algorithms}

Let f_X: X → [0, ∞) be an intractable target density, and suppose that f: X × Y → [0, ∞) is a joint density whose x-marginal is the target; i.e., ∫_Y f(x, y) dy = f_X(x). If straightforward sampling from the associated conditional densities is possible, then we can use the data augmentation (DA) algorithm to explore f_X. Of course, running the algorithm entails alternating between draws from f_{Y|X} and f_{X|Y}, which simulates the Markov chain whose Markov transition density (Mtd) is

k_DA(x′ | x) = ∫_Y f_{X|Y}(x′ | y) f_{Y|X}(y | x) dy.

If we denote the DA Markov chain by {X_n}_{n=0}^∞, then k_DA(· | x) is simply the conditional density of X_{n+1} given that X_n = x. It is easy to see that k_DA(x′ | x) f_X(x) is symmetric in (x, x′), so the DA Markov chain is reversible. We assume throughout that all Markov chains on the target space, X, satisfy the usual regularity conditions: Harris recurrence, irreducibility and aperiodicity.
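The reversibility claim can be checked numerically on a small discrete joint density (an assumed 3×3 toy, not an example from the discussion): there, k_DA(x′ | x) f_X(x) = Σ_y f(x′, y) f(x, y) / f_Y(y), which is visibly symmetric in (x, x′).

```python
# Assumed toy joint density f(x, y) on a 3x3 grid (entries sum to 1).
f = [[0.10, 0.20, 0.05],
     [0.15, 0.05, 0.10],
     [0.05, 0.10, 0.20]]
fX = [sum(row) for row in f]                              # x-marginal
fY = [sum(f[x][y] for x in range(3)) for y in range(3)]   # y-marginal

def k_da(x_new, x):
    """Discrete Mtd: k_DA(x'|x) = sum_y f_{X|Y}(x'|y) f_{Y|X}(y|x)."""
    return sum((f[x_new][y] / fY[y]) * (f[x][y] / fX[x]) for y in range(3))

# Detailed balance: k_DA(x'|x) f_X(x) == k_DA(x|x') f_X(x') for all pairs.
for x in range(3):
    assert abs(sum(k_da(x2, x) for x2 in range(3)) - 1.0) < 1e-12
    for x2 in range(3):
        assert abs(k_da(x2, x) * fX[x] - k_da(x, x2) * fX[x2]) < 1e-12
```

The row-sum assertion confirms each k_DA(· | x) is a probability density, and the pairwise assertion is exactly the symmetry of k_DA(x′ | x) f_X(x) noted in the text.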