Results 1–9 of 9
A Spectral Analytic Comparison of Trace-Class Data Augmentation Algorithms and their Sandwich Variants
, 2010
"... Let fX(x) be an intractable probability density. If f(x, y) is a joint density whose xmarginal is fX(x), then f(x, y) can be used to build a data augmentation (DA) algorithm that simulates a Markov chain whose invariant density is fX(x). The move from the current state of the chain, Xn = x, to the ..."
Abstract

Cited by 12 (11 self)
Let fX(x) be an intractable probability density. If f(x, y) is a joint density whose x-marginal is fX(x), then f(x, y) can be used to build a data augmentation (DA) algorithm that simulates a Markov chain whose invariant density is fX(x). The move from the current state of the chain, Xn = x, to the new state, Xn+1, involves two simulation steps: draw Y ∼ fY|X(·|x), call the result y, and then draw Xn+1 ∼ fX|Y(·|y). The sandwich algorithm is a variant that involves an extra step “sandwiched” between the two conditional draws. Let R(y, dy′) be any Markov transition function that is reversible with respect to the y-marginal, fY(y). The extra step entails drawing Y′ ∼ R(y, ·), and then using this draw, call it y′, in place of y in the second step. In this paper, the DA and sandwich algorithms are compared in the case where the joint density, f(x, y), satisfies ∫X ∫Y fX|Y(x|y) fY|X(y|x) dy dx < ∞. This condition implies that the (positive) Markov operator associated with the DA Markov chain is trace-class. It is shown that, without any further assumptions, the sandwich algorithm always converges at least as fast as the DA algorithm.
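The two conditional draws and the sandwiched step described above can be sketched concretely on a toy target. The example below takes fX to be a Student-t density via the standard normal–gamma scale mixture, and uses one random-walk Metropolis move targeting fY as the reversible kernel R; the mixture representation, proposal scale, and chain length are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
nu = 5.0  # degrees of freedom: f_X is the Student-t density t_nu

def log_fy(y):
    # y-marginal f_Y is Gamma(nu/2, rate nu/2), written up to a constant
    return (nu / 2 - 1) * np.log(y) - (nu / 2) * y

def sandwich_step(x):
    # Step 1: Y ~ f_{Y|X}(.|x) = Gamma((nu + 1)/2, rate (nu + x^2)/2)
    y = rng.gamma((nu + 1) / 2, 2.0 / (nu + x * x))
    # Extra step: one random-walk Metropolis move targeting f_Y;
    # a Metropolis kernel is reversible with respect to its target,
    # so this is a valid choice of R(y, dy')
    prop = y + rng.normal(0.0, 0.5)
    if prop > 0 and np.log(rng.uniform()) < log_fy(prop) - log_fy(y):
        y = prop
    # Step 2: X_{n+1} ~ f_{X|Y}(.|y) = N(0, 1/y)
    return rng.normal(0.0, 1.0 / np.sqrt(y))

x, xs = 0.0, []
for _ in range(5000):
    x = sandwich_step(x)
    xs.append(x)
# the draws approximate the t_nu target; deleting the "extra step"
# lines recovers the plain DA algorithm
```

Deleting the Metropolis move recovers the two-step DA chain, which makes the pair convenient for side-by-side autocorrelation comparisons.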
Analysis of MCMC algorithms for Bayesian linear regression with Laplace errors
, 2013
"... Let π denote the intractable posterior density that results when the standard default prior is placed on the parameters in a linear regression model with iid Laplace errors. We analyze the Markov chains underlying two different Markov chain Monte Carlo algorithms for exploring π. In particular, it i ..."
Abstract

Cited by 4 (1 self)
Let π denote the intractable posterior density that results when the standard default prior is placed on the parameters in a linear regression model with iid Laplace errors. We analyze the Markov chains underlying two different Markov chain Monte Carlo algorithms for exploring π. In particular, it is shown that the Markov operators associated with the data augmentation (DA) algorithm and a sandwich variant are both trace-class. Consequently, both Markov chains are geometrically ergodic. It is also established that for each i ∈ {1, 2, 3, ...}, the ith largest eigenvalue of the sandwich operator is less than or equal to the corresponding eigenvalue of the DA operator. It follows that the sandwich algorithm converges at least as fast as the DA algorithm. AMS 2000 subject classifications. Primary 60J27; secondary 62F15. Abbreviated title. MCMC algorithms for Bayesian linear regression.
Spectral properties of MCMC algorithms for Bayesian linear regression with generalized hyperbolic errors
, 2014
"... We study MCMC algorithms for Bayesian analysis of a linear regression model with generalized hyperbolic errors. The Markov operators associated with the standard data augmentation algorithm and a sandwich variant of that algorithm are shown to be traceclass. ..."
Abstract

Cited by 1 (1 self)
We study MCMC algorithms for Bayesian analysis of a linear regression model with generalized hyperbolic errors. The Markov operators associated with the standard data augmentation algorithm and a sandwich variant of that algorithm are shown to be trace-class.
Spectral Bounds for Certain Two-Factor Non-Reversible MCMC Algorithms
"... Abstract We prove that the Markov operator corresponding to the twovariable, nonreversible Gibbs sampler has spectrum which is entirely real and nonnegative, thus providing a first step towards the spectral analysis of MCMC algorithms in the nonreversible case. We also provide an extension to M ..."
Abstract
We prove that the Markov operator corresponding to the two-variable, non-reversible Gibbs sampler has spectrum which is entirely real and non-negative, thus providing a first step towards the spectral analysis of MCMC algorithms in the non-reversible case. We also provide an extension to Metropolis–Hastings components, and connect the spectrum of an algorithm to the spectrum of its marginal chain.
Disease Diagnosis from Immunoassays with Plate to Plate Variability: A Hierarchical Bayesian Approach
, 2014
"... Abstract The standard methods of diagnosing disease based on antibody microtiter plates are quite crude. Fewmethods create a rigorous underlyingmodel for the antibody levels of populations consisting of a mixture of positive and negative subjects, and fewer make full use of the entirety of the avail ..."
Abstract
The standard methods of diagnosing disease based on antibody microtiter plates are quite crude. Few methods create a rigorous underlying model for the antibody levels of populations consisting of a mixture of positive and negative subjects, and fewer make full use of the entirety of the available data for diagnoses.
Trace class Markov chains for Bayesian inference with generalized double Pareto shrinkage priors
"... Abstract: Bayesian shrinkage methods have generated a lot of interest in recent years, especially in the context of highdimensional linear regression. Armagan, Dunson and Lee (2013) propose a Bayesian shrinkage approach using generalized double Pareto priors. They establish several useful propert ..."
Abstract
Bayesian shrinkage methods have generated a lot of interest in recent years, especially in the context of high-dimensional linear regression. Armagan, Dunson and Lee (2013) propose a Bayesian shrinkage approach using generalized double Pareto priors. They establish several useful properties of this approach, including the derivation of a tractable three-block Gibbs sampler to sample from the resulting posterior density. We show that the Markov operator corresponding to this three-block Gibbs sampler is not Hilbert-Schmidt. We propose a simpler two-block Gibbs sampler, and show that the corresponding Markov operator is trace class (and hence Hilbert-Schmidt). Establishing the trace class property for the proposed two-block Gibbs sampler has several useful consequences. Firstly, it implies that the corresponding Markov chain is geometrically ergodic, thereby implying the existence of a Markov chain CLT, which in turn enables computation of asymptotic standard errors for Markov chain based estimates of posterior quantities. Secondly, since the proposed Gibbs sampler uses two blocks, standard recipes in the literature can be used to construct a sandwich Markov chain (by inserting an appropriate extra step) to gain further efficiency and to achieve faster convergence. The trace class property for the two-block sampler implies that the corresponding sandwich Markov chain is also trace class and thereby geometrically ergodic. Finally, it also guarantees that the sandwich Markov chain is strictly better than the Gibbs sampling chain in the sense that all eigenvalues of the sandwich chain are dominated by the corresponding eigenvalues of the Gibbs sampling chain (with at least one strict domination). Our results demonstrate that a minor change in the structure of a Markov chain can lead to fundamental changes in its theoretical properties. We illustrate the improvement in efficiency and convergence resulting from our proposed Markov chains using simulated and real examples.
DISEASE DIAGNOSIS FROM IMMUNOASSAYS WITH PLATE TO PLATE VARIABILITY
"... The standard methods of diagnosing disease based on antibody microtiter plates are quite crude. Few methods create a rigorous underlying model for the antibody levels of populations consisting of a mixture of positive and negative subjects, and fewer make full use of the entirety of the available da ..."
Abstract
The standard methods of diagnosing disease based on antibody microtiter plates are quite crude. Few methods create a rigorous underlying model for the antibody levels of populations consisting of a mixture of positive and negative subjects, and fewer make full use of the entirety of the available data for diagnoses. In this paper, we propose a Bayesian hierarchical model that provides a systematic way of pooling data across different plates, and accounts for the subtle sources of variations that occur in the optical densities of typical microtiter data. In addition to our Bayesian method having good frequentist properties, we find that our method outperforms one of the standard crude approaches (the "3SD Rule") under reasonable assumptions,
Improving the Data Augmentation algorithm in the two-block setup
"... The Data Augmentation (DA) approach to approximate sampling from an intractable probability density fX is based on the construction of a joint density, fX,Y, whose conditional densities, fXY and fY X, can be straightforwardly sampled. However, many applications of the DA algorithm do not fall in ..."
Abstract
The Data Augmentation (DA) approach to approximate sampling from an intractable probability density fX is based on the construction of a joint density, fX,Y, whose conditional densities, fX|Y and fY|X, can be straightforwardly sampled. However, many applications of the DA algorithm do not fall in this “single-block” setup. In these applications, X is partitioned into two components, X = (U, V), in such a way that it is easy to sample from fY|X, fU|V,Y and fV|U,Y. We refer to this alternative version of DA, which is effectively a three-variable Gibbs sampler, as “two-block” DA. We develop two methods to improve the performance of the DA algorithm in the two-block setup. These methods are motivated by the Haar PX-DA algorithm, which has been developed in previous literature to improve the performance of the single-block DA algorithm. The Haar PX-DA algorithm, which adds a computationally inexpensive extra step in each iteration of the DA algorithm while preserving the stationary density, has been shown to be optimal among similar techniques. However, as we illustrate, the Haar PX-DA algorithm does not lead to the required stationary density fX in the two-block setup. Our methods incorporate suitable generalizations and modifications to this approach, and work in the two-block setup. A theoretical comparison of our methods to the two-block DA algorithm, a much harder task than the single-block setup due to non-reversibility and structural complexities, is provided. We successfully apply our methods to applications of the two-block DA algorithm in Bayesian robit regression and Bayesian quantile regression.
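The two-block cycle fY|X, fU|V,Y, fV|U,Y can be sketched on a toy target where every full conditional is available in closed form: an equicorrelated trivariate normal for (U, V, Y). The target, correlation, and chain length below are illustrative assumptions, not examples from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical toy joint for (U, V, Y): equicorrelated trivariate normal
rho = 0.5
Sigma = np.array([[1.0, rho, rho],
                  [rho, 1.0, rho],
                  [rho, rho, 1.0]])

def cond_draw(i, others_idx, others_val):
    # Draw component i of N(0, Sigma) given the other two components,
    # using the standard Gaussian conditioning formulas.
    S12 = Sigma[i, others_idx]
    S22 = Sigma[np.ix_(others_idx, others_idx)]
    w = np.linalg.solve(S22, S12)
    mean = w @ others_val
    var = Sigma[i, i] - w @ S12
    return rng.normal(mean, np.sqrt(var))

u, v, y = 0.0, 0.0, 0.0
draws = []
for _ in range(4000):
    y = cond_draw(2, [0, 1], np.array([u, v]))  # Y ~ f_{Y|U,V} = f_{Y|X}
    u = cond_draw(0, [1, 2], np.array([v, y]))  # U ~ f_{U|V,Y}
    v = cond_draw(1, [0, 2], np.array([u, y]))  # V ~ f_{V|U,Y}
    draws.append((u, v))
draws = np.array(draws)  # approximate draws from the (U, V) marginal f_X
```

This three-conditional cycle is exactly the "effectively a three-variable Gibbs sampler" structure the abstract refers to; the improvement methods of the paper would insert additional inexpensive steps into this loop.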