Results 11–20 of 67
Estimation of Bayes Factors in a Class of Hierarchical Random Effects Models using a Geometrically Ergodic MCMC Algorithm
Abstract

Cited by 4 (0 self)
We consider a Bayesian random effects model that is commonly used in meta-analysis, in which the random effects have a t distribution, with degrees of freedom parameter to be estimated. We develop a Markov chain Monte Carlo algorithm for estimating the posterior distribution in this model, and establish geometric convergence of the algorithm. The geometric convergence rate has important theoretical and practical ramifications. Indeed, it implies that, under standard second moment conditions, the ergodic averages used to estimate posterior quantities of interest satisfy central limit theorems. Moreover, it guarantees the consistency of a batch means estimate of the asymptotic variance in the CLT, which in turn allows for the construction of asymptotically valid standard errors. We show how our Markov chain can be used, in conjunction with an importance sampling method, to carry out an empirical Bayes approach for estimating the degrees of freedom parameter. To illustrate our methodology we consider a meta-analysis of studies that link intake of non-steroidal anti-inflammatory drugs to a reduction in colon cancer risk, in which some of the studies are outliers. To model the distribution of the study effects we consider the family of t distributions, as well as a family ...
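The batch means construction mentioned in this abstract can be sketched in a few lines. This is an illustrative toy only, not the paper's model: a simple AR(1) chain stands in for geometrically ergodic MCMC output, and the square-root-of-n batch size is an assumed (common) choice.

```python
import math
import random

random.seed(7)

# Toy chain standing in for MCMC output (an assumption for illustration).
n = 20_000
chain = [0.0]
for _ in range(n - 1):
    chain.append(0.5 * chain[-1] + random.gauss(0.0, 1.0))

def batch_means_se(x):
    """Standard error of the ergodic average via non-overlapping batch means."""
    n = len(x)
    b = math.isqrt(n)                # batch size on the order of sqrt(n)
    a = n // b                       # number of batches
    means = [sum(x[i * b:(i + 1) * b]) / b for i in range(a)]
    mbar = sum(means) / a
    var_hat = b * sum((m - mbar) ** 2 for m in means) / (a - 1)
    return math.sqrt(var_hat / n)    # estimated asymptotic variance over n

est = sum(chain) / n                 # ergodic average
se = batch_means_se(chain)
lo, hi = est - 1.96 * se, est + 1.96 * se   # asymptotically valid 95% interval
```

Because batch means accounts for the chain's autocorrelation, this interval is wider (and more honest) than one built from the naive iid standard error.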
Analysis of MCMC algorithms for Bayesian linear regression with Laplace errors
, 2013
Abstract

Cited by 4 (1 self)
Let π denote the intractable posterior density that results when the standard default prior is placed on the parameters in a linear regression model with iid Laplace errors. We analyze the Markov chains underlying two different Markov chain Monte Carlo algorithms for exploring π. In particular, it is shown that the Markov operators associated with the data augmentation (DA) algorithm and a sandwich variant are both trace-class. Consequently, both Markov chains are geometrically ergodic. It is also established that for each i ∈ {1, 2, 3, ...}, the ith largest eigenvalue of the sandwich operator is less than or equal to the corresponding eigenvalue of the DA operator. It follows that the sandwich algorithm converges at least as fast as the DA algorithm. AMS 2000 subject classifications. Primary 60J27; secondary 62F15. Abbreviated title. MCMC algorithms for Bayesian linear regression
Markov Chain Monte Carlo Estimation of Quantiles
, 2013
Abstract

Cited by 3 (3 self)
We consider quantile estimation using Markov chain Monte Carlo and establish conditions under which the sampling distribution of the Monte Carlo error is approximately Normal. Further, we investigate techniques to estimate the associated asymptotic variance, which enables construction of an asymptotically valid interval estimator. Finally, we explore the finite sample properties of these methods through examples and provide some recommendations to practitioners.
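A minimal sketch of the idea in this abstract: take the empirical quantile of the chain as the point estimate, and build a Normal-theory interval from batch-wise quantile estimates. The AR(1) chain, the batch scheme, and the choice q = 0.5 are illustrative assumptions here, not the paper's recommendations.

```python
import math
import random

random.seed(3)

# Toy chain standing in for MCMC output.
n = 20_000
x = [0.0]
for _ in range(n - 1):
    x.append(0.5 * x[-1] + random.gauss(0.0, 1.0))

def empirical_quantile(sample, q):
    """Order-statistic estimate of the qth quantile."""
    s = sorted(sample)
    return s[min(int(q * len(s)), len(s) - 1)]

q = 0.5
point = empirical_quantile(x, q)

# Batch-wise quantiles give a crude estimate of the Monte Carlo variability.
b = math.isqrt(n)                    # batch size on the order of sqrt(n)
a = n // b
bq = [empirical_quantile(x[i * b:(i + 1) * b], q) for i in range(a)]
mbar = sum(bq) / a
se = math.sqrt(b * sum((v - mbar) ** 2 for v in bq) / (a - 1) / n)
lo, hi = point - 1.96 * se, point + 1.96 * se   # approximate 95% interval
```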
Relative fixed-width stopping rules for Markov chain Monte Carlo simulations
, 2013
Abstract

Cited by 3 (3 self)
Markov chain Monte Carlo (MCMC) simulations are commonly employed for estimating features of a target distribution, particularly for Bayesian inference. A fundamental challenge is determining when these simulations should stop. We consider a sequential stopping rule that terminates the simulation when the width of a confidence interval is sufficiently small relative to the size of the target parameter. Specifically, we propose relative magnitude and relative standard deviation stopping rules in the context of MCMC. In each setting, we develop conditions to ensure the simulation will terminate with probability one and the resulting confidence intervals will have the proper coverage probability. Our results are applicable in such MCMC estimation settings as expectation, quantile, or simultaneous multivariate estimation. We investigate the finite sample properties through a variety of examples, and provide some recommendations to practitioners.
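A relative standard deviation stopping rule of the kind described above can be sketched as follows: simulate until the interval half-width is at most eps times the estimated posterior standard deviation, plus a 1/n tolerance so the rule cannot fire absurdly early. The toy AR(1) chain, the value of eps, and the checking schedule are illustrative assumptions; the precise conditions guaranteeing termination and coverage are the subject of the paper.

```python
import math
import random

random.seed(11)

def batch_means_se(x):
    """Standard error of the ergodic average via non-overlapping batch means."""
    n = len(x)
    b = math.isqrt(n)
    a = n // b
    means = [sum(x[i * b:(i + 1) * b]) / b for i in range(a)]
    mbar = sum(means) / a
    return math.sqrt(b * sum((m - mbar) ** 2 for m in means) / (a - 1) / n)

eps, min_n = 0.05, 1_000
chain = [0.0]
while True:
    chain.append(0.5 * chain[-1] + random.gauss(0.0, 1.0))
    n = len(chain)
    if n < min_n or n % 500:
        continue                     # only check at n = 1000, 1500, 2000, ...
    mean = sum(chain) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in chain) / (n - 1))
    half_width = 1.96 * batch_means_se(chain)
    if half_width <= eps * sd + 1.0 / n:   # relative-sd criterion met: stop
        break
```

The 1/n term vanishes as the simulation grows, so asymptotically the rule is governed purely by the relative criterion.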
Computer Model Calibration with Multivariate Spatial Output: A Case Study
, 2010
Abstract

Cited by 3 (1 self)
Computer model calibration involves combining information from simulations of a complex computer model with physical observations of the process being simulated by the model. Increasingly, computer model output is in the form of multiple spatial fields, particularly in climate science. We study a simple and effective approach for computer model calibration with multivariate spatial data. We demonstrate the application of this approach to the problem of inferring parameters in a climate model. We find that combining information from multiple spatial fields results in sharper posterior inference than obtained from a single spatial field. In addition, we investigate the effects of including a model discrepancy term and compare the use of a plug-in versus a fully Bayesian approach for accounting for emulator variances. We find that usually, although not always, inclusion of the model discrepancy term results in more accurate and sharper inference of the calibration parameter, and estimating emulator spatial variances in a fully Bayesian model results in wider posterior distributions.
Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors
Abstract

Cited by 2 (0 self)
We propose a new method for regression using a parsimonious and scientifically interpretable representation of functional predictors. Our approach is designed for data that exhibit features such as spikes, dips, and plateaus whose frequency, location, size, and shape vary stochastically across subjects. We propose Bayesian inference of the joint functional and exposure models, and give a method for efficient computation. We contrast our approach with existing state-of-the-art methods for regression with functional predictors, and show that our method is more effective and efficient for data that include features occurring at varying locations. We apply our methodology to a large and complex dataset from the Sleep Heart Health Study, to quantify the association between sleep characteristics and health outcomes.
Variable-at-a-time implementations of Metropolis-Hastings
, 2009
Abstract

Cited by 2 (2 self)
It is common practice in Markov chain Monte Carlo to update a high-dimensional chain one variable (or sub-block of variables) at a time, rather than conduct a single block update. While this modification can make the choice of proposal easier, the theoretical convergence properties of the associated Markov chain have received limited attention. We present conditions under which the chain converges uniformly to its stationary distribution at a geometric rate. Also, we develop a recipe for performing regenerative simulation in this setting and demonstrate its application for estimating Markov chain Monte Carlo standard errors. In both our investigation of convergence rates and in Monte Carlo standard error estimation we pay particular attention to the case with state-independent component-wise proposals. We illustrate our results in two examples, a toy Bayesian inference problem and a practically relevant example involving maximum likelihood estimation for a generalized linear mixed model.
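A variable-at-a-time sampler with state-independent component-wise proposals can be sketched as follows. The bivariate Normal target and the N(0, 4) proposals are illustrative assumptions, not the paper's examples; the key point is that each coordinate is updated in turn, and because the proposal ignores the current state, the acceptance ratio must include the proposal density ratio.

```python
import math
import random

random.seed(5)

RHO = 0.8

def log_target(x, y):
    # Bivariate Normal with unit variances and correlation RHO, up to a constant.
    return -(x * x - 2 * RHO * x * y + y * y) / (2 * (1 - RHO * RHO))

def log_q(v):
    # State-independent N(0, 2^2) proposal density, up to a constant.
    return -v * v / (2 * 2.0 ** 2)

x, y = 0.0, 0.0
xs = []
for _ in range(20_000):
    # Update x with y held fixed.
    xp = random.gauss(0.0, 2.0)
    if math.log(random.random()) < (log_target(xp, y) + log_q(x)
                                    - log_target(x, y) - log_q(xp)):
        x = xp
    # Then update y with x held fixed.
    yp = random.gauss(0.0, 2.0)
    if math.log(random.random()) < (log_target(x, yp) + log_q(y)
                                    - log_target(x, y) - log_q(yp)):
        y = yp
    xs.append(x)

mean_x = sum(xs) / len(xs)   # should be near the target mean of 0
```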
Geometric ergodicity of the Gibbs sampler for Bayesian quantile regression
 Journal of Multivariate Analysis
, 2012
Abstract

Cited by 2 (2 self)
Consider the quantile regression model Y = Xβ + σɛ where the components of ɛ are iid errors from the asymmetric Laplace distribution with rth quantile equal to 0, where r ∈ (0, 1) is fixed. Kozumi and Kobayashi (2011) introduced a Gibbs sampler that can be used to explore the intractable posterior density that results when the quantile regression likelihood is combined with the usual normal/inverse gamma prior for (β, σ). In this paper, the Markov chain underlying Kozumi and Kobayashi's (2011) algorithm is shown to converge at a geometric rate. No assumptions are made about the dimension of X, so the result still holds in the “large p, small n” case.
Convergence of Conditional Metropolis-Hastings Samplers
, 2013
Abstract

Cited by 2 (1 self)
We consider Markov chain Monte Carlo algorithms which combine Gibbs updates with Metropolis-Hastings updates, resulting in a conditional Metropolis-Hastings sampler (CMH). We develop conditions under which the CMH will be geometrically or uniformly ergodic. We illustrate our results by analysing a CMH used for drawing Bayesian inferences about the entire sample path of a diffusion process, based only upon discrete observations.
Honest Importance Sampling with Multiple Markov Chains
Abstract

Cited by 1 (1 self)
Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid, however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. In fact, we consider a more general setup, where we assume that Markov chain samples from several probability densities, π1, ..., πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.
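The basic setup of importance sampling driven by a Markov chain can be sketched with a single chain. Here a random-walk Metropolis chain with invariant density π1 = N(0,1) supplies the draws, which are then reweighted to estimate the mean under a different density π = N(1,1); these densities are illustrative assumptions, and the regenerative standard-error machinery from the paper is omitted.

```python
import math
import random

random.seed(1)

def log_normal(v, mu):
    return -(v - mu) ** 2 / 2.0      # N(mu, 1) log-density, up to a constant

# Random-walk Metropolis chain with invariant density pi1 = N(0, 1).
x = 0.0
draws = []
for _ in range(50_000):
    prop = x + random.gauss(0.0, 1.0)
    if math.log(random.random()) < log_normal(prop, 0.0) - log_normal(x, 0.0):
        x = prop
    draws.append(x)

# Self-normalized importance sampling estimate of E_pi[X] for pi = N(1, 1):
# weight each draw by pi(v) / pi1(v) and normalize by the total weight.
w = [math.exp(log_normal(v, 1.0) - log_normal(v, 0.0)) for v in draws]
est = sum(wi * v for wi, v in zip(w, draws)) / sum(w)   # should be near 1
```

The estimator remains strongly consistent with chain input, as the abstract notes; what changes relative to the iid case is the standard-error calculation, which is where the regenerative approach comes in.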