Results 1–10 of 27
Kernel Estimators of Asymptotic Variance for Adaptive Markov Chain Monte Carlo
Submitted to The Annals of Statistics
Cited by 10 (4 self)
Abstract
We study the asymptotic behavior of kernel estimators of asymptotic variances (or long-run variances) for a class of adaptive Markov chains. The convergence is studied both in L^p and almost surely. The results apply to Markov chains as well and improve on the existing literature by imposing weaker conditions. We illustrate the results with applications to the GARCH(1, 1) Markov model and to an adaptive MCMC algorithm for Bayesian logistic regression.
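The kind of lag-window (kernel) long-run variance estimator this abstract refers to can be illustrated with a minimal Bartlett-kernel sketch; the function name, bandwidth choice, and AR(1) test chain below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def bartlett_lrv(x, bandwidth):
    """Lag-window estimator of the long-run variance of a stationary sequence:
    sum over |k| < bandwidth of (1 - |k|/bandwidth) * gamma_hat(k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    est = xc @ xc / n                         # lag-0 sample autocovariance
    for k in range(1, bandwidth):
        gamma_k = xc[:-k] @ xc[k:] / n        # lag-k sample autocovariance
        est += 2.0 * (1.0 - k / bandwidth) * gamma_k
    return est

# Sanity check on an AR(1) chain x_t = 0.5 x_{t-1} + e_t, whose long-run
# variance is 1 / (1 - 0.5)^2 = 4.
rng = np.random.default_rng(0)
phi, n = 0.5, 200_000
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

lrv = bartlett_lrv(x, bandwidth=int(n ** (1 / 3)))
```

The triangular Bartlett weights guarantee a nonnegative estimate; the n^(1/3) bandwidth is one conventional rate, not a recommendation from the paper.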
Nonasymptotic bounds on the estimation error for regenerative MCMC algorithms
, 2009
Cited by 4 (1 self)
Abstract
MCMC methods are used in Bayesian statistics not only to sample from posterior distributions but also to estimate expectations. Underlying functions are most often defined on a continuous state space and can be unbounded. We consider a regenerative setting and Monte Carlo estimators based on i.i.d. blocks of a Markov chain trajectory. The main result is an inequality for the mean square error. We also consider confidence bounds. We first derive the results in terms of the asymptotic variance and then bound the asymptotic variance for both uniformly ergodic and geometrically ergodic Markov chains.
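The regenerative setting described here can be sketched on a toy chain where returns to a fixed atom state split the trajectory into i.i.d. tours; the transition matrix and the ratio-estimator details below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ergodic chain on {0, 1, 2}: every return to state 0 is a regeneration
# time, so the trajectory splits into i.i.d. tours at those visits.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])   # stationary distribution (1/4, 1/2, 1/4)

n = 100_000
x = np.empty(n, dtype=int)
x[0] = 0
for t in range(1, n):
    x[t] = rng.choice(3, p=P[x[t - 1]])

f = x.astype(float)                   # estimate E_pi[X] = 1.0

starts = np.flatnonzero(x == 0)       # regeneration times
# Ratio estimator over complete tours: sum of f within tours / total tour length.
tour_sums = np.add.reduceat(f[: starts[-1]], starts[:-1])
tour_lens = np.diff(starts)
est = tour_sums.sum() / tour_lens.sum()

# Asymptotic variance estimate built from the i.i.d. tours, the quantity
# that MSE and confidence bounds of this type control.
asy_var = np.mean((tour_sums - est * tour_lens) ** 2) / np.mean(tour_lens) ** 2
```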
Markov Chain Monte Carlo Estimation of Quantiles
, 2013
Cited by 3 (3 self)
Abstract
We consider quantile estimation using Markov chain Monte Carlo and establish conditions under which the sampling distribution of the Monte Carlo error is approximately Normal. Further, we investigate techniques to estimate the associated asymptotic variance, which enables construction of an asymptotically valid interval estimator.  Finally, we explore the finite sample properties of these methods through examples and provide some recommendations to practitioners.
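One simple variance technique in this spirit, shown only as a sketch and not necessarily the estimator the paper studies, computes the quantile within each of several batches and scales the spread of the batch quantiles; the AR(1) chain gives a known true quantile to check against:

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) chain x_t = 0.5 x_{t-1} + e_t; its stationary law is N(0, 4/3), so
# the true 0.9-quantile is 1.2816 * sqrt(4/3), roughly 1.48.
phi, n, q = 0.5, 200_000, 0.9
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

xi_hat = np.quantile(x, q)            # full-chain quantile estimate

# Batch-quantile standard error: quantile in each of m batches, then scale
# the sample variance of the batch quantiles.
m = 50
batches = x[: n - n % m].reshape(m, -1)
batch_q = np.quantile(batches, q, axis=1)
se = np.sqrt(np.var(batch_q, ddof=1) / m)
ci = (xi_hat - 1.96 * se, xi_hat + 1.96 * se)
```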
Relative fixed-width stopping rules for Markov chain Monte Carlo simulations
, 2013
Cited by 3 (3 self)
Abstract
Markov chain Monte Carlo (MCMC) simulations are commonly employed for estimating features of a target distribution, particularly for Bayesian inference. A fundamental challenge is determining when these simulations should stop. We consider a sequential stopping rule that terminates the simulation when the width of a confidence interval is sufficiently small relative to the size of the target parameter. Specifically, we propose relative magnitude and relative standard deviation stopping rules in the context of MCMC. In each setting, we develop conditions to ensure the simulation will terminate with probability one and the resulting confidence intervals will have the proper coverage probability. Our results are applicable in such MCMC estimation settings as expectation, quantile, or simultaneous multivariate estimation. We investigate the finite sample properties through a variety of examples, and provide some recommendations to practitioners.
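A relative-magnitude stopping rule of the kind described can be sketched as follows; the batch-means error estimate, the 1/n padding term, and the AR(1) chain are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def batch_means_se(x, m=30):
    """Monte Carlo standard error of mean(x) from m non-overlapping batches."""
    x = np.asarray(x, dtype=float)
    b = len(x) // m
    means = x[: m * b].reshape(m, b).mean(axis=1)
    return np.sqrt(b * np.var(means, ddof=1) / len(x))

rng = np.random.default_rng(3)
phi, eps = 0.5, 0.01              # stop when half-width <= eps * |estimate|
check_every, max_n = 10_000, 2_000_000
x = [5.0]                         # AR(1) chain with stationary mean 5
while True:
    last = x[-1]
    new = np.empty(check_every)
    for i in range(check_every):
        last = 5.0 + phi * (last - 5.0) + rng.standard_normal()
        new[i] = last
    x.extend(new)
    est = np.mean(x)
    # Half-width of a 95% interval plus a 1/n term so the rule cannot
    # terminate on an unluckily small early variance estimate.
    half = 1.96 * batch_means_se(x) + 1.0 / len(x)
    if half <= eps * abs(est) or len(x) >= max_n:
        break
```

Checking only every `check_every` iterations keeps the variance estimation cost negligible relative to the simulation itself.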
Supplement to “Computational approaches for empirical Bayes methods and Bayesian sensitivity analysis”
, 2011
Cited by 3 (3 self)
Abstract
We consider situations in Bayesian analysis where we have a family of priors νh on the parameter θ, where h varies continuously over a space H, and we deal with two related problems. The first involves sensitivity analysis and is stated as follows. Suppose we fix a function f of θ. How do we efficiently estimate the posterior expectation of f(θ) simultaneously for all h in H? The second problem is how do we identify subsets of H which give rise to reasonable choices of νh? We assume that we are able to generate Markov chain samples from the posterior for a finite number of the priors, and we develop a methodology, based on a combination of importance sampling and the use of control variates, for dealing with these two problems. The methodology applies very generally, and we show how it applies in particular to a commonly used model for variable selection in Bayesian linear regression, and give an illustration on the US crime data of Vandaele.
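The importance-sampling idea underlying this (reweighting samples drawn under one prior to answer questions under another, with the likelihood cancelling in the weights) can be sketched on a toy conjugate model; the model, names, and draw counts below are illustrative assumptions, and the sketch omits the paper's control variates:

```python
import numpy as np
from math import sqrt

rng = np.random.default_rng(4)

# Toy conjugate model: y_i ~ N(theta, 1) with prior nu_h = N(0, h^2), so the
# posterior under any h is available in closed form for checking.
y = rng.standard_normal(20) + 2.0
n, ybar = len(y), y.mean()

def posterior(h):
    s2 = 1.0 / (n + 1.0 / h**2)       # posterior variance
    return s2 * n * ybar, s2          # (posterior mean, posterior variance)

h0, h1 = 1.0, 3.0
m0, s20 = posterior(h0)
theta = rng.normal(m0, sqrt(s20), size=200_000)   # stand-in for MCMC draws at h0

# Self-normalized importance weights nu_h1(theta) / nu_h0(theta): the
# likelihood cancels, so only the prior ratio is needed.
logw = theta**2 * (1 / (2 * h0**2) - 1 / (2 * h1**2)) + np.log(h0 / h1)
w = np.exp(logw - logw.max())
est = np.sum(w * theta) / np.sum(w)   # posterior mean of theta under prior h1

m1, _ = posterior(h1)                 # exact answer for comparison
```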
Variable-at-a-time implementations of Metropolis-Hastings
, 2009
Cited by 2 (2 self)
Abstract
It is common practice in Markov chain Monte Carlo to update a high-dimensional chain one variable (or sub-block of variables) at a time, rather than conduct a single block update. While this modification can make the choice of proposal easier, the theoretical convergence properties of the associated Markov chain have received limited attention. We present conditions under which the chain converges uniformly to its stationary distribution at a geometric rate. Also, we develop a recipe for performing regenerative simulation in this setting and demonstrate its application for estimating Markov chain Monte Carlo standard errors. In both our investigation of convergence rates and in Monte Carlo standard error estimation we pay particular attention to the case with state-independent componentwise proposals. We illustrate our results in two examples, a toy Bayesian inference problem and a practically relevant example involving maximum likelihood estimation for a generalized linear mixed model.
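A minimal variable-at-a-time Metropolis-Hastings sketch with state-independent componentwise proposals, on an illustrative bivariate normal target (the target, step size, and chain length are assumptions for the demo, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

def logpi(z):
    # Bivariate normal target with unit variances and correlation 0.8.
    x, y = z
    return -(x**2 - 2 * 0.8 * x * y + y**2) / (2 * (1 - 0.8**2))

n, step = 50_000, 1.0
z = np.zeros(2)
lp = logpi(z)
out = np.empty((n, 2))
for t in range(n):
    for j in (0, 1):                                # one coordinate per update
        prop = z.copy()
        prop[j] += step * rng.standard_normal()     # state-independent RW proposal
        lp_prop = logpi(prop)
        if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
            z, lp = prop, lp_prop
    out[t] = z

means = out.mean(axis=0)                 # should be near (0, 0)
corr = np.corrcoef(out.T)[0, 1]          # should be near 0.8
```

Each sweep updates the two coordinates in a fixed order; only the proposed coordinate changes, so the acceptance ratio needs just the target density, as in a standard componentwise sampler.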
Geometric ergodicity of the Gibbs sampler for Bayesian quantile regression
 Journal of Multivariate Analysis
, 2012
Cited by 2 (2 self)
Abstract
Consider the quantile regression model Y = Xβ + σɛ where the components of ɛ are iid errors from the asymmetric Laplace distribution with rth quantile equal to 0, where r ∈ (0, 1) is fixed. Kozumi and Kobayashi (2011) introduced a Gibbs sampler that can be used to explore the intractable posterior density that results when the quantile regression likelihood is combined with the usual normal/inverse gamma prior for (β, σ). In this paper, the Markov chain underlying Kozumi and Kobayashi’s (2011) algorithm is shown to converge at a geometric rate. No assumptions are made about the dimension of X, so the result still holds in the “large p, small n” case.
Variable Transformation to Obtain Geometric Ergodicity in the Random-Walk Metropolis Algorithm
Markov Chain Monte Carlo Estimation of Quantiles
, 2014
Cited by 1 (1 self)
Abstract
We consider quantile estimation using Markov chain Monte Carlo and establish conditions under which the sampling distribution of the Monte Carlo error is approximately Normal. Further, we investigate techniques to estimate the associated asymptotic variance, which enables construction of an asymptotically valid interval estimator. Finally, we explore the finite sample properties of these methods through examples and provide some recommendations to practitioners.