Results 1-10 of 82
Fixed-Width Output Analysis for Markov Chain Monte Carlo, 2005
Cited by 93 (30 self)
Abstract:
Markov chain Monte Carlo is a method of producing a correlated sample in order to estimate features of a target distribution via ergodic averages. A fundamental question is: when should sampling stop? That is, when are the ergodic averages good estimates of the desired quantities? We consider a method that stops the simulation when the width of a confidence interval based on an ergodic average is less than a user-specified value. Hence calculating a Monte Carlo standard error is a critical step in assessing the simulation output. We consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We give sufficient conditions for the strong consistency of both methods and investigate their finite-sample properties in a variety of examples.
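The batch means construction and the fixed-width stopping rule described in this abstract can be sketched as follows. This is a minimal illustration assuming a univariate chain of draws and the common batch size b = floor(sqrt(n)); it is not the authors' implementation, and the 1.96 critical value assumes a 95% normal interval:

```python
import numpy as np

def batch_means_mcse(chain):
    """Monte Carlo standard error via non-overlapping batch means.

    Assumes a univariate chain and the common batch size b = floor(sqrt(n)).
    """
    x = np.asarray(chain, dtype=float)
    n = len(x)
    b = int(np.sqrt(n))            # batch size (illustrative choice)
    a = n // b                     # number of batches
    batch_means = x[: a * b].reshape(a, b).mean(axis=1)
    # Estimate of the asymptotic variance sigma^2 appearing in the CLT
    sigma2_hat = b * np.var(batch_means, ddof=1)
    return np.sqrt(sigma2_hat / n)

def fixed_width_done(chain, eps, z=1.96):
    """Stop once the half-width of the confidence interval for the
    ergodic average falls below the user-specified tolerance eps."""
    return z * batch_means_mcse(chain) < eps
```

In practice one would recompute the criterion at checkpoints as the simulation grows, stopping the first time `fixed_width_done` returns true after a minimum run length.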
Batch Means and Spectral Variance Estimation in Markov Chain Monte Carlo, 2009
Cited by 27 (9 self)
Abstract:
Calculating a Monte Carlo standard error (MCSE) is an important step in the statistical analysis of the simulation output obtained from a Markov chain Monte Carlo experiment. An MCSE is usually based on an estimate of the variance of the asymptotic normal distribution. We consider spectral and batch means methods for estimating this variance. In particular, we establish conditions which guarantee that these estimators are strongly consistent as the simulation effort increases. In addition, for the batch means and overlapping batch means methods we establish conditions ensuring consistency in the mean-square sense, which in turn allows us to calculate the optimal batch size up to a constant of proportionality. Finally, we examine the empirical finite-sample properties of spectral variance and batch means estimators and provide recommendations for practitioners.
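The overlapping batch means (OBM) estimator discussed in this abstract admits a compact sketch. The default batch size below is a simple illustrative choice, not the proportionality-optimal batch size derived in the paper:

```python
import numpy as np

def obm_mcse(chain, b=None):
    """Monte Carlo standard error via overlapping batch means (OBM).

    Uses all n - b + 1 overlapping length-b windows; the default
    b = floor(sqrt(n)) is an illustrative assumption only.
    """
    x = np.asarray(chain, dtype=float)
    n = len(x)
    if b is None:
        b = int(n ** 0.5)
    # Means of every length-b window via a single cumulative sum
    c = np.concatenate(([0.0], np.cumsum(x)))
    window_means = (c[b:] - c[:-b]) / b
    sigma2_hat = (n * b / ((n - b + 1.0) * (n - b))
                  * np.sum((window_means - x.mean()) ** 2))
    return np.sqrt(sigma2_hat / n)
```

Relative to non-overlapping batch means, OBM reuses every window of the chain, which reduces the variance of the variance estimate at the same batch size.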
Beyond random walk and Metropolis-Hastings samplers: Why you should not backtrack for unbiased graph sampling, 2012
Variance Bounding Markov Chains, 2008
Cited by 13 (2 self)
Abstract:
We introduce a new property of Markov chains, called variance bounding. We prove that, for reversible chains at least, variance bounding is weaker than, but closely related to, geometric ergodicity. Furthermore, variance bounding is equivalent to the existence of usual central limit theorems for all L² functionals. Also, variance bounding (unlike geometric ergodicity) is preserved under the Peskun order. We close with some applications to Metropolis–Hastings algorithms.
Using a Markov chain to construct a tractable approximation of an intractable probability distribution, Scandinavian Journal of Statistics, 2005
Optimal Proposal Distributions and Adaptive MCMC, 2008
Cited by 11 (1 self)
Abstract:
We review recent work concerning optimal proposal scalings for Metropolis-Hastings MCMC algorithms, and adaptive MCMC algorithms that attempt to improve the algorithm on the fly.
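One standard instance of such on-the-fly adaptation is tuning the random-walk proposal scale toward a target acceptance rate (roughly 0.234 for high-dimensional random-walk Metropolis, closer to 0.44 in one dimension). The sketch below uses a diminishing Robbins-Monro step so the adaptation vanishes asymptotically; it is an assumption-laden illustration, not a specific algorithm from the review:

```python
import numpy as np

def adaptive_rwm(log_target, x0, n_iter=5000, target_acc=0.234, seed=0):
    """Random-walk Metropolis whose proposal scale is adapted toward a
    target acceptance rate with a diminishing (Robbins-Monro) step."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    log_lx = log_target(x)
    log_sigma = 0.0                      # log of the proposal std deviation
    samples = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + np.exp(log_sigma) * rng.normal()
        log_lp = log_target(prop)
        accept_prob = min(1.0, np.exp(min(0.0, log_lp - log_lx)))
        if rng.random() < accept_prob:
            x, log_lx = prop, log_lp
        # Nudge the scale up when accepting too often, down when too rarely
        log_sigma += (accept_prob - target_acc) / (i + 1) ** 0.6
        samples[i] = x
    return samples
```

The shrinking step size `(i + 1) ** -0.6` is one way to satisfy the diminishing-adaptation condition that this literature requires for ergodicity of adaptive chains.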
Gibbs sampling for a Bayesian hierarchical general linear model, Electronic Journal of Statistics, 2008
Cited by 9 (2 self)
Abstract:
We consider two-component block Gibbs sampling for a Bayesian hierarchical version of the normal theory general linear model. This model is practically relevant: it is general enough to cover many applications, yet it is not straightforward to sample directly from the corresponding posterior distribution. There are two possible orders in which to update the components of our block Gibbs sampler. For both update orders, drift and minorization conditions are constructed for the corresponding Markov chains. Most importantly, these results establish geometric ergodicity for the block Gibbs sampler. We also construct a general minorization condition and use it to investigate the applicability of regenerative simulation techniques for constructing valid Monte Carlo standard errors.
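A much simpler two-block Gibbs sampler conveys the alternating-update idea. The toy model below (normal data, flat prior on the mean, Gamma prior on the precision, hypothetical hyperparameters `a0`, `b0`) is a stand-in for the hierarchical linear model of the paper, not its actual sampler:

```python
import numpy as np

def two_block_gibbs(y, n_iter=2000, seed=0):
    """Two-component block Gibbs sampler for the toy model
    y_i ~ N(mu, 1/tau), with a flat prior on mu and a
    Gamma(a0, b0) (shape, rate) prior on the precision tau.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n, ybar = len(y), y.mean()
    a0, b0 = 2.0, 2.0              # hypothetical prior hyperparameters
    mu, tau = ybar, 1.0
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        # Block 1: mu | tau, y  ~  N(ybar, 1 / (n * tau))
        mu = rng.normal(ybar, 1.0 / np.sqrt(n * tau))
        # Block 2: tau | mu, y  ~  Gamma(a0 + n/2, rate b0 + sum((y-mu)^2)/2)
        tau = rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * np.sum((y - mu) ** 2)))
        draws[t] = mu, tau
    return draws
```

Swapping the two update lines gives the other update order mentioned in the abstract; both define valid Gibbs samplers for the same posterior, but they are different Markov chains.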
Limit theorems for stationary Markov processes with L²-spectral gap, 2012
Cited by 8 (8 self)
Abstract:
Let (X_t, Y_t)_{t∈T} be a discrete- or continuous-time Markov process with state space X × R^d, where X is an arbitrary measurable set. Its transition semigroup is assumed to be additive with respect to the second component, i.e. (X_t, Y_t)_{t∈T} is assumed to be a Markov additive process. In particular, this implies that the first component (X_t)_{t∈T} is also a Markov process. Markov random walks and additive functionals of a Markov process are special instances of Markov additive processes. In this paper, the process (Y_t)_{t∈T} is shown to satisfy the following classical limit theorems: (a) the central limit theorem, (b) the local limit theorem, (c) the one-dimensional Berry-Esseen theorem, (d) the one-dimensional first-order Edgeworth expansion, provided that sup_{t∈(0,1]∩T} E_{π,0}[|Y_t|^α] < ∞ with the expected order α with respect to the independent case (up to some ε > 0 for (c) and (d)). For statements (b) and (d), a Markov non-lattice condition is also assumed, as in the independent case. All the results are derived under the assumption that the Markov process (X_t)_{t∈T} has an invariant probability distribution π, is stationary, and has the L²(π)-spectral gap property (that is, (X_t)_{t∈N} is ρ-mixing in the discrete-time case). The case where (X_t)_{t∈T} is non-stationary is briefly discussed. As an application, we derive a Berry-Esseen bound for the M-estimators associated with ρ-mixing Markov chains.