Results 1–10 of 99
Markov chains for exploring posterior distributions
Annals of Statistics, 1994
Cited by 1136 (6 self)
Markov chain Monte Carlo convergence diagnostics
JASA, 1996
Abstract

Cited by 371 (6 self)
A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution.
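One of the strategies recommended above, applying diagnostics to a small number of parallel chains, underlies Gelman and Rubin's potential scale reduction factor, one of the diagnostics reviews of this kind cover. A minimal sketch (the function name and toy chains below are illustrative, not taken from the paper):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for one scalar quantity,
    computed from m parallel chains of length n (array of shape (m, n))."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    between = n * chain_means.var(ddof=1)        # between-chain variance B
    within = chains.var(axis=1, ddof=1).mean()   # within-chain variance W
    pooled = (n - 1) / n * within + between / n  # pooled variance estimate
    return np.sqrt(pooled / within)              # values near 1: no evidence of non-convergence

# Toy check: four independent chains that already sample the same N(0, 1)
# target should give R-hat close to 1.
rng = np.random.default_rng(0)
chains = rng.normal(size=(4, 1000))
rhat = gelman_rubin(chains)
```

In line with the abstract's warning, an R-hat near 1 is necessary but not sufficient evidence of convergence: chains that are all stuck in the same mode will also pass.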
The simulation smoother for time series models
Biometrika (1995), 82(2), pp. 339–350
Abstract

Cited by 215 (17 self)
Recently suggested procedures for simulating from the posterior density of states given a Gaussian state space time series are refined and extended. We introduce and study the simulation smoother, which draws from the multivariate posterior distribution of the disturbances of the model, so avoiding the degeneracies inherent in state samplers. The technique is important in Gibbs sampling with non-Gaussian time series models, and for performing Bayesian analysis of Gaussian time series.
Bayesian methods for hidden Markov models: Recursive computing in the 21st century
Journal of the American Statistical Association, 2002
The practical implementation of Bayesian model selection
Manuscript, 2001; available at http://gsbwww.uchicago.edu/fac/robert.mcculloch/research/papers/index.html
Abstract

Cited by 132 (3 self)
In principle, the Bayesian approach to model selection is straightforward. Prior probability distributions are used to describe the uncertainty surrounding all unknowns. After observing the data, the posterior distribution provides a coherent post-data summary of the remaining uncertainty which is relevant for model selection. However, the practical implementation of this approach often requires carefully tailored priors and novel posterior calculation methods. In this article, we illustrate some of the fundamental practical issues that arise for two different model selection problems: the variable selection problem for the linear model and the CART model selection problem.
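For reference, the coherent post-data summary the abstract refers to is the posterior distribution over models, given by the standard Bayesian model-selection identities (generic notation, not the paper's):

```latex
% Posterior probability of model M_k given data y, and the marginal
% likelihood whose computation typically requires the "novel posterior
% calculation methods" mentioned in the abstract:
p(M_k \mid y) \;=\; \frac{p(y \mid M_k)\, p(M_k)}{\sum_{\ell} p(y \mid M_\ell)\, p(M_\ell)},
\qquad
p(y \mid M_k) \;=\; \int p(y \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, \mathrm{d}\theta_k .
```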
Regeneration in Markov Chain Samplers
1994
Abstract

Cited by 109 (5 self)
Markov chain sampling has received considerable attention in the recent literature, in particular in the context of Bayesian computation and maximum likelihood estimation. This paper discusses the use of Markov chain splitting, originally developed as a tool for the theoretical analysis of general state space Markov chains, to introduce regeneration times into Markov chain samplers. This allows the use of regenerative methods for analyzing the output of these samplers, and can also provide a useful diagnostic of the performance of the samplers. The general approach is applied to several different samplers and is illustrated in a number of examples.
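The regenerative idea is easiest to see on a toy chain with an accessible atom, where returns to the atom are exact regeneration times and the tours between them are i.i.d. The reflected random walk below is a sketch of the technique, not an example from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x):
    """Reflected random walk on {0, 1, 2, ...}: up with prob 0.4, down otherwise."""
    if rng.random() < 0.4:
        return x + 1
    return max(x - 1, 0)

# Returns to the atom {0} are regeneration times. Accumulate f(x) = x and
# the length of each tour; the ratio estimator of E_pi[f] is the total of
# per-tour sums divided by the total of tour lengths. The stationary
# distribution here is geometric with ratio 2/3, so E_pi[X] = 2.
x = 0
tour_sums, tour_lens = [], []
cur_sum, cur_len = 0.0, 0
for _ in range(200_000):
    x = step(x)
    cur_sum += x                 # f(x) = x over the current tour
    cur_len += 1
    if x == 0:                   # regeneration: close the current tour
        tour_sums.append(cur_sum)
        tour_lens.append(cur_len)
        cur_sum, cur_len = 0.0, 0

est = sum(tour_sums) / sum(tour_lens)  # ratio estimator of E_pi[X]
```

Because the per-tour pairs (sum, length) are i.i.d., standard errors for `est` can be computed without worrying about the chain's serial dependence, which is the diagnostic benefit the abstract describes.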
On the Markov chain central limit theorem
Probability Surveys, 2004
Abstract

Cited by 82 (14 self)
The goal of this mainly expository paper is to describe conditions which guarantee a central limit theorem for functionals of general state space Markov chains with a view towards Markov chain Monte Carlo settings. Thus the focus is on the connections between drift and mixing conditions and their implications. In particular, we consider three commonly cited central limit theorems and discuss their relationship to classical results for mixing processes. Several motivating examples are given which range from toy one-dimensional settings to complicated settings encountered in Markov chain Monte Carlo.
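For orientation, the central limit theorem in question has the generic form below, for a Harris ergodic chain with stationary distribution π and a function f satisfying a suitable moment condition (standard notation, not the paper's):

```latex
% CLT for the ergodic average \bar{f}_n = n^{-1} \sum_{i=1}^{n} f(X_i):
\sqrt{n}\,\bigl(\bar{f}_n - \pi(f)\bigr) \;\xrightarrow{d}\; \mathrm{N}\bigl(0,\ \sigma_f^2\bigr),
\qquad
\sigma_f^2 \;=\; \operatorname{Var}_\pi\bigl(f(X_0)\bigr)
  + 2 \sum_{k=1}^{\infty} \operatorname{Cov}_\pi\bigl(f(X_0),\, f(X_k)\bigr).
```

The drift and mixing conditions the abstract mentions are what guarantee that the asymptotic variance σ²_f is finite.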
Markov Chain Monte Carlo in Conditionally Gaussian State Space Models
Biometrika, 1996
Abstract

Cited by 75 (4 self)
Linear Gaussian state space models are used extensively, with unknown parameters usually estimated by maximum likelihood: Wecker & Ansley (1983), Harvey (1989). However, many time series and nonparametric regression applications, such as change point problems, outlier detection and switching regression, require the full generality of the conditionally Gaussian model: Harrison & Stevens (1976), Shumway & Stoffer (1991), West & Harrison (1989), Gordon & Smith (1990). The presence of a large number of indicator variables makes it difficult to estimate conditionally Gaussian models using maximum likelihood, and a Bayesian approach using Markov chain Monte Carlo appears more tractable. We propose a new sampler, which is used to estimate an unknown function nonparametrically when there are jumps in the function and outliers in the observations; it is also applied to a time series change point problem previously discussed by Gordon & Smith (1990).
Renewal theory and computable convergence rates for geometrically ergodic Markov chains
2003
Abstract

Cited by 72 (1 self)
We give computable bounds on the rate of convergence of the transition probabilities to the stationary distribution for a certain class of geometrically ergodic Markov chains. Our results are different from earlier estimates of Meyn and Tweedie, and from estimates using coupling, although we start from essentially the same assumptions of a drift condition toward a “small set”. The estimates show a noticeable improvement on existing results if the Markov chain is reversible with respect to its stationary distribution, and especially so if the chain is also positive. The method of proof uses the first-entrance–last-exit decomposition, together with new quantitative versions of a result of Kendall from discrete renewal theory.
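For orientation, the drift condition toward a small set mentioned in the abstract is conventionally written as follows (standard formulation; the constants are generic, not the paper's):

```latex
% Geometric drift toward a small set C (V \ge 1, \lambda < 1, b < \infty):
PV(x) \;\le\; \lambda\, V(x) + b\, \mathbf{1}_C(x),
% together with a minorization on C (\varepsilon > 0, \nu a probability measure):
P(x, \cdot) \;\ge\; \varepsilon\, \nu(\cdot) \quad \text{for } x \in C,
% which jointly imply geometric ergodicity:
\bigl\| P^n(x, \cdot) - \pi \bigr\|_{\mathrm{TV}} \;\le\; M(x)\, \rho^{\,n}
\quad \text{for some } \rho < 1 .
```

The paper's contribution, as the abstract states, is making M(x) and ρ explicitly computable from the drift and minorization constants.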