Results 1–10 of 11
Spatial Bayesian Variable Selection Models on Functional Magnetic Resonance Imaging Time-Series Data
, 2011
Cited by 7 (2 self)
One of the major objectives of fMRI (functional magnetic resonance imaging) studies is to determine subject-specific areas of increased blood oxygenation level dependent (BOLD) signal contrast in response to a stimulus or task, and hence to infer regional neuronal activity. We posit and investigate a Bayesian approach that incorporates spatial dependence in the image and allows for the task-related change in the BOLD signal to change dynamically over the scanning session. In this way, our model accounts for potential learning effects, in addition to other mechanisms of temporal drift in task-related signals. However, using the posterior for inference requires Markov chain Monte Carlo (MCMC) methods. We study the properties of the model and the MCMC algorithms through their performance on simulated and real data sets.
Relative fixed-width stopping rules for Markov chain Monte Carlo simulations
, 2013
Cited by 3 (3 self)
Markov chain Monte Carlo (MCMC) simulations are commonly employed for estimating features of a target distribution, particularly for Bayesian inference. A fundamental challenge is determining when these simulations should stop. We consider a sequential stopping rule that terminates the simulation when the width of a confidence interval is sufficiently small relative to the size of the target parameter. Specifically, we propose relative magnitude and relative standard deviation stopping rules in the context of MCMC. In each setting, we develop conditions to ensure the simulation will terminate with probability one and the resulting confidence intervals will have the proper coverage probability. Our results are applicable in such MCMC estimation settings as expectation, quantile, or simultaneous multivariate estimation. We investigate the finite sample properties through a variety of examples, and provide some recommendations to practitioners.
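The relative standard-deviation rule this abstract describes can be sketched in a few lines: keep simulating until the confidence-interval half-width for the mean is small relative to the estimated posterior standard deviation. The batch-size choice, tolerance `eps`, and the `draw` helper below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def batch_means_se(x):
    """Monte Carlo standard error of the sample mean, estimated from
    non-overlapping batch means with batch size roughly sqrt(n)."""
    n = len(x)
    b = int(np.sqrt(n))                    # batch size
    a = n // b                             # number of batches
    means = x[:a * b].reshape(a, b).mean(axis=1)
    return np.sqrt(b * means.var(ddof=1) / n)

def relative_sd_rule(draw, eps=0.05, z=1.96, start=1000, step=1000, max_n=100_000):
    """Relative standard-deviation stopping rule (sketch): simulate until the
    confidence-interval half-width for the mean falls below eps times the
    estimated posterior standard deviation. `draw(k)` must return k further
    values of the simulated chain."""
    x = draw(start)
    while len(x) < max_n:
        if z * batch_means_se(x) < eps * x.std(ddof=1):
            break
        x = np.concatenate([x, draw(step)])
    return x.mean(), len(x)
```

On an i.i.d. standard normal "chain" this stops after a few thousand draws; for real MCMC output, `draw` would advance the sampler, and the batch-means estimator accounts for autocorrelation.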
Convergence of Conditional Metropolis-Hastings Samplers
, 2013
Cited by 2 (1 self)
We consider Markov chain Monte Carlo algorithms which combine Gibbs updates with Metropolis-Hastings updates, resulting in a conditional Metropolis-Hastings sampler (CMH). We develop conditions under which the CMH will be geometrically or uniformly ergodic. We illustrate our results by analysing a CMH used for drawing Bayesian inferences about the entire sample path of a diffusion process, based only upon discrete observations.
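As an illustration of the kind of sampler this entry studies, here is a minimal conditional Metropolis-Hastings sketch on a toy bivariate normal target: one coordinate gets an exact Gibbs draw from its full conditional, the other a random-walk MH step targeting its full conditional. The target, proposal scale, and function names are assumptions for the example, not the paper's diffusion application.

```python
import numpy as np

def cmh_sampler(n, rho, prop_sd=2.0, rng=None):
    """Conditional Metropolis-Hastings sampler (sketch): x is updated by an
    exact Gibbs draw from x | y ~ N(rho*y, 1 - rho^2); y by a random-walk
    MH step targeting y | x ~ N(rho*x, 1 - rho^2)."""
    if rng is None:
        rng = np.random.default_rng()
    s2 = 1 - rho**2
    x = y = 0.0
    out = np.empty((n, 2))
    for i in range(n):
        x = rho * y + np.sqrt(s2) * rng.standard_normal()   # Gibbs update of x
        prop = y + prop_sd * rng.standard_normal()          # MH proposal for y
        logr = ((y - rho * x)**2 - (prop - rho * x)**2) / (2 * s2)
        if np.log(rng.uniform()) < logr:
            y = prop                                        # accept the MH move
        out[i] = (x, y)
    return out

samples = cmh_sampler(30_000, rho=0.5, rng=np.random.default_rng(1))
```

The empirical correlation and marginal variances of `samples` should approach the target's (0.5 and 1, respectively).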
Markov chain Monte Carlo with linchpin variables
, 2014
Cited by 1 (1 self)
Many posteriors can be factored into a product of a conditional density which is easy to sample directly and a marginal density. If it is possible to make a draw from the marginal, then a simple sequential sampling algorithm can be used to make a perfect draw from the joint target density. When the marginal is difficult to sample from we propose to use a Metropolis-Hastings step on the marginal followed by a draw from the conditional distribution. We show that the resulting Markov chain, called a linchpin variable sampler, is reversible and that its convergence rate is the same as that of the subchain where the Metropolis-Hastings step is being performed. We use this to construct uniformly ergodic linchpin variable samplers for two versions of a Bayesian linear mixed model and a Bayesian probit regression model.
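The two-step construction described above (an MH step on the marginal of the linchpin variable, then an exact draw from the conditional) can be sketched as follows; the toy bivariate normal target and all function names are illustrative assumptions, not the paper's mixed-model or probit applications.

```python
import numpy as np

def linchpin_sampler(n, log_marg, draw_cond, y0=0.0, prop_sd=1.0, rng=None):
    """Linchpin variable sampler (sketch): a random-walk Metropolis step on
    the marginal of the linchpin variable y, followed by an exact draw from
    the conditional x | y."""
    if rng is None:
        rng = np.random.default_rng()
    y = y0
    out = np.empty((n, 2))
    for i in range(n):
        prop = y + prop_sd * rng.standard_normal()
        if np.log(rng.uniform()) < log_marg(prop) - log_marg(y):
            y = prop                        # MH step on the marginal of y
        out[i] = (draw_cond(y, rng), y)     # perfect draw from x | y
    return out

# Toy target: (x, y) bivariate normal with correlation rho, so that
# y ~ N(0, 1) and x | y ~ N(rho * y, 1 - rho^2).
rho = 0.8
samples = linchpin_sampler(
    20_000,
    log_marg=lambda y: -0.5 * y**2,
    draw_cond=lambda y, r: rho * y + np.sqrt(1 - rho**2) * r.standard_normal(),
    rng=np.random.default_rng(2),
)
```

Because the x-draw is exact, the chain's convergence rate is governed entirely by the MH subchain on y, which is the reversibility result the abstract states.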
On the Geometric Ergodicity of Two-variable Gibbs Samplers
A Markov chain is geometrically ergodic if it converges to its invariant distribution at a geometric rate in total variation norm. We study geometric ergodicity of deterministic and random scan versions of the two-variable Gibbs sampler. We give a sufficient condition which simultaneously guarantees both versions are geometrically ergodic. We also develop a method for simultaneously establishing that both versions are subgeometrically ergodic. These general results allow us to characterize the convergence rate of two-variable Gibbs samplers in a particular family of discrete bivariate distributions.
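For concreteness, here is a minimal sketch of the two scan orders this entry studies, on a toy bivariate normal target; the target and the `gibbs_bvn` name are assumptions for illustration, not the paper's discrete family.

```python
import numpy as np

def gibbs_bvn(n, rho, random_scan=False, rng=None):
    """Deterministic- and random-scan two-variable Gibbs samplers (sketch)
    for a bivariate normal target with correlation rho; both full
    conditionals are N(rho * other, 1 - rho^2)."""
    if rng is None:
        rng = np.random.default_rng()
    s = np.sqrt(1 - rho**2)
    x = y = 0.0
    out = np.empty((n, 2))
    for i in range(n):
        if random_scan:
            # update one coordinate chosen uniformly at random
            if rng.uniform() < 0.5:
                x = rho * y + s * rng.standard_normal()
            else:
                y = rho * x + s * rng.standard_normal()
        else:
            # deterministic scan: update x, then y, every iteration
            x = rho * y + s * rng.standard_normal()
            y = rho * x + s * rng.standard_normal()
        out[i] = (x, y)
    return out

samples = gibbs_bvn(30_000, rho=0.5, random_scan=True, rng=np.random.default_rng(3))
```

Both scan orders leave the same target invariant; they differ in how each iteration spends its updates, which is why conditions guaranteeing geometric ergodicity for both simultaneously are useful.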
On Convergence Properties of the Monte Carlo EM Algorithm
The EM algorithm (Dempster, Laird and Rubin, 1977) is a popular method for computing maximum likelihood estimates (MLEs) in problems with missing data. Each iteration of the algorithm formally consists of an E-step: evaluate the expected complete-data log-likelihood given the observed data, with expectation taken at the current parameter estimate; and an M-step: maximize the resulting expression to find the updated estimate. Conditions that guarantee convergence of the EM sequence to a unique MLE were found by Boyles (1983) and Wu (1983). In complicated models for high-dimensional data, it is common to encounter an intractable integral in the E-step. The Monte Carlo EM algorithm of Wei and Tanner (1990) works around this difficulty by maximizing instead a Monte Carlo approximation to the appropriate conditional expectation. Convergence properties of Monte Carlo EM have been studied, most notably, by Chan and Ledolter (1995) and Fort and Moulines (2003). The goal of this review paper is to provide an accessible but rigorous introduction to the convergence properties of EM and Monte Carlo EM. No previous knowledge of the EM algorithm is assumed. We demonstrate the implementation of EM and Monte Carlo EM in two simple but realistic examples. We show that if the EM algorithm converges it converges to a stationary point of the likelihood, and that the rate of convergence is linear at best. For Monte Carlo EM we present a readable proof of the main result of Chan and Ledolter (1995), and state without proof the conclusions of Fort and Moulines (2003). An important practical implication of Fort and Moulines’s (2003) result relates to the determination of Monte Carlo sample sizes in MCEM; we provide a brief review of the literature (Booth and Hobert, 1999; Caffo, Jank and Jones, 2005) on that problem.
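The E-step/M-step recursion, with the E-step replaced by a Monte Carlo average as in MCEM, can be sketched on a simple right-censored exponential example; the model, sample sizes, and function names below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def mcem_exponential(t, censored, theta=1.0, n_mc=2000, iters=40, rng=None):
    """Monte Carlo EM (sketch) for i.i.d. Exp(theta) lifetimes observed with
    right censoring. Monte Carlo E-step: approximate E[t_i | t_i > c_i] by
    simulation, using memorylessness (t_i | t_i > c_i is c_i + Exp(theta)).
    M-step: the complete-data MLE of the rate is n / sum of expected lifetimes."""
    if rng is None:
        rng = np.random.default_rng()
    for _ in range(iters):
        # Monte Carlo E-step: expected lifetimes of the censored units
        extra = rng.exponential(1 / theta, (n_mc, censored.sum())).mean(axis=0)
        total = t[~censored].sum() + (t[censored] + extra).sum()
        theta = len(t) / total             # M-step in closed form
    return theta

# Simulated data: rate-2 lifetimes, right-censored at c = 0.5.
rng = np.random.default_rng(4)
life = rng.exponential(1 / 2.0, 500)
cens = life > 0.5
obs = np.minimum(life, 0.5)
theta_hat = mcem_exponential(obs, cens, rng=np.random.default_rng(5))
```

Here the exact E-step is available in closed form, so the Monte Carlo average is purely illustrative; the fixed point of the recursion coincides with the censored-data MLE up to Monte Carlo noise, which is the kind of approximation error the sample-size results of Fort and Moulines (2003) control.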
Convergence of Conditional Metropolis–Hastings Samplers (© Applied Probability Trust 2014)
We consider Markov chain Monte Carlo algorithms which combine Gibbs updates with Metropolis–Hastings updates, resulting in a conditional Metropolis–Hastings sampler (CMH sampler). We develop conditions under which the CMH sampler will be geometrically or uniformly ergodic. We illustrate our results by analysing a CMH sampler used for drawing Bayesian inferences about the entire sample path of a diffusion process, based only upon discrete observations.
Keywords: Markov chain Monte Carlo algorithm; independence sampler; Gibbs sampler; geometric ergodicity; convergence rate
Geometric Ergodicity of Random Scan Gibbs Samplers for Hierarchical One-Way Random Effects Models
, 2014
We consider two Bayesian hierarchical one-way random effects models and establish geometric ergodicity of the corresponding random scan Gibbs samplers. Geometric ergodicity, along with a moment condition, guarantees a central limit theorem for sample means and quantiles. In addition, it ensures the consistency of various methods for estimating the variance in the asymptotic normal distribution. Thus our results make available the tools for practitioners to be as confident in inferences based on the observations from the random scan Gibbs sampler as they would be with inferences based on random samples from the posterior. Research supported by the National Science Foundation and the National Institutes of Health.