Results 1 – 6 of 6
Analysis of the Gibbs sampler for a model related to James-Stein estimators, 1995
Abstract

Cited by 39 (16 self)
In this paper we investigate the convergence properties of the Gibbs sampler as applied to a particular hierarchical Bayes model. The model is related to James-Stein estimators (James and Stein, 1961; Efron and Morris, 1973, 1975; Morris, 1983). Briefly, James-Stein estimators may be defined as the mean of a certain empirical Bayes posterior distribution (as discussed in the next section). We consider the problem of using the Gibbs sampler as a way of sampling from a richer posterior distribution, as suggested by Jun Liu (personal communication). Such a technique would eliminate the need to estimate a certain parameter empirically and to provide a "guess" at another one, and would give additional information about the distribution of the parameters involved. We consider, in particular, the convergence properties of this Gibbs sampler. For a certain range of prior distributions, we establish (Section 3) rigorous, numerical, reasonable rates of convergence. The bounds are obtained using the methods of Rosenthal (1995b). We thus rigorously bound the running time for this Gibbs sampler to converge to the posterior distribution, within a specified accuracy (as measured by total variation distance). We provide a general formula for this bound, which is of reasonable size, in terms of the prior distribution and the data. This Gibbs sampler is perhaps the most complicated example to date for which reasonable quantitative convergence rates have been obtained. We apply our bounds to the numerical baseball data of Efron and Morris (1975) and Morris (1983), based on batting averages of baseball players, and show that approximately 140 iterations are sufficient to achieve convergence in this case. For a different range of prior distributions, we use the Submartingale Convergence Theo...
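To make the setting concrete, the kind of sampler the abstract describes can be sketched for a standard hierarchical normal-means model. This is an illustrative assumption, not the paper's exact construction: observations y_i ~ N(theta_i, sigma2) with sigma2 known, theta_i ~ N(mu, A), a flat prior on mu, and an InverseGamma(a, b) prior on A (the hyperparameters `a`, `b` and all function names here are hypothetical).

```python
import numpy as np

def gibbs_james_stein(y, sigma2, a=2.0, b=1.0, n_iter=2000, seed=0):
    """Gibbs sampler for a hierarchical normal-means model (sketch).

    Assumed model (a James-Stein-style hierarchy, not the paper's exact one):
        y_i | theta_i ~ N(theta_i, sigma2)   with sigma2 known
        theta_i | mu, A ~ N(mu, A)
        mu ~ flat prior,  A ~ InverseGamma(a, b)
    Returns post-burn-in draws of theta (one row per iteration).
    """
    rng = np.random.default_rng(seed)
    k = len(y)
    theta = y.copy()
    mu, A = y.mean(), y.var() + 1e-6
    draws = np.empty((n_iter, k))
    for t in range(n_iter):
        # theta_i | rest: precision-weighted average of y_i and mu
        prec = 1.0 / sigma2 + 1.0 / A
        mean = (y / sigma2 + mu / A) / prec
        theta = rng.normal(mean, np.sqrt(1.0 / prec))
        # mu | rest: normal centered at the average of the theta's
        mu = rng.normal(theta.mean(), np.sqrt(A / k))
        # A | rest: inverse-gamma, sampled as 1 / Gamma draw
        shape = a + k / 2.0
        rate = b + 0.5 * np.sum((theta - mu) ** 2)
        A = 1.0 / rng.gamma(shape, 1.0 / rate)
        draws[t] = theta
    return draws[n_iter // 2:]  # discard first half as burn-in
```

Run on batting-average-like data, the posterior means of the theta_i shrink the raw averages toward the grand mean, which is the James-Stein effect the abstract alludes to; the paper's contribution is bounding how many iterations such a chain needs in total variation distance.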
Theoretical rates of convergence for Markov chain Monte Carlo. In Proceedings of Interface '94, 1994
Abstract

Cited by 4 (0 self)
We present a general method for proving rigorous, a priori bounds on the number of iterations required to achieve convergence of Markov chain Monte Carlo. We describe bounds for specific models of the Gibbs sampler, which have been obtained from the general method. We discuss possibilities for obtaining bounds more generally.

1. Introduction. Markov chain Monte Carlo techniques, including the Metropolis-Hastings algorithm (Metropolis et al., 1953; Hastings, 1970), data augmentation (Tanner and Wong, 1986), and the Gibbs sampler (Geman and Geman, 1984; Gelfand and Smith, 1990) have become very popular in recent years as a way of generating a sample from complicated probability distributions (such as posterior distributions in Bayesian inference problems). A fundamental issue regarding such techniques is their convergence properties, specifically whether or not the algorithm will converge to the correct distribution, and if so how quickly. Many general convergence results (e.g. Tierne...
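The a priori bounds this abstract describes are stated in total variation distance. For a finite-state chain the quantity being bounded can be computed exactly, which makes the idea easy to see in a toy example (this two-state chain and the function name are illustrative assumptions, not the paper's method):

```python
import numpy as np

def tv_distance_after_n(P, pi0, n, pi_stat):
    """Total variation distance between the n-step distribution of a
    finite Markov chain (transition matrix P, initial distribution pi0)
    and a target distribution pi_stat."""
    dist_n = pi0 @ np.linalg.matrix_power(P, n)
    return 0.5 * np.abs(dist_n - pi_stat).sum()

# Two-state chain: second eigenvalue is 1 - 0.1 - 0.2 = 0.7, so the
# distance to stationarity decays geometrically like 0.7**n.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi_stat = np.array([2 / 3, 1 / 3])   # solves pi P = pi
pi0 = np.array([1.0, 0.0])           # start deterministically in state 0
```

Here the exact distance is (1/3) * 0.7**n, so ten steps suffice to get within 0.01 of stationarity while nine do not; the papers above derive analogous (if far harder) iteration counts for Gibbs samplers on continuous state spaces.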
An Introduction to Markov Chain Monte Carlo, 2005
"... Theoretical rates of convergence for ..."