Markov Chains for Exploring Posterior Distributions. Ann Stat (1994)

by L Tierney

Results 1 - 10 of 1,136

Non-Uniform Random Variate Generation

by Luc Devroye, 1986
"... This is a survey of the main methods in non-uniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorith ..."
Abstract - Cited by 1021 (26 self) - Add to MetaCart
This is a survey of the main methods in non-uniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
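
To make the rejection paradigm reviewed in this survey concrete, here is a minimal Python sketch (the Beta(2,2) target, the uniform proposal, and the envelope constant are illustrative choices, not drawn from the book):

    import random

    def rejection_sample_beta22():
        """Draw one value from Beta(2,2), density f(x) = 6x(1-x) on [0,1],
        by rejection from a Uniform(0,1) proposal with envelope constant M = 1.5."""
        M = 1.5  # max of f on [0,1], attained at x = 0.5
        while True:
            x = random.random()           # candidate from the proposal g(x) = 1
            u = random.random()           # uniform for the accept/reject test
            if u <= 6 * x * (1 - x) / M:  # accept with probability f(x) / (M * g(x))
                return x

    samples = [rejection_sample_beta22() for _ in range(10_000)]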

Sequential Monte Carlo Methods for Dynamic Systems

by Jun S. Liu, Rong Chen - Journal of the American Statistical Association, 1998
"... A general framework for using Monte Carlo methods in dynamic systems is provided and its wide applications indicated. Under this framework, several currently available techniques are studied and generalized to accommodate more complex features. All of these methods are partial combinations of three ..."
Abstract - Cited by 664 (13 self) - Add to MetaCart
A general framework for using Monte Carlo methods in dynamic systems is provided and its wide applications indicated. Under this framework, several currently available techniques are studied and generalized to accommodate more complex features. All of these methods are partial combinations of three ingredients: importance sampling and resampling, rejection sampling, and Markov chain iterations. We deliver a guideline on how they should be used and under what circumstance each method is most suitable. Through the analysis of differences and connections, we consolidate these methods into a generic algorithm by combining desirable features. In addition, we propose a general use of Rao-Blackwellization to improve performances. Examples from econometrics and engineering are presented to demonstrate the importance of Rao-Blackwellization and to compare different Monte Carlo procedures. Keywords: Blind deconvolution; Bootstrap filter; Gibbs sampling; Hidden Markov model; Kalman filter; Markov...
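
A hedged sketch of the three ingredients named above (importance sampling and resampling with Markov-style propagation), written as a bootstrap particle filter for a toy linear-Gaussian state-space model; the model, parameter values, and function name are illustrative, not the authors' generic algorithm:

    import numpy as np

    def bootstrap_filter(y, n_particles=1000, phi=0.9, sigma_x=1.0, sigma_y=1.0):
        """Sequential importance sampling with resampling for the toy model
        x_t = phi * x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2).
        Returns the filtered means E[x_t | y_1..t]."""
        rng = np.random.default_rng(0)
        particles = rng.normal(0.0, sigma_x, n_particles)
        means = []
        for yt in y:
            # Propagate particles through the state equation (prior proposal).
            particles = phi * particles + rng.normal(0.0, sigma_x, n_particles)
            # Importance weights: likelihood of the new observation.
            w = np.exp(-0.5 * ((yt - particles) / sigma_y) ** 2)
            w /= w.sum()
            means.append(np.sum(w * particles))
            # Multinomial resampling to fight weight degeneracy.
            particles = rng.choice(particles, size=n_particles, p=w)
        return np.array(means)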

On Bayesian analysis of mixtures with an unknown number of components

by Sylvia Richardson, Peter J. Green - Journal of the Royal Statistical Society, Series B, 1997
"... ..."
Abstract - Cited by 647 (24 self) - Add to MetaCart
Abstract not found

Bayesian Analysis of Stochastic Volatility Models

by Eric Jacquier, Nicholas G. Polson, Peter E. Rossi, 1994
"... this article is to develop new methods for inference and prediction in a simple class of stochastic volatility models in which logarithm of conditional volatility follows an autoregressive (AR) times series model. Unlike the autoregressive conditional heteroscedasticity (ARCH) and gener- alized ARCH ..."
Abstract - Cited by 601 (26 self) - Add to MetaCart
... this article is to develop new methods for inference and prediction in a simple class of stochastic volatility models in which the logarithm of conditional volatility follows an autoregressive (AR) time series model. Unlike the autoregressive conditional heteroscedasticity (ARCH) and generalized ARCH (GARCH) models [see Bollerslev, Chou, and Kroner (1992) for a survey of ARCH modeling], both the mean and log-volatility equations have separate error terms. The ease of evaluating the ARCH likelihood function and the ability of the ARCH specification to accommodate the time-varying volatility found in many economic time series have fostered an explosion in the use of ARCH models. On the other hand, the likelihood function for stochastic volatility models is difficult to evaluate, and hence these models have had limited empirical application.
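
For orientation, a short simulation of the kind of model described here, i.e. an AR(1) log-volatility with an error term separate from the mean equation (the parameter values and function name are illustrative, not taken from the article):

    import numpy as np

    def simulate_sv(T=1000, mu=-1.0, phi=0.95, sigma_v=0.2, seed=0):
        """Simulate a basic stochastic volatility model:
            h_t = mu + phi * (h_{t-1} - mu) + sigma_v * v_t,  v_t ~ N(0, 1)
            y_t = exp(h_t / 2) * e_t,                         e_t ~ N(0, 1)
        so the mean and log-volatility equations carry separate errors."""
        rng = np.random.default_rng(seed)
        h = np.empty(T)
        h[0] = mu + sigma_v / np.sqrt(1 - phi**2) * rng.standard_normal()
        for t in range(1, T):
            h[t] = mu + phi * (h[t - 1] - mu) + sigma_v * rng.standard_normal()
        y = np.exp(h / 2) * rng.standard_normal(T)
        return y, h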

Stochastic volatility: likelihood inference and comparison with ARCH models

by Sangjoon Kim, Neil Shephard, Siddhartha Chib - Review of Economic Studies, 1998
"... In this paper, Markov chain Monte Carlo sampling methods are exploited to provide a unified, practical likelihood-based framework for the analysis of stochastic volatility models. A highly effective method is developed that samples all the unobserved volatilities at once using an approximating offse ..."
Abstract - Cited by 592 (40 self) - Add to MetaCart
In this paper, Markov chain Monte Carlo sampling methods are exploited to provide a unified, practical likelihood-based framework for the analysis of stochastic volatility models. A highly effective method is developed that samples all the unobserved volatilities at once using an approximating offset mixture model, followed by an importance reweighting procedure. This approach is compared with several alternative methods using real data. The paper also develops simulation-based methods for filtering, likelihood evaluation and model failure diagnostics. The issue of model choice using non-nested likelihood ratios and Bayes factors is also investigated. These methods are used to compare the fit of stochastic volatility and GARCH models. All the procedures are illustrated in detail.
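
The "offset mixture" device referred to above starts from a log-squared transformation of the returns; a minimal sketch of that first step (the offset value and function name are illustrative assumptions, and the normal-mixture approximation of the log-chi-square error is not reproduced here):

    import numpy as np

    def log_squared_transform(y, offset=1e-4):
        """Linearize the SV observation equation: log(y_t^2 + c) ~= h_t + z_t,
        where z_t = log(e_t^2) is log-chi-square(1) and is approximated by a
        finite mixture of normals in the offset-mixture approach. The small
        offset c only guards against y_t = 0."""
        y = np.asarray(y, dtype=float)
        return np.log(y**2 + offset)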

Marginal likelihood from the Gibbs output

by Siddhartha Chib - Journal of the American Statistical Association, 1995
"... ..."
Abstract - Cited by 572 (39 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...method (Scott 1992). Also, under regularity conditions, the estimate is simulation consistent; that is, $\hat{\pi}(\theta^* \mid y) \to \pi(\theta^* \mid y)$ as G becomes large, almost surely, as a consequence of the ergodic theorem (Tierney 1994). Substituting the estimate of the posterior ordinate into (6) gives the following estimate of the marginal likelihood: This simple expression can be u...
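
The substitution described in this excerpt rests on the basic marginal likelihood identity (a rearrangement of Bayes' theorem); a small sketch, with the posterior ordinate assumed to be estimated from the Gibbs output (function and argument names are illustrative):

    def log_marginal_likelihood(log_lik_at_theta_star, log_prior_at_theta_star,
                                log_posterior_ordinate_estimate):
        """Basic marginal likelihood identity evaluated at a point theta*:
            log m(y) = log f(y | theta*) + log pi(theta*) - log pi(theta* | y),
        where the posterior ordinate pi(theta* | y) is estimated from Gibbs
        output (e.g. by averaging full-conditional densities over the draws)."""
        return (log_lik_at_theta_star + log_prior_at_theta_star
                - log_posterior_ordinate_estimate)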

An Introduction to MCMC for Machine Learning

by Christophe Andrieu, et al., 2003
"... ..."
Abstract - Cited by 382 (5 self) - Add to MetaCart
Abstract not found

Explaining the Gibbs sampler.

by George Casella, Edward I. George - The American Statistician, 1992
"... ..."
Abstract - Cited by 381 (3 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...hough the resulting data will be dependent, it will still be the case that the empirical distribution of X' converges to f(x). Note that from this point of view one can see that the "efficiency of the Gibbs sampler" is determined by the rate of this convergence. Intuitively, this convergence rate will be fastest when X' moves rapidly through the sample space, a characteristic that may be thought of as mixing. Variations on these and other approaches to exploiting the Gibbs sequence have been suggested by Gelman and Rubin (1991), Geyer (in press), Müller (1991), Ritter and Tanner (1990), and Tierney (1991). 6. DISCUSSION Both the Gibbs sampler and the Data Augmentation Algorithm have found widespread use in practical problems and can be used by either the Bayesian or classical statistician. For the Bayesian, the Gibbs sampler is mainly used to generate posterior distributions, whereas for the classical statistician a major use is for calculation of the likelihood function and characteristics of likelihood estimators. Although the theory behind Gibbs sampling is taken from Markov chain theory, there is also a connection to "incomplete data" theory, such as that which forms the basis of the EM a...
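
A minimal two-block Gibbs sampler, in the spirit of the excerpt's point about mixing (the bivariate normal target and the correlation value are a standard textbook illustration, not an example from the article):

    import numpy as np

    def gibbs_bivariate_normal(n_iter=5000, rho=0.8, seed=0):
        """Gibbs sampler for a standard bivariate normal with correlation rho.
        The full conditionals are X | Y = y ~ N(rho*y, 1 - rho^2) and
        Y | X = x ~ N(rho*x, 1 - rho^2). Larger |rho| slows mixing: the chain
        moves through the sample space more slowly."""
        rng = np.random.default_rng(seed)
        x, y = 0.0, 0.0
        draws = np.empty((n_iter, 2))
        for i in range(n_iter):
            x = rng.normal(rho * y, np.sqrt(1 - rho**2))
            y = rng.normal(rho * x, np.sqrt(1 - rho**2))
            draws[i] = (x, y)
        return draws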

Markov chain Monte Carlo convergence diagnostics

by Kathryn Cowles, Bradley P. Carlin - Journal of the American Statistical Association, 1996
"... A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise ..."
Abstract - Cited by 371 (6 self) - Add to MetaCart
A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution.
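
One of the parallel-chain diagnostics reviewed in the paper, the Gelman and Rubin potential scale reduction factor, in a minimal form (this basic between/within-chain variance version is a common simplification; the function name is illustrative):

    import numpy as np

    def gelman_rubin_rhat(chains):
        """Potential scale reduction factor from m parallel chains of n draws
        each (array of shape m x n, one scalar quantity). Values well above 1
        suggest the chains have not yet converged to a common distribution."""
        chains = np.asarray(chains, dtype=float)
        m, n = chains.shape
        W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
        B = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
        var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
        return np.sqrt(var_hat / W)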

Citation Context

... so far separated that the state space would be effectively disconnected. We used this example to illustrate non-conjugate full conditionals as well as bimodality. A random-walk Metropolis algorithm (Tierney, 1994) was used to generate from each unnormalized full conditional. Nine parallel chains were run with starting values chosen at equal intervals from above the upper mode to below the lower mode. Plots fo...
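
For reference, a random-walk Metropolis step of the kind mentioned in this excerpt, targeting an unnormalized density through its log (the Gaussian proposal, step size, and function names are illustrative assumptions):

    import numpy as np

    def random_walk_metropolis(log_target, x0, n_iter=10_000, step=1.0, seed=0):
        """Random-walk Metropolis for a scalar, possibly unnormalized target.
        Proposes x' = x + step * N(0, 1) and accepts with probability
        min(1, target(x') / target(x)), computed on the log scale."""
        rng = np.random.default_rng(seed)
        x, logp = x0, log_target(x0)
        draws = np.empty(n_iter)
        for i in range(n_iter):
            prop = x + step * rng.standard_normal()
            logp_prop = log_target(prop)
            if np.log(rng.random()) < logp_prop - logp:
                x, logp = prop, logp_prop
            draws[i] = x
        return draws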

Using simulation methods for Bayesian econometric models: Inference, development and communication

by John Geweke - Econometric Reviews, 1999
"... This paper surveys the fundamental principles of subjective Bayesian inference in econometrics and the implementation of those principles using posterior simulation methods. The emphasis is on the combination of models and the development of predictive distributions. Moving beyond conditioning on a ..."
Abstract - Cited by 356 (16 self) - Add to MetaCart
This paper surveys the fundamental principles of subjective Bayesian inference in econometrics and the implementation of those principles using posterior simulation methods. The emphasis is on the combination of models and the development of predictive distributions. Moving beyond conditioning on a fixed number of completely specified models, the paper introduces subjective Bayesian tools for formal comparison of these models with as yet incompletely specified models. The paper then shows how posterior simulators can facilitate communication between investigators (for example, econometricians) on the one hand and remote clients (for example, decision makers) on the other, enabling clients to vary the prior distributions and functions of interest employed by investigators. A theme of the paper is the practicality of subjective Bayesian methods. To this end, the paper describes publicly available software for Bayesian inference, model development, and communication and provides illustrations using two simple econometric models. *This paper was originally prepared for the Australasian meetings of the Econometric Society in Melbourne, Australia,
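
The investigator/client communication described here is commonly implemented by importance reweighting of the investigator's posterior draws when the client supplies a different prior; a hedged sketch under that reading (function and argument names are illustrative):

    import numpy as np

    def reweight_for_new_prior(draws, log_prior_old, log_prior_new, func):
        """Reuse posterior draws produced under one prior to estimate a posterior
        expectation under another prior. Each draw theta gets weight proportional
        to prior_new(theta) / prior_old(theta); the self-normalized weighted
        average of func(theta) estimates E[func | y] under the new prior."""
        log_w = np.array([log_prior_new(t) - log_prior_old(t) for t in draws])
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        vals = np.array([func(t) for t in draws])
        return float(np.sum(w * vals))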

Citation Context

...m of the algorithm is due to Hastings (1970). The Metropolis et al. (1953) form takes $q(\theta^{(m)}, \theta^*) = q(\theta^*, \theta^{(m)})$. A simple variant that is often useful is the independence chain (Tierney, 1994), whereby $q(\theta^{(m)}, \theta^*) = k(\theta^*)$. Then
$$\alpha(\theta^{(m)}, \theta^*) = \min\left\{ \frac{p(\theta^* \mid Y_T, A)/k(\theta^*)}{p(\theta^{(m)} \mid Y_T, A)/k(\theta^{(m)})},\ 1 \right\} = \min\left\{ \frac{w(\theta^*)}{w(\theta^{(m)})},\ 1 \right\},$$
where $w(\theta) = p$...
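
A sketch of the independence chain acceptance rule quoted above, written for log densities (the function and argument names are illustrative; log_p stands for the possibly unnormalized posterior and log_k for the fixed proposal density k):

    import numpy as np

    def independence_chain_mh(log_p, log_k, sample_k, x0, n_iter=10_000, seed=0):
        """Independence-chain Metropolis-Hastings: candidates are drawn from a
        fixed density k regardless of the current state, and a move is accepted
        with probability min{1, w(cand) / w(current)}, where w = p / k, matching
        the acceptance ratio in the excerpt (computed here in logs)."""
        rng = np.random.default_rng(seed)
        x = x0
        log_w = log_p(x) - log_k(x)
        draws = np.empty(n_iter)
        for i in range(n_iter):
            cand = sample_k(rng)                 # proposal does not depend on x
            log_w_cand = log_p(cand) - log_k(cand)
            if np.log(rng.random()) < log_w_cand - log_w:
                x, log_w = cand, log_w_cand
            draws[i] = x
        return draws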
