Results 1–10 of 430
Slice sampling
Annals of Statistics, 2000
"... Abstract. Markov chain sampling methods that automatically adapt to characteristics of the distribution being sampled can be constructed by exploiting the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. A Markov chain th ..."
Abstract

Cited by 305 (5 self)
 Add to MetaCart
(Show Context)
Abstract. Markov chain sampling methods that automatically adapt to characteristics of the distribution being sampled can be constructed by exploiting the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal ‘slice’ defined by the current vertical position, or more generally, with some update that leaves the uniform distribution over this slice invariant. Variations on such ‘slice sampling’ methods are easily implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and more efficient than simple Metropolis updates, due to the ability of slice sampling to adaptively choose the magnitude of changes made. It is therefore attractive for routine and automated use. Slice sampling methods that update all variables simultaneously are also possible. These methods can adaptively choose the magnitudes of changes made to each variable, based on the local properties of the density function. More ambitiously, such methods could potentially allow the sampling to adapt to dependencies between variables by constructing local quadratic approximations. Another approach is to improve sampling efficiency by suppressing random walks. This can be done using ‘overrelaxed’ versions of univariate slice sampling procedures, or by using ‘reflective’ multivariate slice sampling methods, which bounce off the edges of the slice.
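The univariate procedure described in this abstract is short to code. A minimal Python sketch of a stepping-out-and-shrinkage slice sampler follows; the standard-normal target in the test and the width `w` are illustrative assumptions, not details taken from the paper:

```python
import math
import random

def slice_sample(log_f, x0, w=1.0, n=1000):
    """Univariate slice sampler: a vertical uniform draw under the density,
    then uniform sampling from the horizontal slice, located by stepping out
    and sampled by shrinkage. log_f is the log-density up to a constant."""
    x, out = x0, []
    for _ in range(n):
        # Vertical step: u ~ Uniform(0, f(x)), done on the log scale.
        log_u = log_f(x) + math.log(1.0 - random.random())
        # Stepping out: grow [lo, hi] by w until it brackets the slice.
        lo = x - w * random.random()
        hi = lo + w
        while log_f(lo) > log_u:
            lo -= w
        while log_f(hi) > log_u:
            hi += w
        # Shrinkage: draw uniformly; on rejection, shrink toward x.
        while True:
            x1 = lo + (hi - lo) * random.random()
            if log_f(x1) > log_u:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        out.append(x)
    return out
```

Note how the interval grows or shrinks with the local scale of the density, which is the adaptivity the abstract credits for beating fixed-step Metropolis updates.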
Random effects structure for confirmatory hypothesis testing: Keep it maximal.
Journal of Memory and Language, 2013
"... Abstract Linear mixedeffects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, there is currently little understanding of how different random effects structures affect generalizability. Here, we argue that researchers using LMEMs for confirmatory h ..."
Abstract

Cited by 151 (5 self)
 Add to MetaCart
(Show Context)
Abstract. Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, there is currently little understanding of how different random effects structures affect generalizability. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the standards that have been in place for many decades. Through theoretical arguments and Monte Carlo simulation, we show that LMEMs generalize best when they include the maximal random effects structure justified by the design. In contrast, LMEMs including the maximal random
Church: A language for generative models
In UAI, 2008
"... Formal languages for probabilistic modeling enable reuse, modularity, and descriptive clarity, and can foster generic inference techniques. We introduce Church, a universal language for describing stochastic generative processes. Church is based on the Lisp model of lambda calculus, containing a pu ..."
Abstract

Cited by 141 (27 self)
 Add to MetaCart
(Show Context)
Formal languages for probabilistic modeling enable reuse, modularity, and descriptive clarity, and can foster generic inference techniques. We introduce Church, a universal language for describing stochastic generative processes. Church is based on the Lisp model of lambda calculus, containing a pure Lisp as its deterministic subset. The semantics of Church is defined in terms of evaluation histories and conditional distributions on such histories. Church also includes a novel language construct, the stochastic memoizer, which enables simple description of many complex nonparametric models. We illustrate language features through several examples, including: a generalized Bayes net in which parameters cluster over trials, infinite PCFGs, planning by inference, and various nonparametric clustering models. Finally, we show how to implement query on any Church program, exactly and approximately, using Monte Carlo techniques.
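The stochastic memoizer has a compact analogue outside Lisp: memoizing a stochastic function makes its first random draw for each argument persistent across calls. A minimal Python sketch (the `eye_colour` function and its values are hypothetical illustrations, not from the paper):

```python
import random

def mem(f):
    """Sketch of a stochastic memoizer: the first call with a given
    argument samples a value; later calls with that argument reuse it."""
    cache = {}
    def wrapped(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapped

# Hypothetical usage: each person gets one persistent random eye colour,
# so the random choice behaves like a latent per-individual attribute.
eye_colour = mem(lambda person: random.choice(["blue", "green", "brown"]))
```

Repeated queries for the same person now return the same sampled value, which is the kind of persistent random property that makes nonparametric models easy to express.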
Variational message passing
Journal of Machine Learning Research, 2005
"... This paper presents Variational Message Passing (VMP), a general purpose algorithm for applying variational inference to a Bayesian Network. Like belief propagation, Variational Message Passing proceeds by passing messages between nodes in the graph and updating posterior beliefs using local operati ..."
Abstract

Cited by 134 (10 self)
 Add to MetaCart
(Show Context)
This paper presents Variational Message Passing (VMP), a general-purpose algorithm for applying variational inference to a Bayesian network. Like belief propagation, Variational Message Passing proceeds by passing messages between nodes in the graph and updating posterior beliefs using local operations at each node. Each such update increases a lower bound on the log evidence (unless already at a local maximum). In contrast to belief propagation, VMP can be applied to a very general class of conjugate-exponential models because it uses a factorised variational approximation. Furthermore, by introducing additional variational parameters, VMP can be applied to models containing non-conjugate distributions. The VMP framework also allows the lower bound to be evaluated, and this can be used both for model comparison and for detection of convergence. Variational Message Passing has been implemented in the form of a general purpose inference engine called VIBES (‘Variational Inference for BayEsian networkS’) which allows models to be specified graphically and then solved variationally without recourse to coding.
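For intuition about these local, bound-increasing updates, here is a hand-rolled mean-field sketch for the simplest conjugate-exponential example: a Gaussian with unknown mean and precision under a Normal-Gamma prior. This is an illustration of the update pattern, not the VIBES engine, and all parameter names are assumptions:

```python
import random

def vb_normal_gamma(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, iters=50):
    """Mean-field VB for x_i ~ N(mu, 1/tau) with mu ~ N(mu0, 1/(lam0*tau))
    and tau ~ Gamma(a0, b0). q(mu, tau) = q(mu) q(tau); alternating the two
    coordinate updates below monotonically raises the evidence lower bound."""
    n = len(x)
    xbar = sum(x) / n
    sx2 = sum(v * v for v in x)
    e_tau = a0 / b0  # initial E_q[tau]
    for _ in range(iters):
        # Update q(mu) = N(mu_n, 1/lam_n), using the current E[tau].
        mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
        lam_n = (lam0 + n) * e_tau
        e_mu, e_mu2 = mu_n, mu_n * mu_n + 1.0 / lam_n
        # Update q(tau) = Gamma(a_n, b_n), using the moments of q(mu).
        a_n = a0 + 0.5 * (n + 1)
        b_n = b0 + 0.5 * (sx2 - 2.0 * e_mu * n * xbar + n * e_mu2
                          + lam0 * (e_mu2 - 2.0 * e_mu * mu0 + mu0 * mu0))
        e_tau = a_n / b_n
    return mu_n, lam_n, a_n, b_n
```

Each update consumes only expectations of the neighbouring factor, which is the "message" being passed; VMP generalises exactly this pattern to arbitrary conjugate-exponential graphs.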
BUGS for a Bayesian Analysis of Stochastic Volatility Models
2000
"... This paper reviews the general Bayesian approach to parameter estimation in stochastic volatility models with posterior computations performed by Gibbs sampling. The main purpose is to illustrate the ease with which the Bayesian stochastic volatility model can now be studied routinely via BUGS (Baye ..."
Abstract

Cited by 59 (17 self)
 Add to MetaCart
This paper reviews the general Bayesian approach to parameter estimation in stochastic volatility models with posterior computations performed by Gibbs sampling. The main purpose is to illustrate the ease with which the Bayesian stochastic volatility model can now be studied routinely via BUGS (Bayesian Inference Using Gibbs Sampling), a recently developed, user-friendly, and freely available software package. It is an ideal software tool for the exploratory phase of model building, as any modifications of a model, including changes of priors and sampling error distributions, are readily realized with only minor changes of the code. However, due to the single-move Gibbs sampler, convergence can be slow. BUGS automates the calculation of the full conditional posterior distributions using a model representation by directed acyclic graphs. It contains an expert system for choosing an effective sampling method for each full conditional. Furthermore, software for convergence diagnostics and statistical summaries is available for the BUGS output.
A survey of model evaluation approaches with a tutorial on hierarchical Bayesian methods.
2008
"... Abstract We review current methods for evaluating models in the cognitive sciences, including theoreticallybased approaches, such as Bayes Factors and MDL measures, simulation approaches, including model mimicry evaluations, and practical approaches, such as validation and generalization measures. ..."
Abstract

Cited by 55 (18 self)
 Add to MetaCart
(Show Context)
Abstract. We review current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes Factors and MDL measures, simulation approaches, including model mimicry evaluations, and practical approaches, such as validation and generalization measures. We argue that, while often useful in specific settings, most of these approaches are limited in their ability to give a general assessment of models. We argue that hierarchical methods generally, and hierarchical Bayesian methods specifically, can provide a more thorough evaluation of models in the cognitive sciences. We present two worked examples of hierarchical Bayesian analyses, to demonstrate how the approach addresses key questions of descriptive adequacy, parameter inference, prediction, and generalization in principled and coherent ways.
Bayesian statistical methods for genetic association studies
2009
"... The usual (frequentist) approach to assessing evidence for a population association between genetic variants and a phenotype of interest is to compute a pvalue for the null hypothesis (H 0 ) of no association. Despite their widespread use, pvalues have a striking and fundamental limitation Bayes ..."
Abstract

Cited by 46 (4 self)
 Add to MetaCart
Abstract. Bayesian statistical methods have recently made great inroads into many areas of science, and this advance is now extending to the assessment of association between genetic variants and disease or other phenotypes. We review these methods, focusing on single-SNP tests in genome-wide association studies. We discuss the advantages of the Bayesian approach over classical (frequentist) approaches in this setting and provide a tutorial on basic analysis steps, including practical guidelines for appropriate prior specification. We demonstrate the use of Bayesian methods for fine mapping in candidate regions, discuss meta-analyses and provide guidance for refereeing manuscripts that contain Bayesian analyses.

The usual (frequentist) approach to assessing evidence for a population association between genetic variants and a phenotype of interest is to compute a p-value for the null hypothesis (H0) of no association. Despite their widespread use, p-values have a striking and fundamental limitation. Bayesian methods provide an alternative approach to assessing associations that alleviates the limitations of p-values at the cost of some additional modelling assumptions. For example, a Bayesian analysis requires explicit assumptions about effect sizes at truly associated SNPs. Because of computational constraints, Bayesian approaches were not widely used until about 15 years ago, since when they have become more prevalent in many areas of science, including genetics. This advance is now extending to genetic association studies, as recent papers have shown practical and theoretical advantages of using Bayesian approaches for the assessment of association. Many genetics researchers are currently unfamiliar with Bayesian methods, and some may be reluctant to adopt them because they fear that editors and reviewers will also be unfamiliar with them. However, we believe that the benefits of Bayesian methods will lead to their widespread use in future genetic association analyses. Bayesian methods compute measures of evidence that can be directly compared among SNPs within and across studies. In addition, they provide a rational and quantitative way to incorporate biological information, and they can allow for a range of possible genetic models in a single analysis. Moreover, Bayesian approaches allow a coherent approach to combining results across studies (meta-analysis), across SNPs in genes and across gene pathways, which will be increasingly important as we move from single-SNP analyses towards more integrative approaches. In this Review, we present a guide for newcomers to understanding and implementing a Bayesian analysis in some of the most common settings. We focus particularly on the additional modelling assumptions
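As a concrete instance of a per-SNP measure of evidence, a Bayes factor has a closed form when the likelihood for the effect size is approximated by a normal distribution and the prior on the effect is also normal. The sketch below follows that general approximate-Bayes-factor style; the prior variance `w = 0.04` is an assumed illustrative value, not a recommendation from the review:

```python
import math

def log_bf10(beta_hat, se, w=0.04):
    """Log Bayes factor for association at one SNP, assuming
    beta_hat ~ N(beta, V) with V = se**2 (the usual frequentist fit)
    and a prior beta ~ N(0, w) under H1. Then
    BF10 = N(beta_hat; 0, V + W) / N(beta_hat; 0, V)."""
    v = se * se
    z2 = (beta_hat / se) ** 2  # squared z-score from the standard test
    return 0.5 * math.log(v / (v + w)) + z2 * w / (2.0 * (v + w))
```

Because only the estimate and its standard error are needed, such Bayes factors can be computed from routine association output and compared directly among SNPs within and across studies, which is the comparability the abstract emphasises.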
Goal Attainment Scaling: Applications, theory, and …
Erlbaum Associates, 1994
"... statistical analysis of behavioral facilitation ..."
Bayesian hypothesis testing for psychologists: A tutorial on the Savage-Dickey method.
Cognitive Psychology, 2010
"... a b s t r a c t In the field of cognitive psychology, the pvalue hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the pvalue provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alter ..."
Abstract

Cited by 34 (13 self)
 Add to MetaCart
(Show Context)
Abstract. In the field of cognitive psychology, the p-value hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the p-value provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alternative and arguably more appropriate measure of evidence is conveyed by a Bayesian hypothesis test, which prefers the model with the highest average likelihood. One of the main problems with this Bayesian hypothesis test, however, is that it often requires relatively sophisticated numerical methods for its computation. Here we draw attention to the Savage-Dickey density ratio method, a method that can be used to compute the result of a Bayesian hypothesis test for nested models and under certain plausible restrictions on the parameter priors. Practical examples demonstrate the method's validity, generality, and flexibility.
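For nested models in which the point null lies inside the support of the alternative's prior, the Savage-Dickey ratio reduces to two density evaluations: the posterior density at the null value divided by the prior density there. A minimal Python sketch for a binomial rate, where both densities are Beta and available in closed form (the Beta(1, 1) prior and the test point 0.5 are illustrative assumptions):

```python
import math

def log_beta_pdf(x, a, b):
    """Log density of Beta(a, b) at x, via log-gamma for stability."""
    return ((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def savage_dickey_bf01(k, n, theta0=0.5, a=1.0, b=1.0):
    """Savage-Dickey Bayes factor for H0: theta = theta0 nested in
    H1: theta ~ Beta(a, b), after observing k successes in n trials.
    BF01 = posterior density at theta0 / prior density at theta0."""
    log_post = log_beta_pdf(theta0, a + k, b + n - k)   # conjugate update
    log_prior = log_beta_pdf(theta0, a, b)
    return math.exp(log_post - log_prior)
```

In this conjugate setting the ratio equals the marginal-likelihood Bayes factor exactly, so no numerical integration is needed; that is the computational shortcut the abstract advertises.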