Results 1–10 of 550
CODA: Convergence diagnosis and output analysis software for Gibbs sampling output.
, 1995
Generalized linear mixed models: a practical guide for ecology and evolution.
 Trends in Ecology and Evolution,
, 2009
Cited by 183 (1 self)
How should ecologists and evolutionary biologists analyze non-normal data that involve random effects? Non-normal data such as counts or proportions often defy classical statistical procedures. Generalized linear mixed models (GLMMs) provide a more flexible approach for analyzing non-normal data when random effects are present. The explosion of research on GLMMs in the last decade has generated considerable uncertainty for practitioners in ecology and evolution. Despite the availability of accurate techniques for estimating GLMM parameters in simple cases, complex GLMMs are challenging to fit, and statistical inference such as hypothesis testing remains difficult. We review the use (and misuse) of GLMMs in ecology and evolution, discuss estimation and inference, and summarize 'best-practice' data analysis procedures for scientists facing this challenge.
Generalized linear mixed models: powerful but challenging tools. Researchers in ecology and evolution (EE) faced with non-normal data often try shortcuts such as transforming data to achieve normality and homogeneity of variance, using non-parametric tests, or relying on the robustness of classical ANOVA to non-normality for balanced designs. Instead of shoehorning their data into classical statistical frameworks, researchers should use statistical approaches that match their data. Generalized linear mixed models (GLMMs) combine the properties of two statistical frameworks that are widely used in EE: linear mixed models (which incorporate random effects) and generalized linear models (which handle non-normal data by using link functions and exponential-family [e.g. normal, Poisson or binomial] distributions). GLMMs are the best tool for analyzing non-normal data that involve random effects: all one has to do, in principle, is specify a distribution, link function and structure of the random effects.
For example, in Box 1, we use a GLMM to quantify the magnitude of the genotype × environment interaction in the response of Arabidopsis to herbivory. To do so, we select a Poisson distribution with a logarithmic link (typical for count data) and specify that the total number of fruits per plant and the responses to fertilization and clipping could vary randomly across populations and across genotypes within a population. However, GLMMs are surprisingly challenging to use, even for statisticians. Although several software packages can handle GLMMs ...
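The recipe in this abstract — choose a distribution and link (here Poisson with a log link for count data) — can be sketched for the fixed-effects half in plain Python. This toy iteratively reweighted least squares (IRLS) fitter handles only the GLM part; the random-effects structure the paper emphasizes requires specialized software (e.g. lme4's glmer in R). The simulated data and coefficient values below are illustrative, not from the paper.

```python
import math
import random

def poisson_glm_irls(x, y, iters=25):
    """Fit log(E[y]) = b0 + b1*x by iteratively reweighted least squares."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # Fisher scoring: weighted normal equations with weights w_i = mu_i.
        s_w = sum(mu)
        s_wx = sum(m * xi for m, xi in zip(mu, x))
        s_wxx = sum(m * xi * xi for m, xi in zip(mu, x))
        r0 = sum(yi - m for yi, m in zip(y, mu))                # score wrt b0
        r1 = sum((yi - m) * xi for yi, m, xi in zip(y, mu, x))  # score wrt b1
        det = s_w * s_wxx - s_wx * s_wx
        b0 += (s_wxx * r0 - s_wx * r1) / det
        b1 += (s_w * r1 - s_wx * r0) / det
    return b0, b1

def rpois(lam):
    # Knuth's multiplicative Poisson sampler; fine for the small rates here.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

# Simulated count data with true intercept 1.0 and slope 0.5.
random.seed(1)
x = [random.uniform(-1.0, 1.0) for _ in range(500)]
y = [rpois(math.exp(1.0 + 0.5 * xi)) for xi in x]
b0, b1 = poisson_glm_irls(x, y)
```

With 500 observations the recovered coefficients land close to the true (1.0, 0.5), which is exactly the "simple cases" regime the abstract describes as well handled; the hard part it highlights is adding crossed or nested random effects to such a model.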
The Time-Varying Volatility of Macroeconomic Fluctuations
, 2006
Cited by 160 (5 self)
In this paper we investigate the sources of the important shifts in the volatility of U.S. macroeconomic variables in the postwar period. To this end, we propose the estimation of DSGE models allowing for time variation in the volatility of the structural innovations. We apply our estimation strategy to a large-scale model of the business cycle and find that investment-specific technology shocks account for most of the sharp decline in volatility of the last two decades.
AWTY (Are We There Yet?): a system for graphical exploration of MCMC convergence in Bayesian phylogenetics
, 2007
Cited by 108 (5 self)
Summary: A key element to a successful Markov chain Monte Carlo (MCMC) inference is the programming and run performance of the Markov chain. However, the explicit use of quality assessments of the MCMC simulations — convergence diagnostics — in phylogenetics is still uncommon. Here we present a simple tool that uses the output from MCMC simulations and visualizes a number of properties of primary interest in a Bayesian phylogenetic analysis, such as convergence rates of posterior split probabilities and branch lengths. Graphical exploration of the output from phylogenetic MCMC simulations gives intuitive and often crucial information on the success and reliability of the analysis. The tool presented here complements convergence diagnostics already available in other software packages primarily designed for other applications of MCMC. Importantly, the common practice of using trace plots of a single parameter or summary statistic, such as the likelihood score of sampled trees, can be misleading for assessing the success of a phylogenetic MCMC simulation.
Decomposable Graphical Gaussian Model Determination
, 1999
Cited by 106 (12 self)
We propose a methodology for Bayesian model determination in decomposable graphical Gaussian models. To achieve this aim we consider a hyper inverse Wishart prior distribution on the concentration matrix for each given graph. To ensure compatibility across models, such prior distributions are obtained by marginalisation from the prior conditional on the complete graph. We explore alternative structures for the hyperparameters of the latter, and their consequences for the model. Model determination is carried out by implementing a reversible jump MCMC sampler. In particular, the dimension-changing move we propose involves adding or dropping an edge from the graph. We characterise the set of moves which preserve the decomposability of the graph, giving a fast algorithm for maintaining the junction tree representation of the graph at each sweep. As state variable, we propose to use the incomplete variance-covariance matrix, containing only the elements for which the correspondi...
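The edge-addition and edge-deletion moves described in this abstract must keep the graph decomposable, i.e. chordal. A naive way to validate a proposed move — far slower than the junction-tree characterisation the paper develops, and shown here only as an illustration, not as the authors' algorithm — is to re-test chordality from scratch using maximum cardinality search:

```python
def is_chordal(adj):
    """Chordality test: maximum cardinality search (MCS) followed by a
    perfect-elimination-ordering check. A graph is decomposable iff chordal."""
    n = len(adj)
    weight = [0] * n
    visited = [False] * n
    order = []
    for _ in range(n):
        # Visit the unvisited vertex with the most visited neighbours.
        v = max((i for i in range(n) if not visited[i]), key=lambda i: weight[i])
        visited[v] = True
        order.append(v)
        for u in adj[v]:
            if not visited[u]:
                weight[u] += 1
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        # Neighbours of v visited before v; their "parent" w must cover the rest.
        earlier = [u for u in adj[v] if pos[u] < pos[v]]
        if earlier:
            w = max(earlier, key=lambda u: pos[u])
            if any(u != w and u not in adj[w] for u in earlier):
                return False
    return True

# A reversible-jump move adds or drops a single edge; a naive validity check
# simply re-tests chordality on the modified graph.
four_cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}        # chordless 4-cycle
with_chord = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}  # edge 0-2 added
print(is_chordal(four_cycle), is_chordal(with_chord))  # prints: False True
```

The point of the paper's fast algorithm is precisely to avoid this per-move recomputation by maintaining the junction tree incrementally across MCMC sweeps.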
Assessing Convergence of Markov Chain Monte Carlo Algorithms
 Statistics and Computing
, 1997
Cited by 85 (11 self)
We motivate the use of convergence diagnostic techniques for Markov Chain Monte Carlo algorithms and review various methods proposed in the MCMC literature. A common notation is established and each method is discussed with particular emphasis on implementational issues and possible extensions. The methods are compared in terms of their interpretability and applicability, and recommendations are provided for particular classes of problems.
1 Introduction
There are many important implementational issues associated with MCMC methods. These include (amongst others) the choice of sampler, the number of independent replications to be run, the choice of starting values, and both estimation and efficiency problems. In practice, we use ergodic averages over realisations of a Markov chain to estimate functionals of interest. In order to reduce the possibility of bias caused by the effect of starting values, iterates within an initial transient phase or burn-in period are usually discarded. One o...
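The burn-in-then-ergodic-average workflow described above, together with one of the multi-chain diagnostics this review surveys (Gelman and Rubin's potential scale reduction factor), can be sketched in a few lines. The toy chains are i.i.d. normal draws standing in for real MCMC output, so the diagnostic should sit near 1 by construction:

```python
import random
from statistics import mean, variance

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat); values near 1 suggest the
    parallel chains are exploring the same distribution."""
    m, n = len(chains), len(chains[0])
    chain_means = [mean(c) for c in chains]
    grand = mean(chain_means)
    B = n / (m - 1) * sum((cm - grand) ** 2 for cm in chain_means)  # between-chain
    W = mean(variance(c) for c in chains)                           # within-chain
    var_hat = (n - 1) / n * W + B / n      # pooled estimate of the target variance
    return (var_hat / W) ** 0.5

# Toy stand-in for MCMC output: four chains of i.i.d. N(0, 1) draws.
random.seed(0)
chains = [[random.gauss(0.0, 1.0) for _ in range(2000)] for _ in range(4)]

burn_in = 500                          # discard the initial transient phase
kept = [c[burn_in:] for c in chains]
estimate = mean(kept[0])               # ergodic average from one chain
rhat = gelman_rubin(kept)
```

On real (autocorrelated, possibly unconverged) chains, R-hat well above 1 signals that the between-chain variance still dominates, i.e. the starting values are still influencing the samples — the bias the burn-in discard is meant to reduce.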
The New Area-Wide Model of the euro area: a micro-founded open-economy model for forecasting and policy analysis
, 2008
Comparing Bootstrap and Posterior Probability Values in the Four-Taxon Case
, 2003
Cited by 61 (4 self)
Assessment of the reliability of a given phylogenetic hypothesis is an important step in phylogenetic analysis. Historically, the nonparametric bootstrap procedure has been the most frequently used method for assessing the support for specific phylogenetic relationships. The recent employment of Bayesian methods for phylogenetic inference problems has resulted in clade support being expressed in terms of posterior probabilities. We used simulated data and the four-taxon case to explore the relationship between nonparametric bootstrap values (as inferred by maximum likelihood) and posterior probabilities (as inferred by Bayesian analysis). The results suggest a complex association between the two measures. Three general regions of tree space can be identified: (1) the neutral zone, where differences between mean bootstrap and mean posterior probability values are not significant; (2) near the two-branch corner; and (3) deep in the two-branch corner. In the last two regions, significant differences occur between mean bootstrap and mean posterior probability values. Whether bootstrap or posterior probability values are higher depends on the data in support of alternative topologies. Examination of star topologies revealed that both bootstrap and posterior probability values differ significantly from theoretical expectations;