Results 1–10 of 70
Model Selection and Model Averaging in Phylogenetics: Advantages of Akaike Information Criterion and Bayesian Approaches Over Likelihood Ratio Tests
, 2004
"... Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the sel ..."
Abstract

Cited by 404 (8 self)
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or non-nested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
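The AIC-based model averaging described above can be sketched in a few lines. The log-likelihoods and parameter counts below are invented for illustration, not taken from the Sota and Vogler data:

```python
import math

def akaike_weights(log_likelihoods, n_params):
    """Compute AIC scores and Akaike weights for a set of candidate models."""
    aic = [2 * k - 2 * ll for ll, k in zip(log_likelihoods, n_params)]
    best = min(aic)
    # Relative likelihood of each model, rescaled against the best AIC score
    rel = [math.exp(-0.5 * (a - best)) for a in aic]
    total = sum(rel)
    weights = [r / total for r in rel]
    return aic, weights

# Three hypothetical substitution models: maximized log-likelihoods and
# numbers of free parameters (illustrative values only)
aic, w = akaike_weights([-2450.3, -2447.1, -2446.8], [1, 5, 9])
# A model-averaged estimate of a parameter weights each model's estimate
# by its Akaike weight: sum(w_i * estimate_i)
```

The weights sum to one and can be read as the relative support each model receives from the data, which is what makes multimodel inference possible.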
Integrating Theories of Motivation
, 2003
"... Progress towards understanding human behavior has been hindered by disciplinebound theories, dividing our efforts. Fortunately, these separate endeavors are converging and can be effectively integrated. Focusing on the fundamental features of Picoeconomics, Expectancy, Cumulative Prospect Theory, a ..."
Abstract

Cited by 44 (1 self)
Progress towards understanding human behavior has been hindered by discipline-bound theories, dividing our efforts. Fortunately, these separate endeavors are converging and can be effectively integrated. Focusing on the fundamental features of Picoeconomics, Expectancy, Cumulative Prospect Theory, and Need Theory, Temporal Motivational Theory (TMT) is constructed. TMT appears consistent with the major findings from many other investigations, including psychobiology. Potential applications of TMT are numerous, including consumer behavior, aggression, the stock market, and governmental behavior.
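The kind of integration TMT performs can be illustrated with a simplified, hypothetical discounted-utility formula; this is a rough sketch of the idea, not the authors' exact equation:

```python
def tmt_utility(expectancy, value, impulsiveness, delay):
    """Illustrative temporal-motivational utility: motivation rises with
    expectancy and value, and is discounted by delay, with the discount
    modulated by the individual's impulsiveness (hypothetical form)."""
    return (expectancy * value) / (1 + impulsiveness * delay)

# An imminent reward (delay=1) motivates far more than a distant one (delay=30)
soon = tmt_utility(0.9, 100, 0.5, 1)    # 60.0
later = tmt_utility(0.9, 100, 0.5, 30)  # 5.625
```

The formula folds expectancy, value, and temporal discounting into one quantity, which is the sense in which the separate theories can be "effectively integrated."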
Studies in Bayesian Confirmation Theory
, 2001
"... According to Bayesian confirmation theory, evidence E (incrementally) confirms (or supports) a hypothesis H (roughly) just in case E and H are positively probabilistically correlated (under an appropriate probability function Pr). There are many logically equivalent ways of saying that E and H are ..."
Abstract

Cited by 34 (8 self)
According to Bayesian confirmation theory, evidence E (incrementally) confirms (or supports) a hypothesis H (roughly) just in case E and H are positively probabilistically correlated (under an appropriate probability function Pr). There are many logically equivalent ways of saying that E and H are correlated under Pr. Surprisingly, this leads to a plethora of non-equivalent quantitative measures of the degree to which E confirms H (under Pr). In fact, many non-equivalent Bayesian measures of the degree to which E confirms (or supports) H have been proposed and defended in the literature on inductive logic. I provide a thorough historical survey of the various proposals, and a detailed discussion of the philosophical ramifications of the differences between them. I argue that the set of candidate ...
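The plurality of measures can be made concrete with a small sketch. The three measures below (difference, log-ratio, and log-likelihood-ratio) are standard examples from this literature; the probabilities are invented:

```python
import math

def confirmation_measures(p_h, p_e_given_h, p_e_given_not_h):
    """Three non-equivalent Bayesian measures of the degree to which
    evidence E confirms hypothesis H."""
    # Total probability of the evidence, then Bayes' theorem
    p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
    p_h_given_e = p_h * p_e_given_h / p_e
    d = p_h_given_e - p_h                         # difference measure
    r = math.log(p_h_given_e / p_h)               # log-ratio measure
    l = math.log(p_e_given_h / p_e_given_not_h)   # log-likelihood-ratio measure
    return d, r, l

# All three agree that E confirms H here (all positive), but in general
# they can rank different bodies of evidence differently.
d, r, l = confirmation_measures(0.3, 0.9, 0.2)
```

All of these measures are positive exactly when E and H are positively correlated, which is why they are so easy to conflate despite being non-equivalent as quantitative measures.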
Provenance of correlations in psychological data
 PSYCHONOMIC BULLETIN & REVIEW
, 2005
"... ..."
In defense of goodness-of-fit in comparison of models to data. Unpublished manuscript
, 2002
"... While the representational theory of mind is without doubt one of the central theoretical underpinnings of Cognitive Science, the use of cognitive simulation models to explain human behavior is one of its main methodological tools. Not only does cognitive modeling exemplify a view of human thinking ..."
Abstract

Cited by 30 (0 self)
While the representational theory of mind is without doubt one of the central theoretical underpinnings of Cognitive Science, the use of cognitive simulation models to explain human behavior is one of its main methodological tools. Not only does cognitive modeling exemplify a view of human thinking as symbol processing, it also provides us with rigorous qualitative and quantitative predictions that can be used to discriminate between alternative models and uncover which aspects of a given theoretical framework require further elaboration. In his scientific life, Werner Tack moved from (numerical) mathematical theories (Tack, 1969) to computational (symbolic) models of human cognition (Tack, 2004), and both authors of the present paper had the luck to experience his vivid enthusiasm for formal approaches to the study of mind. The formal rigor of theorizing in terms of such computational approaches has, however, not yet been met by equally rigorous procedures for evaluating the goodness of fit of cognitive models to data, and we remember many fruitful discussions with Werner Tack on this topic. In this paper we aim to address the gap between the rigor in model construction and model evaluation, and look forward to the next round of exciting discussions with Werner Tack.
Accumulative prediction error and the selection of time series models
, 2006
"... This article reviews the rationale for using accumulative onestepahead prediction error (APE) as a datadriven method for model selection. Theoretically, APE is closely related to Bayesian model selection and the method of minimum description length (MDL). The sole requirement for using APE is tha ..."
Abstract

Cited by 25 (5 self)
This article reviews the rationale for using accumulative one-step-ahead prediction error (APE) as a data-driven method for model selection. Theoretically, APE is closely related to Bayesian model selection and the method of minimum description length (MDL). The sole requirement for using APE is that the models under consideration are capable of generating a prediction for the next, unseen data point. This means that APE may be readily applied to selection problems involving very complex models. APE automatically takes the functional form of parameters into account, and the ‘plug-in’ version of APE does not require the specification of priors. APE is particularly easy to compute for data that have a natural ordering, such as time series. Here, we explore the possibility of using APE to discriminate the short-range ARMA(1,1) model from the long-range ARFIMA(0, d, 0) model. We also illustrate how APE may be used for model meta-selection, allowing one to choose between different model selection methods.
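The "sole requirement" mentioned above makes APE easy to sketch. Here is a minimal illustration with two toy plug-in predictors on a simulated random walk; the data and predictors are invented for illustration, not the ARMA/ARFIMA models of the paper:

```python
import numpy as np

def ape(series, fit_predict, warmup=10):
    """Accumulative one-step-ahead squared prediction error.
    `fit_predict(history)` must return a prediction for the next point;
    that is the only requirement APE places on a model."""
    errors = 0.0
    for t in range(warmup, len(series)):
        pred = fit_predict(series[:t])
        errors += (series[t] - pred) ** 2
    return errors

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200))  # a simulated random walk

# Two simple plug-in models: predict the last observed value,
# or predict the grand mean of the history so far
ape_last = ape(x, lambda h: h[-1])
ape_mean = ape(x, lambda h: h.mean())
# For a random walk the last-value predictor should accumulate less error,
# so APE would select it
```

The model with the smaller accumulated error is selected; because only one-step predictions are needed, neither predictor had to specify priors or closed-form likelihoods.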
Threeway decisions with probabilistic rough sets
 Information Sciences
, 2010
"... The rough set theory approximates a concept by three regions, namely, the positive, boundary and negative regions. Rules constructed from the three regions are associated with different actions and decisions, which immediately leads to the notion of threeway decision rules. A positive rule makes a ..."
Abstract

Cited by 21 (7 self)
The rough set theory approximates a concept by three regions, namely, the positive, boundary and negative regions. Rules constructed from the three regions are associated with different actions and decisions, which immediately leads to the notion of three-way decision rules. A positive rule makes a decision of acceptance, a negative rule makes a decision of rejection, and a boundary rule makes a decision of abstaining. This paper provides an analysis of three-way decision rules in the classical rough set model and the decision-theoretic rough set model. The results enrich the rough set theory by ideas from Bayesian decision theory and hypothesis testing in statistics. The connections established between the levels of tolerance for errors and costs of incorrect decisions make the rough set theory practical in applications. Key words: decision-theoretic rough sets; probabilistic rough sets; three-way decisions; hypothesis testing; Bayesian decision procedure; classification
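The three decision rules can be sketched as a simple thresholding scheme on the conditional probability of concept membership. The thresholds alpha and beta below are arbitrary illustrative values, not ones derived from a specific cost matrix:

```python
def three_way_decision(p, alpha=0.75, beta=0.25):
    """Classify by p = Pr(concept | object description):
    accept in the positive region, reject in the negative region,
    and abstain in the boundary region. In the decision-theoretic
    rough set model, alpha > beta are derived from the costs of
    incorrect decisions; here they are illustrative constants."""
    if p >= alpha:
        return "accept"   # positive rule
    if p <= beta:
        return "reject"   # negative rule
    return "defer"        # boundary rule

decisions = [three_way_decision(p) for p in (0.9, 0.5, 0.1)]
# → ['accept', 'defer', 'reject']
```

Tolerating some error (alpha < 1, beta > 0) shrinks the boundary region, which is the practical connection between error tolerance and decision costs the abstract describes.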
Effective field theories, reductionism and scientific explanation
 Studies in the History and Philosophy of Modern Physics
"... Effective field theories have been a very popular tool in quantum physics for almost two decades. And there are good reasons for this. I will argue that effective field theories share many of the advantages of both fundamental theories and phenomenological models, while avoiding their respective sho ..."
Abstract

Cited by 19 (0 self)
Effective field theories have been a very popular tool in quantum physics for almost two decades. And there are good reasons for this. I will argue that effective field theories share many of the advantages of both fundamental theories and phenomenological models, while avoiding their respective shortcomings. They are, for example, flexible enough to cover a wide range of phenomena, and concrete enough to provide a detailed story of the specific mechanisms at work at a given energy scale. So will all of physics eventually converge on effective field theories? This paper argues that good scientific research can be characterised by a fruitful interaction between fundamental theories, phenomenological models and effective field theories. All of them have their appropriate functions in the research process, and all of them are indispensable. They complement each other and hang together in a coherent way which I shall characterise in some detail. To illustrate all this I will present a case study from nuclear and particle physics. The resulting view about scientific theorising is inherently pluralistic, and has implications for the debates about reductionism and ...
2001): “Why Likelihood
 The Nature of Scientific Evidence
, 1980
"... ABSTRACT: The Likelihood Principle has been defended on Bayesian grounds, on the grounds that it coincides with and systematizes intuitive judgments about example problems, and by appeal to the fact that it generalizes what is true when hypotheses have deductive consequences about observations. Here ..."
Abstract

Cited by 14 (6 self)
The Likelihood Principle has been defended on Bayesian grounds, on the grounds that it coincides with and systematizes intuitive judgments about example problems, and by appeal to the fact that it generalizes what is true when hypotheses have deductive consequences about observations. Here we divide the Principle into two parts, one qualitative and the other quantitative, and evaluate each in the light of the Akaike information criterion. Both turn out to be correct in a special case (when the competing hypotheses have the same number of adjustable parameters), but not otherwise.
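The special case can be illustrated numerically; the log-likelihoods and parameter counts below are invented:

```python
def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

# Equal numbers of adjustable parameters (k = 3): AIC ranks the hypotheses
# exactly as their likelihoods do, the special case in which AIC and the
# Likelihood Principle agree.
h1, h2 = aic(-100.0, 3), aic(-105.0, 3)   # h1 < h2, matching L1 > L2

# Unequal k: the better-fitting but more complex hypothesis can lose,
# so AIC and a pure likelihood comparison come apart.
h3, h4 = aic(-100.0, 8), aic(-103.0, 1)   # h4 < h3 despite its lower likelihood
```

When the penalty terms 2k are identical they cancel in any comparison, so ranking by AIC reduces to ranking by likelihood; with unequal k the penalty can reverse the ordering.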
Five principles for studying people’s use of heuristics
 Acta Psychologica Sinica
, 2010
"... Abstract: The fast and frugal heuristics framework assumes that people rely on an adaptive toolbox of simple decision strategies—called heuristics—to make inferences, choices, estimations, and other decisions. Each of these heuristics is tuned to regularities in the structure of the task environment ..."
Abstract

Cited by 12 (4 self)
The fast and frugal heuristics framework assumes that people rely on an adaptive toolbox of simple decision strategies—called heuristics—to make inferences, choices, estimations, and other decisions. Each of these heuristics is tuned to regularities in the structure of the task environment and each is capable of exploiting the ways in which basic cognitive capacities work. In doing so, heuristics enable adaptive behavior. In this article, we give an overview of the framework and formulate five principles that should guide the study of people’s adaptive toolbox. We emphasize that models of heuristics should be (i) precisely defined; (ii) tested comparatively; (iii) studied in line with theories of strategy selection; (iv) evaluated by how well they predict new data; and (v) tested in the real world in addition to the laboratory. Key words: fast and frugal heuristics; experimental design; model testing. As we write this article, international financial markets are in turmoil. Large banks are going bankrupt almost daily. It is a difficult situation for financial decision makers—regardless of whether they are lay investors trying to make small-scale profits here and there or professionals employed by the finance industry. To safeguard their investments, these decision makers need to be able to foresee uncertain future economic developments, such as which investments are likely to be the safest and which companies are likely to crash next. In times of rapid waves of potentially devastating financial crashes, these informed bets must often be made quickly, with little time for extensive information search or computationally demanding calculations of likely future returns. Lay stock traders in particular have to trust the contents of their memories, relying on incomplete, imperfect ...
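A precisely defined heuristic of the kind the framework studies is take-the-best: compare two options cue by cue in order of cue validity and decide on the first cue that discriminates, with no weighing or integrating of the remaining cues. The cue names, validities, and city profiles below are invented for illustration:

```python
def take_the_best(a, b, cues, validities):
    """Take-the-best: search cues in order of validity, stop at the first
    cue on which the two options differ, and choose the option favored
    by that cue. Guess only if no cue discriminates."""
    order = sorted(cues, key=lambda c: validities[c], reverse=True)
    for cue in order:
        if a[cue] != b[cue]:
            return "a" if a[cue] > b[cue] else "b"
    return "guess"

# Hypothetical cues for judging which of two cities is larger
validities = {"capital": 0.9, "airport": 0.7, "team": 0.6}
city_a = {"capital": 1, "airport": 0, "team": 1}
city_b = {"capital": 0, "airport": 1, "team": 1}

# 'capital' is the most valid cue and it discriminates, so search stops there
choice = take_the_best(city_a, city_b, list(validities), validities)
# → "a"
```

The heuristic is frugal precisely because search stops at the first discriminating cue, which is what makes it testable against full-information strategies as principle (ii) demands.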