Results 1 - 10 of 55
Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition
- Behavioral and Brain Sciences, 2011
"... To be published in Behavioral and Brain Sciences (in press) ..."
Cited by 43 (1 self)
To be published in Behavioral and Brain Sciences (in press)
Hierarchical Bayesian parameter estimation for cumulative prospect theory
2011
"... a b s t r a c t Cumulative prospect theory (CPT Tversky & Kahneman, 1992) has provided one of the most influential accounts of how people make decisions under risk. CPT is a formal model with parameters that quantify psychological processes such as loss aversion, subjective values of gains and ..."
Cited by 14 (2 self)
Cumulative prospect theory (CPT; Tversky & Kahneman, 1992) has provided one of the most influential accounts of how people make decisions under risk. CPT is a formal model with parameters that quantify psychological processes such as loss aversion, subjective values of gains and losses, and subjective probabilities. In practical applications of CPT, the model’s parameters are usually estimated using a single-participant maximum likelihood approach. The present study shows the advantages of an alternative, hierarchical Bayesian parameter estimation procedure. Performance of the procedure is illustrated with a parameter recovery study and application to a real data set. The work reveals that without particular constraints on the parameter space, CPT can produce loss aversion without the parameter that has traditionally been associated with loss aversion. In general, the results illustrate that inferences about people’s decision processes can crucially depend on the method used to estimate model parameters.
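For readers unfamiliar with the functional forms involved, here is a minimal sketch of the CPT value and probability weighting functions from Tversky and Kahneman (1992), with their published median parameter estimates as defaults. The hierarchical Bayesian procedure in the paper estimates such parameters per participant rather than fixing them; this sketch only shows what the parameters control.

```python
import math

def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky & Kahneman (1992) value function: concave for gains,
    convex and steeper for losses. `lam` is the loss-aversion parameter;
    defaults are the 1992 median estimates."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

def tk_weight(p, gamma=0.61):
    """TK-1992 probability weighting function (gains): overweights small
    probabilities and underweights moderate-to-large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Subjective value of a single-outcome gamble: 50% chance to win $100.
sv = tk_weight(0.5) * cpt_value(100.0)
```

With the default parameters, a 0.5 probability receives a weight of roughly 0.42, and a $100 loss looms about 2.25 times larger than a $100 gain.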
Five principles for studying people’s use of heuristics
- Acta Psychologica Sinica, 2010
"... Abstract: The fast and frugal heuristics framework assumes that people rely on an adaptive toolbox of simple decision strategies—called heuristics—to make inferences, choices, estimations, and other decisions. Each of these heuristics is tuned to regularities in the structure of the task environment ..."
Cited by 12 (4 self)
Abstract: The fast and frugal heuristics framework assumes that people rely on an adaptive toolbox of simple decision strategies—called heuristics—to make inferences, choices, estimations, and other decisions. Each of these heuristics is tuned to regularities in the structure of the task environment, and each is capable of exploiting the ways in which basic cognitive capacities work. In doing so, heuristics enable adaptive behavior. In this article, we give an overview of the framework and formulate five principles that should guide the study of people’s adaptive toolbox. We emphasize that models of heuristics should be (i) precisely defined; (ii) tested comparatively; (iii) studied in line with theories of strategy selection; (iv) evaluated by how well they predict new data; and (v) tested in the real world in addition to the laboratory.

Key words: fast and frugal heuristics; experimental design; model testing

As we write this article, international financial markets are in turmoil. Large banks are going bankrupt almost daily. It is a difficult situation for financial decision makers—regardless of whether they are lay investors trying to make small-scale profits here and there or professionals employed by the finance industry. To safeguard their investments, these decision makers need to be able to foresee uncertain future economic developments, such as which investments are likely to be the safest and which companies are likely to crash next. In times of rapid waves of potentially devastating financial crashes, these informed bets must often be made quickly, with little time for extensive information search or computationally demanding calculations of likely future returns. Lay stock traders in particular have to trust the contents of their memories, relying on incomplete, imperfect …
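As one concrete illustration of a "precisely defined" heuristic in this framework (a standard example from the fast-and-frugal literature, not code from the article itself), take-the-best compares two options on cues ordered by validity and decides by the first cue that discriminates:

```python
def take_the_best(a_cues, b_cues):
    """Take-the-best heuristic: cue values are listed in descending
    order of validity; choose the option favoured by the first cue
    that discriminates, and ignore all remaining cues."""
    for ca, cb in zip(a_cues, b_cues):
        if ca != cb:
            return "A" if ca > cb else "B"
    return None  # no cue discriminates: guess

# Cue 1 ties, cue 2 favours B; cue 3 is never even inspected.
choice = take_the_best([1, 0, 1], [1, 1, 0])
```

The early exit is what makes the heuristic "frugal": search stops as soon as one cue settles the choice.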
A model of knower-level behavior in number concept development
- Cognitive Science, 2010
"... We develop and evaluate a model of behavior on the Give-N task, a commonly used measure of young children’s number knowledge. Our model uses the knower-level theory of how children repre-sent numbers. To produce behavior on the Give-N task, the model assumes that children start out with a base rate ..."
Cited by 11 (1 self)
We develop and evaluate a model of behavior on the Give-N task, a commonly used measure of young children’s number knowledge. Our model uses the knower-level theory of how children represent numbers. To produce behavior on the Give-N task, the model assumes that children start out with a base rate that makes some answers more likely a priori than others but is updated on each experimental trial in a way that depends on the interaction between the experimenter’s request and the child’s knower level. We formalize this process as a generative graphical model, so that the parameters—including the base rate distribution and each child’s knower level—can be inferred from data using Bayesian methods. Using this approach, we evaluate the model on previously published data from 82 children spanning the whole developmental range. The model provides an excellent fit to these data, and the inferences about the base rate and knower levels are interpretable and insightful. We discuss how our modeling approach can be extended to other developmental tasks and can be used to help evaluate alternative theories of number representation against the knower-level theory.
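A simplified sketch of the response rule the abstract describes may help. The paper's actual generative model is richer; the base rate and the renormalisation rule below are illustrative assumptions, not the authors' exact updating scheme:

```python
import random

def give_n_response(request, knower_level, base_rate, rng):
    """Simplified knower-level response rule for the Give-N task:
    a child at knower level k gives `request` items correctly when
    request <= k; otherwise the answer is sampled from a base-rate
    distribution over set sizes, renormalised to exclude the numbers
    the child does know (an illustrative assumption)."""
    if request <= knower_level:
        return request
    candidates = [(n, w) for n, w in sorted(base_rate.items())
                  if n > knower_level]
    total = sum(w for _, w in candidates)
    r = rng.random() * total
    for n, w in candidates:
        r -= w
        if r <= 0:
            return n
    return candidates[-1][0]
```

In the full model, the base-rate distribution and each child's knower level are latent variables inferred jointly from the observed responses.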
A signal detection analysis of fast-and-frugal trees
- Psychological Review, 2011
"... Models of decision making are distinguished by those that aim for an optimal solution in a world that is precisely specified by a set of assumptions (a so-called “small world”) and those that aim for a simple but satisfactory solution in an uncertain world where the assumptions of optimization model ..."
Cited by 8 (3 self)
Models of decision making can be divided into those that aim for an optimal solution in a world that is precisely specified by a set of assumptions (a so-called “small world”) and those that aim for a simple but satisfactory solution in an uncertain world where the assumptions of optimization models may not be met (a so-called “large world”). Few connections have been drawn between these 2 families of models. In this study, the authors show how psychological concepts originating in the classic signal-detection theory (SDT), a small-world approach to decision making, can be used to understand the workings of a class of simple models known as fast-and-frugal trees (FFTs). Results indicate that (a) the setting of the subjective decision criterion in SDT corresponds directly to the choice of exit structure in an FFT; (b) the sensitivity of an FFT (measured in d′) is reflected by the order of cues searched and the properties of cues in an FFT, including the mean and variance of cues’ individual d′s, the intercue correlation, and the number of cues; and (c) compared with the ideal and the optimal sequential sampling models in SDT and a majority model with an information search component, FFTs are extremely frugal (i.e., do not search for much cue information), highly robust, and well adapted to the payoff structure of a task. These findings demonstrate the potential of theory integration in understanding the common underlying psychological structures of apparently disparate theories of cognition.
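The exit structure the authors analyse can be made concrete with a minimal fast-and-frugal tree. The triage cues below are a hypothetical example for illustration, not the paper's data:

```python
def fft_decide(cue_values, nodes, default):
    """Minimal fast-and-frugal tree: `nodes` is an ordered list of
    (cue_name, exit_decision) pairs. If a cue is positive, the tree
    exits immediately with that node's decision; otherwise it moves on
    to the next cue. If no cue fires, the final exit returns `default`.
    Which branch exits at each level (the exit structure) plays the
    role of the subjective decision criterion in SDT."""
    for cue, decision in nodes:
        if cue_values[cue]:
            return decision
    return default

# Hypothetical two-cue triage tree whose positive branches both exit
# with "urgent" -- a liberal criterion in SDT terms (many hits,
# many false alarms).
nodes = [("st_change", "urgent"), ("chest_pain", "urgent")]
verdict = fft_decide({"st_change": False, "chest_pain": True},
                     nodes, "routine")
```

Moving an exit from the positive to the negative branch of a node shifts the tree toward a conservative criterion, exactly the correspondence result (a) describes.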
A Model-Based Approach to Measuring Expertise in Ranking Tasks
"... We apply a cognitive modeling approach to the problem of measuring expertise on rank ordering tasks. In these tasks, people must order a set of items in terms of a given criterion. Using a cognitive model of behavior on this task that allows for individual differences in knowledge, we are able to in ..."
Cited by 7 (3 self)
We apply a cognitive modeling approach to the problem of measuring expertise on rank ordering tasks. In these tasks, people must order a set of items in terms of a given criterion. Using a cognitive model of behavior on this task that allows for individual differences in knowledge, we are able to infer people’s expertise directly from the rankings they provide. We show that our model-based measure of expertise outperforms self-report measures, taken both before and after doing the task, in terms of correlation with the actual accuracy of the answers. Based on these results, we discuss the potential and limitations of using cognitive models in assessing expertise.
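A crude proxy for the idea (not the paper's cognitive model, which infers latent knowledge and individual differences) is to score a ranking by its number of discordant pairs against the true order:

```python
def discordant_pairs(ranking, truth):
    """Count item pairs ordered differently in `ranking` than in
    `truth` (the Kendall tau distance). Lower counts mean a more
    accurate ranking, which this crude proxy treats as a sign of
    greater expertise."""
    pos = {item: i for i, item in enumerate(ranking)}
    n = len(truth)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos[truth[i]] > pos[truth[j]])

# One transposition (b and c swapped) gives one discordant pair.
score = discordant_pairs(["a", "c", "b", "d"], ["a", "b", "c", "d"])
```

Unlike this proxy, the model-based approach in the paper needs no ground-truth criterion at inference time: expertise is inferred from agreement among rankers under the cognitive model.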
Number-knower levels in young children: Insights from Bayesian modeling
"... This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution and sharing with colleagues. Other uses, including reproduction and distribution, or sel ..."
Cited by 6 (0 self)
Individual Differences in Attention During Category Learning
"... A central idea in many successful models of category learning—including the Generalized Context Model (GCM)—is that people selectively attend to those dimensions of stimuli that are relevant for dividing them into categories. We use the GCM to re-examine some previously analyzed category learning da ..."
Cited by 6 (0 self)
A central idea in many successful models of category learning—including the Generalized Context Model (GCM)—is that people selectively attend to those dimensions of stimuli that are relevant for dividing them into categories. We use the GCM to re-examine some previously analyzed category learning data, but extend the modeling to allow for individual differences. Our modeling suggests a very different psychological interpretation of the data from the standard account. Rather than concluding that people attend to both dimensions, because they are both relevant to the category structure, we conclude that it is possible there are two groups of people, both of whom attend to only one of the dimensions. We discuss the need to allow for individual differences in models of category learning, and argue for hierarchical mixture models as a way of achieving this flexibility in accounting for people’s cognition.
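The GCM machinery at issue can be sketched as follows: a minimal version with a city-block metric, where the stimulus values and parameters below are illustrative, not the re-analyzed data set:

```python
import math

def gcm_similarity(x, y, w, c=1.0):
    """GCM exemplar similarity: exponential decay in attention-weighted
    city-block distance. `w` holds the attention weights (non-negative,
    summing to 1); `c` is the sensitivity parameter."""
    d = sum(wm * abs(xm - ym) for wm, xm, ym in zip(w, x, y))
    return math.exp(-c * d)

def gcm_prob_a(probe, cat_a, cat_b, w, c=1.0):
    """Probability of a category-A response: summed similarity to the
    A exemplars relative to total summed similarity."""
    sa = sum(gcm_similarity(probe, e, w, c) for e in cat_a)
    sb = sum(gcm_similarity(probe, e, w, c) for e in cat_b)
    return sa / (sa + sb)

# Illustrative stimuli: the categories separate on dimension 0 only.
cat_a = [(0.0, 0.2), (0.1, 0.8)]
cat_b = [(1.0, 0.3), (0.9, 0.7)]
p_attend_dim0 = gcm_prob_a((0.1, 0.5), cat_a, cat_b, w=(1.0, 0.0))
p_attend_dim1 = gcm_prob_a((0.1, 0.5), cat_a, cat_b, w=(0.0, 1.0))
```

Fitting one attention weight to all participants can land between 0 and 1 even when, as the abstract argues, the data come from two groups each attending to a single dimension; hierarchical mixture modeling lets the weights differ across latent groups.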
Discriminating among probability weighting functions using adaptive design optimization
- Journal of Risk and Uncertainty, 2013
"... Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to di ..."
Cited by 5 (4 self)
Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models.
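The two best-fitting forms the authors report have standard closed-form definitions, shown here as a sketch. Both reduce to the identity when γ = δ = 1, which illustrates why such qualitatively similar forms are hard to tell apart without adaptively optimized designs:

```python
import math

def prelec2(p, gamma, delta):
    """Two-parameter Prelec weighting function:
    w(p) = exp(-delta * (-ln p) ** gamma), for 0 < p <= 1."""
    return math.exp(-delta * (-math.log(p)) ** gamma)

def lin_log_odds(p, gamma, delta):
    """Linear-in-log-odds weighting function:
    w(p) = delta * p**gamma / (delta * p**gamma + (1 - p)**gamma)."""
    num = delta * p ** gamma
    return num / (num + (1 - p) ** gamma)

# With gamma < 1 both functions take the familiar inverse-S shape,
# overweighting small probabilities.
w_small = prelec2(0.05, 0.65, 1.0)
```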
Measuring model complexity with the prior predictive
"... In the last few decades, model complexity has received a lot of press. While many methods have been proposed that jointly measure a model’s descriptive adequacy and its complexity, few measures exist that measure complexity in itself. Moreover, existing measures ignore the parameter prior, which is ..."
Cited by 4 (2 self)
In the last few decades, model complexity has received a lot of attention. While many methods have been proposed that jointly measure a model’s descriptive adequacy and its complexity, few measures exist that quantify complexity in itself. Moreover, existing measures ignore the parameter prior, which is an inherent part of the model and affects its complexity. This paper presents a stand-alone measure of model complexity that takes the number of parameters, the functional form, the range of the parameters, and the parameter prior into account. This Prior Predictive Complexity (PPC) is an intuitive and easy-to-compute measure. It starts from the observation that model complexity is the property of a model that enables it to fit a wide range of outcomes. The PPC then measures exactly how wide this range is.
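The core intuition, complexity as the width of the range of outcomes a model can produce a priori, can be illustrated by Monte-Carlo simulation of the prior predictive. This toy binomial setup is our illustration of that intuition, not the paper's exact PPC statistic:

```python
import random
import statistics

def prior_predictive(sample_theta, simulate, n_draws=2000, seed=0):
    """Draw parameters from the prior and push each draw through the
    data-generating process, yielding samples from the prior
    predictive distribution over outcomes."""
    rng = random.Random(seed)
    return [simulate(sample_theta(rng), rng) for _ in range(n_draws)]

def successes(p, rng, trials=10):
    # Number of successes in `trials` Bernoulli(p) trials.
    return sum(rng.random() < p for _ in range(trials))

# A fixed-rate model versus a model with a uniform prior on the rate:
narrow = prior_predictive(lambda rng: 0.5, successes)
wide = prior_predictive(lambda rng: rng.random(), successes)
# The uniform-prior model spreads its predictions over more outcomes,
# so a prior-predictive measure counts it as the more complex model.
```

Note that the spread here depends on the prior, not just on the number of parameters, which is exactly the point the abstract makes against parameter-counting measures.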