Results 1–10 of 48
The Bayesian reader: Explaining word recognition as an optimal Bayesian decision process
 Psychological Review
Abstract

Cited by 69 (5 self)
This paper presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision and semantic categorization, human readers behave as optimal Bayesian decision-makers. This leads to the development of a computational model of word recognition, the Bayesian Reader. The Bayesian Reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model, and the way the model predicts different patterns of results in different tasks, follow entirely from the assumption that human readers approximate optimal Bayesian decision-makers.
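The frequency effect this abstract describes can be illustrated with a minimal sketch of the underlying decision logic, not the Bayesian Reader itself: word frequency acts as a prior, noisy perceptual samples supply a likelihood ratio, and a frequent word crosses the decision threshold in fewer samples. The likelihood ratio and threshold values below are invented for the example.

```python
import math

def samples_to_threshold(prior_true, lr_per_sample, theta=0.95):
    """Number of noisy perceptual samples needed before the posterior
    probability of the true word exceeds theta.

    Posterior odds after n samples = prior odds * lr_per_sample**n,
    so we solve for the smallest integer n with odds/(1 + odds) > theta.
    """
    prior_odds = prior_true / (1.0 - prior_true)
    target_odds = theta / (1.0 - theta)
    n = math.log(target_odds / prior_odds) / math.log(lr_per_sample)
    return max(0, math.ceil(n))

# Same perceptual evidence quality (likelihood ratio 1.2 per sample),
# different word frequencies: the frequent word crosses the decision
# threshold sooner, i.e. a shorter identification time.
rare = samples_to_threshold(prior_true=0.01, lr_per_sample=1.2)
frequent = samples_to_threshold(prior_true=0.10, lr_per_sample=1.2)
print(rare, frequent)
```

Because the prior enters the decision multiplicatively, the frequency effect falls out of the optimality assumption alone, with no frequency-specific mechanism.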
Rational approximations to rational models: Alternative algorithms for category learning
Abstract

Cited by 61 (19 self)
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of “rational process models” that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson’s (1990, 1991) Rational Model of Categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose two alternative algorithms for approximate inference in this model: Gibbs sampling, a procedure suited to situations where all data are available at once, and particle filtering, a sequential Monte Carlo method that processes observations one at a time.
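A collapsed Gibbs sampler of the kind this abstract mentions can be sketched for a Chinese-restaurant-process mixture, the nonparametric model underlying the RMC connection. This is an illustrative sketch, not the paper's implementation; it assumes binary features with Beta(1,1) priors and a Bernoulli likelihood, all choices made here for simplicity.

```python
import random

def crp_gibbs(data, alpha=1.0, iters=50, seed=0):
    """Collapsed Gibbs sampling for a CRP mixture of Bernoulli features.

    Repeatedly removes each item from its cluster and resamples its
    assignment from the conditional posterior over existing clusters
    plus a brand-new cluster.
    """
    rng = random.Random(seed)
    n, d = len(data), len(data[0])
    z = [0] * n                    # cluster assignment per item
    counts = {0: n}                # items per cluster
    feat = {0: [sum(x[j] for x in data) for j in range(d)]}  # feature counts
    for _ in range(iters):
        for i, x in enumerate(data):
            k = z[i]               # remove item i from its current cluster
            counts[k] -= 1
            for j in range(d):
                feat[k][j] -= x[j]
            if counts[k] == 0:
                del counts[k]
                del feat[k]
            # Posterior weight for each existing cluster: CRP prior (size)
            # times the Beta(1,1)-Bernoulli posterior predictive.
            ks, ws = [], []
            for k2, nk in counts.items():
                w = float(nk)
                for j in range(d):
                    p1 = (feat[k2][j] + 1) / (nk + 2)
                    w *= p1 if x[j] else (1 - p1)
                ks.append(k2)
                ws.append(w)
            ks.append(max(counts, default=-1) + 1)   # brand-new cluster
            ws.append(alpha * 0.5 ** d)              # predictive under the prior
            r = rng.random() * sum(ws)
            acc, choice = 0.0, ks[-1]
            for k2, w in zip(ks, ws):
                acc += w
                if r <= acc:
                    choice = k2
                    break
            z[i] = choice                            # reinsert item i
            counts[choice] = counts.get(choice, 0) + 1
            if choice not in feat:
                feat[choice] = [0] * d
            for j in range(d):
                feat[choice][j] += x[j]
    return z

# Two well-separated groups of items; the sampler typically recovers them.
print(crp_gibbs([[1, 1, 1, 1]] * 4 + [[0, 0, 0, 0]] * 4))
```

Because every resampling step conditions on all current assignments, Gibbs sampling fits batch presentation; a particle filter would instead carry a set of weighted partitions forward as stimuli arrive one at a time.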
Simple heuristics and rules of thumb: Where psychologists and behavioural biologists might meet
, 2005
Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition
 Behavioral and Brain Sciences
, 2011
Abstract

Cited by 43 (1 self)
To be published in Behavioral and Brain Sciences (in press)
Language Evolution by Iterated Learning With Bayesian Agents
, 2007
Abstract

Cited by 41 (9 self)
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.
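The convergence result for sampling learners can be checked numerically on a toy chain. The two languages, their likelihoods over data, and the prior below are invented for the illustration; the point is that the stationary distribution of the iterated-learning Markov chain equals the prior, exactly as the abstract states.

```python
# Two candidate languages; each generates one of two possible utterances
# with these likelihoods (rows: language, columns: utterance).
like = [[0.8, 0.2],
        [0.3, 0.7]]
prior = [0.6, 0.4]  # the learners' inductive bias

# Transition matrix of the iterated-learning Markov chain when each
# learner *samples* a language from its posterior:
#   T[h2][h1] = sum_d P(d | h1) * P(h2 | d)
T = [[0.0, 0.0], [0.0, 0.0]]
for h1 in range(2):
    for d in range(2):
        evidence = sum(prior[h] * like[h][d] for h in range(2))
        for h2 in range(2):
            post = prior[h2] * like[h2][d] / evidence  # Bayes' rule
            T[h2][h1] += like[h1][d] * post

# Power-iterate to the stationary distribution: it matches the prior.
pi = [0.5, 0.5]
for _ in range(200):
    pi = [sum(T[i][j] * pi[j] for j in range(2)) for i in range(2)]
print(pi)
```

Summing the transition over the data marginalizes the evidence out entirely, which is why the languages spoken in the long run reflect only the learners' biases.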
Rational adaptation under task and processing constraints: Implications for testing theories of cognition and action
 Psychological Review
, 2009
Abstract

Cited by 39 (13 self)
The authors assume that individuals adapt rationally to a utility function given constraints imposed by their cognitive architecture and the local task environment. This assumption underlies a new approach to modeling and understanding cognition—cognitively bounded rational analysis—that sharpens the predictive acuity of general, integrated theories of cognition and action. Such theories provide the necessary computational means to explain the flexible nature of human behavior but in doing so introduce extreme degrees of freedom in accounting for data. The new approach narrows the space of predicted behaviors through analysis of the payoff achieved by alternative strategies, rather than through fitting strategies and theoretical parameters to data. It extends and complements established approaches, including computational cognitive architectures, rational analysis, optimal motor control, bounded rationality, and signal detection theory. The authors illustrate the approach with a reanalysis of an existing account of psychological refractory period (PRP) dual-task performance and the development and analysis of a new theory of ordered dual-task responses. These analyses yield several novel results, including a new understanding of the role of strategic variation in existing accounts of PRP and the first predictive, quantitative account showing how the details of ordered dual-task phenomena emerge from the rational control of a cognitive system subject to the combined constraints of internal variance, motor interference, and a response selection bottleneck.
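The core move of this approach, predicting behavior from the payoff achieved by alternative strategies rather than from fitted parameters, can be sketched in miniature. The strategies, response times, error rates, and utility function below are all hypothetical; the sketch only shows the selection-by-payoff logic.

```python
# Hypothetical dual-task strategies: (mean response time in s, error rate).
strategies = {
    "cautious": (0.90, 0.02),
    "balanced": (0.60, 0.05),
    "reckless": (0.35, 0.20),
}

def expected_payoff(rt, p_err, reward=1.0, time_cost=0.5):
    """Utility of a strategy: reward for a correct response minus a
    linear cost on response time (an assumed, illustrative payoff)."""
    return reward * (1 - p_err) - time_cost * rt

# The predicted behavior is the strategy that maximizes expected payoff,
# not whichever strategy best fits the observed data.
best = max(strategies, key=lambda s: expected_payoff(*strategies[s]))
print(best)
```

Changing the reward or time-cost parameters changes which strategy is optimal, which is how the analysis generates testable predictions about strategic variation.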
Environmental Determinants of Lexical Processing Effort
, 2000
Abstract

Cited by 34 (3 self)
A central concern of psycholinguistic research is explaining the relative ease or difficulty involved in processing words. In this thesis, we explore the connection between lexical processing effort and measurable properties of the linguistic environment. Distributional information (information about a word's contexts of use) is easily extracted from large language corpora in the form of co-occurrence statistics. We claim that such simple distributional statistics can form the basis of a parsimonious model of lexical processing effort.
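The kind of co-occurrence statistic this abstract has in mind can be sketched as bigram surprisal over a toy corpus. The corpus and the specific predictor (negative log conditional probability) are chosen here for illustration, not taken from the thesis.

```python
import math
from collections import Counter

corpus = "the dog chased the cat and the cat chased the dog".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # co-occurrence counts
unigrams = Counter(corpus[:-1])              # context counts

def surprisal(prev, word):
    """-log2 P(word | prev) from corpus co-occurrence counts: a simple
    distributional predictor of lexical processing effort."""
    return -math.log2(bigrams[(prev, word)] / unigrams[prev])

# In this corpus "the" is followed by "dog" and "cat" equally often,
# so both continuations carry the same surprisal.
print(surprisal("the", "dog"), surprisal("the", "cat"))
```

On a real corpus, higher surprisal in context is the sort of measurable environmental property that would be related to longer reading times.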
Intuitive theories as grammars for causal inference
 In A. Gopnik & L. Schulz (Eds.), Causal learning: Psychology, philosophy, and computation
, 2007
Abstract

Cited by 23 (8 self)
This chapter considers a set of questions at the interface of the study of intuitive theories, causal knowledge, and problems of inductive inference. By an intuitive theory, we mean a cognitive structure that in some important ways is analogous to a scientific theory. It is becoming broadly recognized that intuitive theories play essential roles in organizing
Reflections on frequency effects in language processing
 Studies in Second Language Acquisition
Abstract

Cited by 22 (5 self)
This response addresses the following points raised in the commentaries: (a) complementary learning mechanisms, the distinction between explicit and implicit memory, and the neuroscience of “noticing”; (b) what must and what need not be noticed for learning; (c) when frequency fails to drive learning, which addresses factors such as failing to notice cues, perseveration, transfer from L1, developmental readiness, thinking too hard, pedagogical input, and practicing; (d) attention and form-focused instruction; (e) conscious and unconscious knowledge of frequency; (f) sequences of acquisition—from formula, through low-scope pattern, to construction; (g) the Fundamental Difference hypothesis; (h) the blind faith of categorical grammar; (i) Labovian variationist perspectives; (j) parsimony and theory testing; (k) universals and predispositions; and (l) wanna-contractions. It concludes by emphasizing that language acquisition is a process of dynamic emergence and that learners’ language is a product of their history of usage in communicative interaction. What you seize is what you get. There is more to the interpretation of a journal paper than meets the eye, too. The diversity in these commentaries reminds me of Doris Lessing’s (1973) reactions to the range of letters from readers of her Golden Notebook:
The Role of Causality in Judgment Under Uncertainty
Abstract

Cited by 21 (0 self)
Leading accounts of judgment under uncertainty evaluate performance within purely statistical frameworks, holding people to the standards of classical Bayesian (Tversky & Kahneman, 1974) or frequentist (Gigerenzer & Hoffrage, 1995) norms. We argue that these frameworks have limited ability to explain the success and flexibility of people's real-world judgments, and propose an alternative normative framework based on Bayesian inferences over causal models. Deviations from traditional norms of judgment, such as "base-rate neglect", may then be explained in terms of a mismatch between the statistics given to people and the causal models they intuitively construct to support probabilistic reasoning. Four experiments show that when a clear mapping can be established from given statistics to the parameters of an intuitive causal model, people are more likely to use the statistics appropriately, and that when the classical and causal Bayesian norms differ in their prescriptions, people's judgments are more consistent with causal Bayesian norms.
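The classical Bayesian norm against which "base-rate neglect" is measured is just Bayes' rule applied to diagnosis. The numbers below follow the standard textbook version of the mammography problem, used here only to illustrate the norm the abstract contrasts with causal-model reasoning.

```python
def posterior(base_rate, hit_rate, false_alarm_rate):
    """Classical Bayesian norm for a diagnosis problem:
    P(disease | positive test) via Bayes' rule."""
    p_pos = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    return base_rate * hit_rate / p_pos

# With a 1% base rate, an 80% hit rate, and a 9.6% false-alarm rate,
# a positive test is far weaker evidence than intuition suggests.
print(round(posterior(0.01, 0.80, 0.096), 3))
```

People's common answer of 70–80% ignores the base rate; the causal-Bayesian proposal is that such deviations reflect the causal model people build from the stated statistics, not a failure of probabilistic reasoning as such.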