Burn-in, bias, and the rationality of anchoring
Abstract

Cited by 4 (1 self)
Bayesian inference provides a unifying framework for learning, reasoning, and decision making. Unfortunately, exact Bayesian inference is intractable in all but the simplest models. Therefore minds and machines have to approximate Bayesian inference. Approximate inference algorithms can achieve a wide range of time-accuracy tradeoffs, but what is the optimal tradeoff? We investigate time-accuracy tradeoffs using the Metropolis-Hastings algorithm as a metaphor for the mind’s inference algorithm(s). We characterize the optimal time-accuracy tradeoff mathematically in terms of the number of iterations and the resulting bias as functions of time cost, error cost, and the difficulty of the inference problem. We find that reasonably accurate decisions are possible long before the Markov chain has converged to the posterior distribution, i.e. during the period known as “burn-in”. Therefore the strategy that is optimal subject to the mind’s bounded processing speed and opportunity costs may perform so few iterations that the resulting samples are biased towards the initial value. The resulting cognitive process model provides a rational basis for the anchoring-and-adjustment heuristic. The model’s quantitative predictions match published data on anchoring in numerical estimation tasks. In conclusion, resource-rationality – the optimal use of finite computational resources – naturally leads to a biased mind.
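The abstract's central claim, that stopping Metropolis-Hastings during burn-in leaves estimates biased toward the chain's initial value (the "anchor"), can be illustrated with a minimal sketch; the function names, step size, and iteration counts below are illustrative choices of mine, not the paper's:

```python
import math
import random

def metropolis_hastings(log_p, x0, n_iter, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings; returns the chain's final state."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_iter):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal)/p(x)).
        if rng.random() < math.exp(min(0.0, log_p(proposal) - log_p(x))):
            x = proposal
    return x

# Target: a standard-normal "posterior"; the chain starts far away at x0 = 10.
log_p = lambda x: -0.5 * x * x
few  = sum(metropolis_hastings(log_p, 10.0, 5,   seed=s) for s in range(200)) / 200
many = sum(metropolis_hastings(log_p, 10.0, 500, seed=s) for s in range(200)) / 200
# With few iterations the average estimate stays biased toward the anchor of 10;
# with many iterations it approaches the posterior mean of 0.
```

Averaging over many chains makes the burn-in bias visible: `few` sits well above the true posterior mean, while `many` is close to it.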
The origins of probabilistic inference in human infants.
Cognition, 2014
Abstract

Cited by 4 (0 self)
Reasoning under uncertainty is the bread and butter of everyday life. Many areas of psychology, from cognitive, developmental, and social to clinical, are interested in how individuals make inferences and decisions with incomplete information. The ability to reason under uncertainty necessarily involves probability computations, be they exact calculations or estimations. What are the developmental origins of probabilistic reasoning? Recent work has begun to examine whether infants and toddlers can compute probabilities; however, previous experiments have confounded quantity and probability: in most cases young human learners could have relied on simple comparisons of absolute quantities, as opposed to proportions, to succeed in these tasks. We present four experiments providing evidence that infants younger than 12 months show sensitivity to probabilities based on proportions. Furthermore, infants use this sensitivity to make predictions and fulfill their own desires, providing the first demonstration that even preverbal learners use probabilistic information to navigate the world. These results provide strong evidence for a rich quantitative and statistical reasoning system in infants.
Is that your final answer? The effects of neutral queries on children’s choices
Abstract

Cited by 1 (0 self)
Preschoolers often switch a response on repeated questioning, even though no new evidence has been provided (Krahenbuhl, Blades, & Eiser, 2009). Though apparently irrational, this behavior may be understood as children making an inductive inference based on their beliefs about whether initial responses were correct and the knowledgeability of the questioner. We present a probabilistic model of how questioners’ knowledge and biases to be positive should affect inferences. The model generates the qualitative prediction that an ideal learner should switch responses more often following a “neutral query” from a knowledgeable questioner than following queries from an ignorant questioner. We test predictions of the model in an experiment. The results show that four-year-old children are sensitive to questioners’ knowledge when responding to a neutral query, demonstrating more switching behavior when the query is provided by a knowledgeable questioner. We conclude by discussing the practical and theoretical implications for cognitive development. When should a learner abandon their current hypothesis in favor of a new one? It is becoming clear that even preschool-aged children rationally update beliefs and generate new explanations following informative evidence (Gopnik, Gly
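The qualitative prediction the abstract describes follows from a one-line application of Bayes' rule. The sketch below is my own toy version, not the paper's model: the learner infers the probability that the initial answer was wrong, given that the questioner chose to re-query, and the two questioner types differ only in their assumed re-query likelihoods:

```python
def p_switch(p_correct_prior, p_query_given_wrong, p_query_given_correct):
    """Posterior probability the initial answer was wrong, given a neutral
    re-query, via Bayes' rule: P(wrong | query) ∝ P(query | wrong) P(wrong)."""
    p_wrong = 1.0 - p_correct_prior
    num = p_query_given_wrong * p_wrong
    den = num + p_query_given_correct * p_correct_prior
    return num / den

# Assumption: a knowledgeable questioner is more likely to re-query a wrong
# answer, while an ignorant one re-queries at the same rate either way.
knowledgeable = p_switch(0.7, p_query_given_wrong=0.8, p_query_given_correct=0.2)
ignorant      = p_switch(0.7, p_query_given_wrong=0.5, p_query_given_correct=0.5)
# knowledgeable > ignorant: an ideal learner switches more often after a
# knowledgeable questioner's neutral query.
```

With an uninformative questioner the posterior simply equals the prior probability of being wrong (0.3 here), so any switching beyond that baseline is attributable to the questioner's knowledge.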
Online learning of causal structure in a dynamic game situation
"... Agents situated in a dynamic environment with an initially unknown causal structure, which, moreover, links certain behavioral choices to rewards, must be able to learn such structure incrementally on the fly. We report an experimental study that characterizes human learning in a controlled dynamic ..."
Abstract
Agents situated in a dynamic environment with an initially unknown causal structure, which, moreover, links certain behavioral choices to rewards, must be able to learn such structure incrementally on the fly. We report an experimental study that characterizes human learning in a controlled dynamic game environment, and describe a computational model that is capable of similar learning. The model learns by building up a representation of the hypothesized causes and effects, including estimates of the strength of each causal interaction. It is driven initially by simple guesses regarding such interactions, inspired by events occurring in close temporal succession. The model maintains its structure dynamically (including omitting or even reversing the current best-guess dependencies, if warranted by new evidence), and estimates the projected probability of possible outcomes by performing inference on the resulting Bayesian network. The model reproduces the human performance in the present dynamical task.
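The first step the abstract describes, guessing causal links from events occurring in close temporal succession and pruning weak ones, can be sketched as follows. This is a hypothetical scheme of mine (window size, learning rate, and pruning threshold are all invented parameters), not the paper's model:

```python
from collections import defaultdict

def update_structure(events, window=2, threshold=0.2, lr=0.1):
    """Guess cause -> effect links from close temporal succession in a
    time-sorted event stream, tracking a bounded strength per link;
    links that stay weak are pruned (hypothetical scheme)."""
    strength = defaultdict(float)
    for i, (t_c, cause) in enumerate(events):
        for t_e, effect in events[i + 1:]:
            if t_e - t_c > window:
                break  # events are time-sorted, so later gaps only grow
            if cause != effect:
                # Saturating update: strength approaches 1 with repetition.
                strength[(cause, effect)] += lr * (1.0 - strength[(cause, effect)])
    return {link: s for link, s in strength.items() if s >= threshold}

events = [(0, "press"), (1, "light"), (3, "press"), (4, "light"),
          (6, "press"), (7, "light"), (9, "noise")]
links = update_structure(events)
# "press" -> "light" is repeatedly reinforced and survives pruning;
# one-off pairings such as "light" -> "noise" stay below threshold.
```

In the full model described by the abstract, surviving links would seed a Bayesian network on which outcome probabilities are computed; here only the structure-guessing step is shown.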
Approximating Bayesian Inference with Monte Carlo Methods
"... Rational Randomness: The role of sampling in an algorithmic account of ..."
“Statistical Learning, Inductive Bias, and Bayesian Inference in Language Acquisition”
"... Language acquisition is a problem of induction: the child learner is faced with a set of specific linguistic examples and must infer some abstract linguistic knowledge that allows the child to generalize beyond the observed data, i.e., to both understand and generate new examples. Many different gen ..."
Abstract
Language acquisition is a problem of induction: the child learner is faced with a set of specific linguistic examples and must infer some abstract linguistic knowledge that allows the child to generalize beyond the observed data, i.e., to both understand and generate new examples. Many different generalizations are logically possible given any particular set of input data, yet different children within a linguistic community end up with the same adult grammars. This fact suggests that children are biased towards making certain kinds of generalizations rather than others. The nature and extent of children's inductive bias for language is highly controversial, with some researchers assuming that it is detailed and domain-specific (e.g., Chomsky 1973, Baker 1978,
Learning Causal Structure through Local Prediction-error Learning
Abstract
Research on human causal learning has largely focused on strength learning, or on computational-level theories; there are few formal algorithmic models of how people learn causal structure from covariations. We introduce a model that learns causal structure in a local manner via prediction-error learning. This local learning is then integrated dynamically into a unified representation of causal structure. The model uses computationally plausible approximations of (locally) rational learning, and so represents a hybrid between the associationist and rational paradigms in causal learning research. We conclude by showing that the model provides a good fit to data from a previous experiment.
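The abstract does not specify which prediction-error rule the model uses; the classic Rescorla-Wagner delta rule serves as a stand-in to show what local prediction-error learning of causal strength looks like (learning rate and trial design below are my own illustrative choices):

```python
def rescorla_wagner(trials, lr=0.3):
    """Delta-rule (prediction-error) update of per-cue causal strengths:
    w[c] += lr * (outcome - prediction) for each cue present on a trial,
    where prediction is the summed strength of the present cues."""
    w = {}
    for cues, outcome in trials:
        prediction = sum(w.get(c, 0.0) for c in cues)
        error = outcome - prediction
        for c in cues:
            w[c] = w.get(c, 0.0) + lr * error
    return w

# Cue A alone reliably predicts the outcome; cue B then appears only
# alongside A, so B is "blocked": A already predicts the outcome, the
# prediction error is near zero, and B accrues little strength.
trials = [({"A"}, 1.0)] * 10 + [({"A", "B"}, 1.0)] * 10
w = rescorla_wagner(trials)
```

Because the update is driven only by the local error on each trial, this rule is associationist in character, which is exactly the hybrid flavor the abstract contrasts with purely rational, computational-level accounts.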