Results 11–20 of 599
Rational explanation of the selection task
 Psychological Review
, 1996
Abstract

Cited by 61 (7 self)
M. Oaksford and N. Chater (O&C; 1994) presented the first quantitative model of P. C. Wason's (1966, 1968) selection task in which performance is rational. J. St. B. T. Evans and D. E. Over (1996) reply that O&C's account is normatively incorrect and cannot model K. N. Kirby's (1994b) or P. Pollard and J. St. B. T. Evans's (1983) data. It is argued that an equivalent measure satisfies their normative concerns and that a modification of O&C's model accounts for their empirical concerns. D. Laming (1996) argues that O&C made unjustifiable psychological assumptions and that a "correct" Bayesian analysis agrees with logic. It is argued that O&C's model makes normative and psychological sense and that Laming's analysis is not Bayesian. A. Almor and S. A. Sloman (1996) argue that O&C cannot explain their data. It is argued that Almor and Sloman's data do not bear on O&C's model because they alter the nature of the task. It is concluded that O&C's model remains the most compelling and comprehensive account of the selection task.

Research on Wason's (1966, 1968) selection task questions human rationality because performance is not "logically correct." Recently, Oaksford and Chater (O&C; 1994) provided a rational analysis (Anderson, 1990, 1991) of the selection task that appeared to vindicate human rationality. O&C argued that the selection task is an inductive, rather than a deductive, reasoning task: Participants must assess the truth or falsity of a general rule from specific instances. In particular, participants face a problem of optimal data selection (Lindley, 1956): They must decide which of four cards (p, not-p, q, or not-q) is likely to provide the most useful data to test a conditional rule, if p then q. The "logical" solution is to select the p and the not-q cards. O&C argued that this solution presupposes falsificationism (Popper, 1959), which holds that only data that can disconfirm, not confirm, hypotheses are of interest. In contrast, O&C's rational analysis uses a Bayesian approach to inductive ...
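The optimal data selection idea summarized in this abstract can be illustrated with a small Bayesian calculation. The sketch below is not O&C's published parameterization; it is a minimal expected-information-gain computation over a hypothetical dependence model MD (if p then q holds) and independence model MI, with assumed rarity parameters a = P(p) and b = P(q | not-p):

```python
from math import log2

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

def joint(model, a, b):
    """P(antecedent, consequent) table for one model. a = P(p); b = P(q | not-p).
    Dependence model MD: P(q|p) = 1; independence model MI: P(q|p) = b."""
    pq = 1.0 if model == "MD" else b
    return {("p", "q"): a * pq, ("p", "nq"): a * (1 - pq),
            ("np", "q"): (1 - a) * b, ("np", "nq"): (1 - a) * (1 - b)}

def expected_info_gain(card, a, b):
    """Expected reduction in uncertainty about {MD, MI} from turning `card`."""
    priors = {"MD": 0.5, "MI": 0.5}
    tables = {m: joint(m, a, b) for m in priors}
    axis = 0 if card in ("p", "np") else 1          # which side is visible
    hidden_vals = ("q", "nq") if axis == 0 else ("p", "np")

    def p_hidden(m, h):                             # P(hidden | visible, model)
        t = tables[m]
        vis = sum(v for k, v in t.items() if k[axis] == card)
        return sum(v for k, v in t.items()
                   if k[axis] == card and k[1 - axis] == h) / vis

    exp_posterior_entropy = 0.0
    for h in hidden_vals:
        ph = sum(priors[m] * p_hidden(m, h) for m in priors)
        if ph > 0:
            posterior = [priors[m] * p_hidden(m, h) / ph for m in priors]
            exp_posterior_entropy += ph * entropy(posterior)
    return entropy(priors.values()) - exp_posterior_entropy

# With a rare antecedent and consequent (a = b = 0.1), the expected gains
# order the cards p > q > not-q > not-p, mirroring observed selection rates;
# the not-p card is uninformative because both models predict its back equally.
gains = {c: expected_info_gain(c, 0.1, 0.1) for c in ("p", "q", "nq", "np")}
```

Under these assumed parameters the "logically correct" not-q card carries less expected information than the commonly chosen q card, which is the flavor of O&C's vindication argument.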
Policy Evaluation in Uncertain Economic Environments
 BROOKINGS PAPERS ON ECONOMIC ACTIVITY
, 2003
Abstract

Cited by 58 (11 self)
It will be remembered that the seventy translators of the Septuagint were shut up in seventy separate rooms with the Hebrew text and brought out with them, when they emerged, seventy identical translations. Would the same miracle be vouchsafed if seventy multiple correlators were shut up with the same statistical material? And anyhow, I suppose, if each had a different economist perched on his a priori, that would make a difference to the outcome.
Random Worlds and Maximum Entropy
 In Proc. 7th IEEE Symp. on Logic in Computer Science
, 1994
Abstract

Cited by 55 (13 self)
Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula φ holds given KB. If we are reasoning about a world or system consisting of N individuals, then we can consider all possible worlds, or first-order models, with domain {1, …, N} that satisfy KB, and compute the fraction of them in which φ is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying φ and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum-entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics and artificial intelligence, but is far more general. Of equal interest to the result itself are...
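The fraction-of-worlds computation this abstract describes can be brute-forced directly for a tiny domain. The sketch below uses a hypothetical KB and query with a single unary predicate F: it enumerates every first-order model over {1, …, N}, keeps those satisfying KB, and returns the fraction in which φ holds:

```python
from fractions import Fraction
from itertools import product

def degree_of_belief(N, kb, phi):
    """Fraction of KB-satisfying worlds over domain {1..N} in which phi holds.
    A world assigns True/False to one unary predicate F for each individual."""
    worlds = [dict(zip(range(1, N + 1), bits))
              for bits in product([False, True], repeat=N)]
    kb_worlds = [w for w in worlds if kb(w)]
    return Fraction(sum(phi(w) for w in kb_worlds), len(kb_worlds))

# Hypothetical KB: exactly 3 of the 5 individuals satisfy F (a statistical fact).
# Query phi: F holds of the named individual 1.
belief = degree_of_belief(5,
                          kb=lambda w: sum(w.values()) == 3,
                          phi=lambda w: w[1])
# belief == Fraction(3, 5): the statistical proportion transfers, by symmetry,
# to the degree of belief about a specific individual.
```

The paper's contribution is precisely that for large N this symmetry argument generalizes into a maximum-entropy computation rather than an explicit enumeration, which is exponential in N as written here.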
The plurality of Bayesian measures of confirmation and the problem of measure sensitivity
 Philosophy of Science 66 (Proceedings), S362–S378
, 1999
Abstract

Cited by 54 (13 self)
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of nonequivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.

1 Preliminaries
1.1 Terminology, Notation, and Basic Assumptions
The present paper is concerned with the degree of incremental confirmation provided by evidential propositions E for hypotheses under test H, given background knowledge K, according to relevance measures of degree of confirmation c. We say that c is a relevance measure of degree of confirmation if and only if c satisfies the following constraints, in cases where E confirms, disconfirms, or is confirmationally irrelevant to H, given background knowledge K. ...
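The measure sensitivity problem this abstract surveys is easy to exhibit numerically. In the sketch below, two hypothetical prior/posterior pairs are ranked in opposite orders by two standard relevance measures: the difference measure d(H, E) = P(H|E) - P(H) and the log-ratio measure r(H, E) = log[P(H|E)/P(H)]. The probability values are invented for illustration:

```python
from math import log2

def diff(p_h, p_h_given_e):
    """Difference measure of incremental confirmation: d(H, E)."""
    return p_h_given_e - p_h

def log_ratio(p_h, p_h_given_e):
    """Log-ratio measure of incremental confirmation: r(H, E)."""
    return log2(p_h_given_e / p_h)

# Hypothetical (prior, posterior) pairs for two evidence/hypothesis cases:
case_a = (0.5, 0.99)   # a likely hypothesis pushed to near-certainty
case_b = (0.01, 0.1)   # an unlikely hypothesis made ten times more probable

# The difference measure ranks A as the stronger confirmation (0.49 vs 0.09),
# while the log-ratio measure ranks B as stronger (log2 10 vs log2 1.98):
# an ordinal disagreement, so any argument comparing the two cases is
# sensitive to the choice of measure.
assert diff(*case_a) > diff(*case_b)
assert log_ratio(*case_b) > log_ratio(*case_a)
```

Both functions are relevance measures in the paper's sense (positive when E confirms H, zero when irrelevant, negative when E disconfirms H), yet they disagree on comparative judgments.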
From Statistics to Beliefs
, 1992
Abstract

Cited by 48 (13 self)
An intelligent agent uses known facts, including statistical knowledge, to assign degrees of belief to assertions it is uncertain about. We investigate three principled techniques for doing this. All three are applications of the principle of indifference, because they assign equal degree of belief to all basic "situations" consistent with the knowledge base. They differ because there are competing intuitions about what the basic situations are. Various natural patterns of reasoning, such as the preference for the most specific statistical data available, turn out to follow from some or all of the techniques. This is an improvement over earlier theories, such as work on direct inference and reference classes, which arbitrarily postulate these patterns without offering any deeper explanations or guarantees of consistency. The three methods we investigate have surprising characterizations: there are connections to the principle of maximum entropy, a principle of maximal independence, an...
On the logic and purpose of significance testing.
 Psychological Methods,
, 1997
Abstract

Cited by 43 (1 self)
There has been much recent attention given to the problems involved with the traditional approach to null hypothesis significance testing (NHST). Many have suggested that, perhaps, NHST should be abandoned altogether in favor of other bases for conclusions, such as confidence intervals and effect size estimates (e.g., ...). The topic of this article is null hypothesis significance testing (NHST; ...
Ambiguity aversion, comparative ignorance, and decision context
 Organizational Behavior and Human Decision Processes
, 2002
Abstract

Cited by 40 (3 self)
People typically find bets less attractive when the probability of receiving a prize is more vague or ambiguous (Ellsberg, 1961). According to Fox and Tversky’s (1995) comparative ignorance hypothesis, ambiguity aversion is driven by the comparison with more familiar events or more knowledgeable individuals, and diminishes or disappears in the absence of such a comparison. In this paper we emphasize that “comparative ignorance” refers to the state of mind of the decision maker. We extend the comparative ignorance hypothesis by documenting four new ways in which decision context can affect willingness to act under uncertainty that do not rely on the comparative-versus-noncomparative evaluation paradigm used in previous studies. First, people find uncertain bets more attractive when preceded by questions about less familiar items than when preceded by questions about more familiar items. Second, the preference to bet on more familiar domains ...
Objective and Subjective Rationality in a Multiple Prior Model
"... The copyright to this Article is held by the Econometric Society. It may be downloaded, printed and reproduced only for educational or research purposes, including use in course packs. No downloading or copying may be done for any commercial purpose without the explicit permission of the Econometric ..."
Abstract

Cited by 38 (8 self)
The copyright to this Article is held by the Econometric Society. It may be downloaded, printed and reproduced only for educational or research purposes, including use in course packs. No downloading or copying may be done for any commercial purpose without the explicit permission of the Econometric Society. For such commercial purposes contact the Office of the Econometric Society (contact information may be found at the website
Comparative ignorance and the Ellsberg Paradox.
 Journal of Risk and Uncertainty,
, 2001
Abstract

Cited by 34 (2 self)
We investigate the evaluation of known (where the probability is known) and unknown (where the probability is unknown) bets in comparative and noncomparative contexts. A series of experiments supports the finding that ambiguity avoidance persists in both comparative and noncomparative conditions. The price difference between known and unknown bets is, however, larger in a comparative evaluation than in separate evaluation. Our results are consistent with Fox and Tversky's (1995) comparative ignorance hypothesis, but we find that the strong result obtained by Fox and Tversky is more fragile, and the complete disappearance of ambiguity aversion in the noncomparative condition may not be as robust as Fox and Tversky supposed.