Results 1 - 9 of 9
Decision field theory: A dynamic-cognitive approach to decision making (Tech. Rep.)
, 1989
Abstract

Cited by 264 (14 self)
Decision field theory provides a mathematical foundation leading to a dynamic, stochastic theory of decision behavior in an uncertain environment. This theory is used to explain (a) violations of stochastic dominance, (b) violations of strong stochastic transitivity, (c) violations of independence between alternatives, (d) serial position effects on preference, (e) speed-accuracy tradeoff effects in decision making, (f) the inverse relation between choice probability and decision time, (g) changes in the direction of preference under time pressure, (h) slower decision times for avoidance as compared with approach conflicts, and (i) preference reversals between choice and selling price measures of preference. The proposed theory is compared with 4 other theories of decision making under uncertainty. Beginning with von Neumann and Morgenstern's (1947) classic expected utility theory, steady progress has been made in the development of formal theories of decision making under risk and uncertainty. For rational theorists, the goal has been to formulate a logical foundation for representing the preferences of an ideal decision maker (e.g., Machina, 1982; Savage,
Decisions from experience and statistical probabilities: why they trigger different choices than a priori probabilities
, 2010
Abstract

Cited by 17 (0 self)
The distinction between risk and uncertainty is deeply entrenched in psychologists' and economists' thinking. Knight (1921), to whom it is frequently attributed, however, went beyond this dichotomy. Within the domain of risk, he set apart a priori and statistical probabilities, a distinction that maps onto that between decisions from description and experience, respectively. We argue this distinction is important because risky choices based on a priori (described) and statistical (experienced) probabilities can substantially diverge. To understand why, we examine various possible contributing factors to the description–experience gap. We find that payoff variability and memory limitations play only a small role in the emergence of the gap. In contrast, the presence of rare events and their representation as either natural frequencies in decisions from experience or single-event probabilities in decisions from description appear relevant for the gap. Copyright © 2009 John Wiley & Sons, Ltd. Keywords: decisions; experience; information representation; rare events; risk and uncertainty; risky choice; sampling
Computational Rationalization: The Inverse Equilibrium Problem
Abstract

Cited by 13 (2 self)
Modeling the purposeful behavior of imperfect agents from a small number of observations is a challenging task. When restricted to the single-agent decision-theoretic setting, inverse optimal control techniques assume that observed behavior is an approximately optimal solution to an unknown decision problem. These techniques learn a utility function that explains the example behavior and can then be used to accurately predict or imitate future behavior in similar observed or unobserved situations. In this work, we consider similar tasks in competitive and cooperative multi-agent domains. Here, unlike single-agent settings, a player cannot myopically maximize its reward; it must speculate on how the other agents may act to influence the game's outcome. Employing the game-theoretic notion of regret and the principle of maximum entropy, we introduce a technique for predicting and generalizing behavior, as well as recovering a reward function in these domains.
Multi-agent learning and the descriptive value of simple models
, 2007
A Contextual Theory of Stochastic Discrete Choice Under Risk
, 2007
Cross-Cultural Differences in Decisions from Experience
Correspondence concerning this article should be addressed to Davide Marchiori,
CHOICE BEHAVIOR AND REWARD STRUCTURE
, 1963
Abstract
A model for choice behavior under payoff is presented. Predictions of choice probabilities are evaluated for several experiments involving different event probabilities and payoff levels, two and three choices, and contingent and noncontingent reinforcement. An extension of the model to the prediction of response time is also considered.