Beyond theory and data in preference modeling: Bringing humans into the loop
In Proceedings of the 4th International Conference on Algorithmic Decision Theory (ADT), 2015
"... Abstract. Many mathematical frameworks aim at modeling human preferences, employing a number of methods including utility functions, qualitative preference statements, constraint optimization, and logic for-malisms. The choice of one model over another is usually based on the assumption that it can ..."
Cited by 3 (3 self)
Abstract. Many mathematical frameworks aim at modeling human preferences, employing a number of methods including utility functions, qualitative preference statements, constraint optimization, and logic formalisms. The choice of one model over another is usually based on the assumption that it can accurately describe the preferences of humans or other subjects/processes in the considered setting and is computationally tractable. Verification of these preference models often leverages some form of real-life or domain-specific data, demonstrating that the models can predict the series of choices observed in the past. We argue that this is not enough: to evaluate a preference model, humans must be brought into the loop. Human experiments in controlled environments are needed to avoid common pitfalls associated with exclusively using prior data, including introducing bias in the attempt to clean the data, mistaking correlation for causality, or testing data in a context that is different from the one where the data were produced. Human experiments need to be done carefully, and we advocate a multi-disciplinary research environment that includes experimental psychologists and AI researchers. We argue that experiments should be used to validate models. We detail the design of an experiment in order to highlight some of the significant computational, conceptual, ethical, mathematical, psychological, and statistical hurdles to testing whether decision makers' preferences are consistent with a particular mathematical model of preferences.
Testing Mixture Models of Transitive Preference: Comment on Regenwetter, Dana, and Davis-Stober (2011)
"... This article contrasts 2 approaches to analyzing transitivity of preference and other behavioral properties in choice data. The approach of Regenwetter, Dana, and Davis-Stober (2011) assumes that on each choice, a decision maker samples randomly from a mixture of preference orders to determine wheth ..."
This article contrasts 2 approaches to analyzing transitivity of preference and other behavioral properties in choice data. The approach of Regenwetter, Dana, and Davis-Stober (2011) assumes that on each choice, a decision maker samples randomly from a mixture of preference orders to determine whether A is preferred to B. In contrast, Birnbaum and Gutierrez (2007) assumed that within each block of trials, the decision maker has a true set of preferences and that random errors generate variability of response. In this latter approach, preferences are allowed to differ between people; within-person, they might differ between repetition blocks. Both approaches allow mixtures of preferences, both assume a type of independence, and both yield statistical tests. They differ with respect to the locus of independence in the data. The approaches also differ in the criterion for assessing the success of the models. Regenwetter et al. fitted only marginal choice proportions and assumed that choices are independent, which means that a mixture cannot be identified from the data. Birnbaum and Gutierrez fitted choice combinations with replications; their approach allows estimation of the probabilities in the mixture. It is suggested that researchers should separate tests of the stochastic model from the test of transitivity. Evidence testing independence and stationarity assumptions is presented. Available data appear to fit the assumption that errors are independent better than they fit the assumption that choices are independent.
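To make the locus-of-independence contrast concrete, the following is a minimal simulation sketch (my own, not code from either paper; the three-item set, mixture weights, and error rate are illustrative assumptions) of how each specification generates one block of paired-comparison responses:

```python
import itertools
import random

ITEMS = "ABC"
PAIRS = [("A", "B"), ("B", "C"), ("A", "C")]
ORDERS = list(itertools.permutations(ITEMS))  # all 6 linear orders on 3 items

def prefers(order, x, y):
    """True if x precedes (is preferred to) y in the linear order."""
    return order.index(x) < order.index(y)

def mixture_model_block(weights):
    """Mixture-of-orders sketch (Regenwetter-style): each paired comparison
    independently draws a preference order from the mixture and responds
    according to it, so responses are independent across pairs."""
    return {(x, y): prefers(random.choices(ORDERS, weights=weights)[0], x, y)
            for (x, y) in PAIRS}

def true_and_error_block(true_order, error_rate):
    """True-and-error sketch (Birnbaum-style): one 'true' order holds for the
    whole block; each response is independently flipped with probability
    error_rate, so variability comes from errors, not preference change."""
    return {(x, y): prefers(true_order, x, y) != (random.random() < error_rate)
            for (x, y) in PAIRS}

random.seed(1)
weights = [0.5, 0.0, 0.0, 0.0, 0.0, 0.5]  # assumed mixture over the 6 orders
print(mixture_model_block(weights))
print(true_and_error_block(ORDERS[0], error_rate=0.1))
```

The design difference is visible in the code: resampling the order on every choice makes responses to different pairs independent, whereas fixing the order within a block induces dependencies across repeated choice combinations, which is what the true-and-error analysis exploits to estimate the probabilities in the mixture.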
Testing theories of risky decision making via critical tests
2011
"... Whereas some people regard models of risky decision making as if they were statistical summaries of data collected for some other purpose, I think of models as theories that can be tested by experiments. I argue that comparing theories by means of global indices of fit is not a fruitful way to evalu ..."
Whereas some people regard models of risky decision making as if they were statistical summaries of data collected for some other purpose, I think of models as theories that can be tested by experiments. I argue that comparing theories by means of global indices of fit is not a fruitful way to evaluate theories of risky decision making. I argue instead for experimental science. That is, test critical properties, which are theorems of one model that are violated by a rival model. Recent studies illustrate how conclusions based on fit can be overturned by critical tests. Elsewhere, I have warned against drawing
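As a toy illustration of what a critical test can look like in practice (my own sketch, not from the paper; the choice proportions are invented), the code below checks two critical properties on binary choice proportions for three items instead of computing a global fit index:

```python
# p_ab is the observed proportion of trials on which A was chosen over B, etc.

def satisfies_weak_stochastic_transitivity(p_ab, p_bc, p_ac):
    """If P(A,B) >= 1/2 and P(B,C) >= 1/2, WST requires P(A,C) >= 1/2.
    Simplified: a full test would check every ordering of the triple."""
    if p_ab >= 0.5 and p_bc >= 0.5:
        return p_ac >= 0.5
    return True  # the antecedent does not apply to this ordering of items

def satisfies_triangle_inequality(p_ab, p_bc, p_ac):
    """A necessary condition for a mixture of linear orders on 3 items:
    P(A,B) + P(B,C) - P(A,C) must not exceed 1."""
    return p_ab + p_bc - p_ac <= 1.0

# Hypothetical proportions on which the two properties disagree.
p_ab, p_bc, p_ac = 0.6, 0.6, 0.4
print("WST holds:     ", satisfies_weak_stochastic_transitivity(p_ab, p_bc, p_ac))
print("triangle holds:", satisfies_triangle_inequality(p_ab, p_bc, p_ac))
```

On these made-up proportions the two properties give different verdicts, which illustrates the point of critical tests: a property that is a theorem of one model can be violated while a necessary condition of a rival model still holds, so the choice of property, not a global fit index, decides between the theories.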
Commentary: "Neural signatures of intransitive preferences"
Front. Hum. Neurosci. 9:509, 2015. doi: 10.3389/fnhum.2015.00509
The default pull: An experimental demonstration of subtle default effects on preferences
Reply: Birnbaum's (2012) statistical tests of independence have unknown Type-I error rates and do not replicate within participant