Results 1–10 of 13
Decision making under incomplete data using the imprecise Dirichlet model, 2006
Abstract

Cited by 7 (2 self)
The paper presents an efficient solution to decision problems where direct partial information on the distribution of the states of nature is available, either by observations of previous repetitions of the decision problem or by direct expert judgements. To process this information we use a recent generalization of Walley’s imprecise Dirichlet model, which also allows us to handle incomplete observations or imprecise judgements. We derive efficient algorithms and discuss properties of the optimal solutions. In the case of precise data and pure actions we are surprisingly led to a frequency-based variant of the Hodges-Lehmann criterion, which was developed in classical decision theory as a compromise between Bayesian and minimax procedures.
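As a hedged illustration of the kind of interval such a model produces (a sketch of Walley's imprecise Dirichlet model itself, not of the paper's decision algorithms), the IDM with prior strength s assigns each category with count n out of N observations the predictive probability interval [n/(N+s), (n+s)/(N+s)]:

```python
def idm_intervals(counts, s=2.0):
    """Walley's imprecise Dirichlet model (IDM): for observed counts
    n_1..n_k (total N), the predictive probability of category i lies in
    [n_i / (N + s), (n_i + s) / (N + s)], where s is the prior strength."""
    N = sum(counts)
    return [(n / (N + s), (n + s) / (N + s)) for n in counts]

# Illustrative data: 6, 3, 1 observations of three states of nature, s = 2.
for lo, hi in idm_intervals([6, 3, 1], s=2.0):
    print(f"[{lo:.3f}, {hi:.3f}]")
# → [0.500, 0.667], [0.250, 0.417], [0.083, 0.250]
```

With no observations at all (N = 0) every interval is the vacuous [0, 1], and the intervals narrow as data accumulate, which is the model's appeal for decision making under incomplete information.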
Missing data as a causal inference problem. Forthcoming, Proceedings of NIPS, 2013
Abstract

Cited by 6 (5 self)
We address the problem of deciding whether there exists an unbiased estimator of a given relation Q when data are missing not at random. We employ a formal representation called ‘Missingness Graphs’ to explicitly portray the causal mechanisms responsible for missingness and to encode dependencies between these mechanisms and the variables being measured. Using this representation, we define the notion of recoverability, which ensures that, for a given missingness graph G and a given query Q, an algorithm exists that produces an unbiased estimate of Q. That is, in the limit of large samples, the algorithm should produce an estimate of Q as if no data were missing. We further present conditions that the graph should satisfy in order for recoverability to hold and devise algorithms to detect the presence of these conditions.
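A toy simulation (our own sketch, not taken from the paper) makes the recoverability issue concrete: when missingness is independent of the data (MCAR), the complete-case mean recovers the true mean in the large-sample limit, but when missingness depends on the unobserved value itself (MNAR), it does not:

```python
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]
true_mean = sum(xs) / len(xs)  # close to 0

# MCAR: every value goes missing with the same fixed probability 0.3.
mcar = [x for x in xs if random.random() > 0.3]

# MNAR: positive values go missing with probability 0.8 --
# missingness depends on the value that would have been observed.
mnar = [x for x in xs if random.random() > 0.8 * (x > 0)]

mean_mcar = sum(mcar) / len(mcar)  # close to true_mean: recoverable
mean_mnar = sum(mnar) / len(mnar)  # biased well below true_mean
print(mean_mcar, mean_mnar)
```

In the paper's terms, the MCAR mechanism yields a missingness graph from which the query (the mean of X) is recoverable by complete-case analysis, while the MNAR mechanism does not.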
Making Decisions Using Sets of Probabilities: Updating, Time Consistency, and Calibration
Abstract

Cited by 3 (1 self)
We consider how an agent should update her beliefs when her beliefs are represented by a set P of probability distributions, given that the agent makes decisions using the minimax criterion, perhaps the best-studied and most commonly used criterion in the literature. We adopt a game-theoretic framework, where the agent plays against a bookie, who chooses some distribution from P. We consider two reasonable games that differ in what the bookie knows when he makes his choice. Anomalies that have been observed before, like time inconsistency, can be understood as arising because different games are being played, against bookies with different information. We characterize the important special cases in which the optimal decision rules according to the minimax criterion amount to either conditioning or simply ignoring the information. Finally, we consider the relationship between updating and calibration when uncertainty is described by sets of probabilities. Our results emphasize the key role of the rectangularity condition of Epstein and Schneider.
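The minimax criterion the abstract refers to can be sketched in a few lines: choose the action whose worst-case expected utility over the set P is largest. The payoff numbers below are purely illustrative:

```python
def minimax_action(utilities, P):
    """utilities[a][s]: utility of action a in state s.
    P: a list of candidate probability distributions over states.
    Returns the index of the action whose worst-case (over P)
    expected utility is largest."""
    def worst_case(a):
        return min(sum(p_s * u for p_s, u in zip(p, utilities[a]))
                   for p in P)
    return max(range(len(utilities)), key=worst_case)

# Two actions, two states; P contains two candidate distributions.
U = [[10, 0],   # action 0: great in state 0, worthless in state 1
     [4, 4]]    # action 1: safe constant payoff
P = [[0.9, 0.1], [0.2, 0.8]]
print(minimax_action(U, P))  # → 1: the safe action wins in the worst case
```

Action 0's expected utility is 9 under the first distribution but only 2 under the second, so its worst case (2) loses to the safe action's guaranteed 4, which is exactly the conservatism the minimax bookie exploits.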
Demystifying Dilation
Abstract

Cited by 1 (1 self)
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (i) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (ii) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models.
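A standard-style numeric example (our own sketch, not necessarily the paper's) shows the phenomenon: fix the marginals P(E) = P(F) = 1/2 but leave the joint probability P(E and F) free in [0, 1/2]. Unconditionally, P(E) is the point value 1/2; yet P(E | F) sweeps out the entire interval [0, 1], so conditioning on F strictly widens the estimate no matter the outcome:

```python
def conditional_interval(steps=1000):
    """Sweep P(E and F) over [0, 1/2] with P(E) = P(F) = 1/2 fixed,
    and track the range of P(E | F) = P(E and F) / P(F)."""
    lo, hi = 1.0, 0.0
    for k in range(steps + 1):
        p_ef = 0.5 * k / steps      # joint probability P(E and F)
        p_e_given_f = p_ef / 0.5    # since P(F) = 1/2
        lo, hi = min(lo, p_e_given_f), max(hi, p_e_given_f)
    return lo, hi

print(conditional_interval())  # → (0.0, 1.0): the point estimate 1/2
                               #   dilates to the vacuous interval [0, 1]
```

By symmetry the same happens conditioning on not-F, so every cell of the partition dilates the estimate, which is exactly the feature the critics call pathological.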
A Game-Theoretic Analysis of Updating Sets of Probabilities, 2008
Abstract

Cited by 1 (0 self)
We consider how an agent should update her uncertainty when it is represented by a set P of probability distributions and the agent observes that a random variable X takes on value x, given that the agent makes decisions using the minimax criterion, perhaps the best-studied and most commonly used criterion in the literature. We adopt a game-theoretic framework, where the agent plays against a bookie, who chooses some distribution from P. We consider two reasonable games that differ in what the bookie knows when he makes his choice. Anomalies that have been observed before, like time inconsistency, can be understood as arising because different games are being played, against bookies with different information. We characterize the important special cases in which the optimal decision rules according to the minimax criterion amount to either conditioning or simply ignoring the information. Finally, we consider the relationship between conditioning and calibration when uncertainty is described by sets of probabilities.
Forthcoming in Erkenntnis. Penultimate version. Demystifying Dilation
Abstract
Dilation occurs when an interval probability estimate of some event E is properly included in the interval probability estimate of E conditional on every event F of some partition, which means that one’s initial estimate of E becomes less precise no matter how an experiment turns out. Critics maintain that dilation is a pathological feature of imprecise probability models, while others have thought the problem is with Bayesian updating. However, two points are often overlooked: (i) knowing that E is stochastically independent of F (for all F in a partition of the underlying state space) is sufficient to avoid dilation, but (ii) stochastic independence is not the only independence concept at play within imprecise probability models. In this paper we give a simple characterization of dilation formulated in terms of deviation from stochastic independence, propose a measure of dilation, and distinguish between proper and improper dilation. Through this we revisit the most sensational examples of dilation, which play up independence between dilator and dilatee, and find the sensationalism undermined by either fallacious reasoning with imprecise probabilities or improperly constructed imprecise probability models.
Abstract
Abstract
The paper presents an efficient solution to decision problems where direct partial information on the distribution of the states of nature is available, either by observations of previous repetitions of the decision problem or by direct expert judgements. To process this information we use a recent generalization of Walley’s imprecise Dirichlet model, which also allows us to handle incomplete observations or imprecise judgements, including missing data. We derive efficient algorithms and discuss properties of the optimal solutions with respect to several criteria, including Gamma-maximinity and E-admissibility. In the case of precise data and pure actions the former surprisingly leads us to a frequency-based variant of the Hodges-Lehmann criterion, which was developed in classical decision theory as a compromise between Bayesian and minimax procedures. We also briefly glance at the representation invariance principle (RIP) in the context of decision making. Key words: belief functions, coarse data, decision making, E-admissibility, imprecise Dirichlet model (IDM), imprecise probabilities, incomplete data, interval probability, interval statistical models, missing data, predictive probabilities, set-valued statistical data
No Confidence In Confidence Factors. Keith Wright, University of Houston – Downtown
Abstract
The solution of most commercial rule-based expert systems consists of two components: the conclusion(s) reached and a calculated measure of belief in the conclusion(s), expressed as a single number. In non-deterministic domains, this number is often the most critical factor in analyzing the solution. However, as this paper reviews, a robust calculus for this number has yet to be documented, indicating the need for further research into belief expression, calculation, and result analysis.
Ignoring Data in Court: An Idealized Decision-Theoretic Analysis
Abstract
We give a decision-theoretic analysis of a central issue regarding statistical evidence in court: are there circumstances under which it is reasonable to ignore the part of the data that gave rise to suspicion in the first place? We heuristically show that under a minimax/robust Bayesian analysis, this part of the data should in fact be treated differently from any additional data one might have. In some situations, even completely ignoring this part of the data can be a minimax-optimal strategy.