Results 1–10 of 23
Maximum entropy and the glasses you are looking through
 In: Proceedings of the Sixteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI 2000)
, 2000
Abstract

Cited by 14 (6 self)
We give an interpretation of the Maximum Entropy (MaxEnt) Principle in game-theoretic terms. Based on this interpretation, we make a formal distinction between different ways of applying Maximum Entropy distributions. MaxEnt has frequently been criticized on the grounds that it leads to highly representation-dependent results. Our distinction allows us to avoid this problem in many cases.
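The MaxEnt principle the abstract refers to picks, among all distributions satisfying given constraints, the one of greatest Shannon entropy. A minimal sketch of that selection for a single expected-value constraint; the bisection solver and the die example are illustrative assumptions, not material from the paper:

```python
import math

def maxent_with_mean(xs, target_mean, tol=1e-10):
    """Max-entropy distribution over finite support `xs` subject to
    E[X] = target_mean. The solution has exponential-family form
    p(x) proportional to exp(lam * x); find `lam` by bisection,
    since the induced mean is increasing in lam."""
    def mean_for(lam):
        ws = [math.exp(lam * x) for x in xs]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, xs)) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]

# With the constraint set at the unconstrained mean, MaxEnt recovers
# the uniform distribution; a die averaging 4.5 tilts toward high faces.
p = maxent_with_mean([1, 2, 3, 4, 5, 6], 3.5)
q = maxent_with_mean([1, 2, 3, 4, 5, 6], 4.5)
```

The representation-dependence criticism the abstract mentions arises because the result depends on the choice of support `xs`, not only on the constraint.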
Maximum entropy probabilistic logic
, 2002
Abstract

Cited by 10 (0 self)
Recent research has shown there are two types of uncertainty that can be expressed in first-order logic—propositional and statistical uncertainty—and that both types can be represented in terms of probability spaces. However, these efforts have fallen short of providing a general account of how to design probability measures for these spaces; as a result, we lack a crucial component of any system that reasons under these types of uncertainty. In this paper, we describe an automatic procedure for defining such measures in terms of a probabilistic knowledge base. In particular, we employ the principle of maximum entropy to select measures that are consistent with our knowledge and that make the fewest assumptions in doing so. This approach yields models of first-order uncertainty that are principled, intuitive, and economical in their representation.
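The measure-selection step the abstract describes can be illustrated on a toy propositional knowledge base. The two-atom example and the constraint P(A) = 0.7 are hypothetical, chosen only to show the "fewest assumptions" behavior: maximum entropy leaves A and B independent and B uniform, while any feasible measure that adds a correlation has lower entropy.

```python
import itertools
import math

def entropy(p):
    """Shannon entropy in bits; 0 * log(0) is treated as 0."""
    return -sum(x * math.log2(x) for x in p.values() if x > 0)

# Worlds are truth assignments to two atoms (A, B); the knowledge base
# imposes the single constraint P(A) = 0.7.
worlds = list(itertools.product([True, False], repeat=2))

# Maximum entropy spreads mass as evenly as the constraint allows:
# A and B independent, B uniform.
maxent = {(a, b): (0.7 if a else 0.3) * 0.5 for (a, b) in worlds}

# Another measure satisfying P(A) = 0.7, but needlessly correlating
# A and B, for comparison:
skewed = {(True, True): 0.6, (True, False): 0.1,
          (False, True): 0.05, (False, False): 0.25}

assert abs(sum(pr for (a, _), pr in maxent.items() if a) - 0.7) < 1e-12
assert entropy(maxent) > entropy(skewed)
```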
Philosophies of probability: objective Bayesianism and its challenges
 Handbook of the Philosophy of Mathematics (Handbook of the Philosophy of Science). Elsevier, Amsterdam
, 2004
Abstract

Cited by 10 (5 self)
This chapter presents an overview of the major interpretations of probability followed by an outline of the objective Bayesian interpretation and a discussion of the key challenges it faces.
Representation Independence of Nonmonotonic Inference Relations
, 1996
Abstract

Cited by 10 (1 self)
A logical concept of representation independence is developed for nonmonotonic logics, including probabilistic inference systems. The general framework then is applied to several nonmonotonic logics, particularly propositional probabilistic logics. For these logics our investigation leads us to modified inference rules with greater representation independence.
Updating sets of probabilities
 In Proceedings UAI ’98
, 1998
Abstract

Cited by 8 (2 self)
There are several well-known justifications for conditioning as the appropriate method for updating a single probability measure, given an observation. However, there is a significant body of work arguing for sets of probability measures, rather than single measures, as a more realistic model of uncertainty. Conditioning still makes sense in this context—we can simply condition each measure in the set individually, then combine the results—and, indeed, it seems to be the preferred updating procedure in the literature. But how justified is conditioning in this richer setting? Here we show, by considering an axiomatic account of conditioning given by van Fraassen, that the single-measure and sets-of-measures cases are very different. We show that van Fraassen’s axiomatization for the former case is nowhere near sufficient for updating sets of measures. We give a considerably longer (and not as compelling) list of axioms that together force conditioning in this setting, and describe other update methods that are allowed once any of these axioms is dropped.
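The updating procedure the abstract examines—conditioning each measure in the set individually—can be sketched as follows. The world names and measures are illustrative, and discarding measures that assign the evidence probability zero is one common convention, not the paper's definition:

```python
def condition_set(measures, event):
    """Condition each probability measure (a dict world -> prob) in
    `measures` on `event` (a set of worlds), dropping measures that
    give the evidence probability zero, and return the updated set."""
    updated = []
    for p in measures:
        pe = sum(pr for w, pr in p.items() if w in event)
        if pe == 0:
            continue  # this measure is refuted by the observation
        updated.append({w: pr / pe for w, pr in p.items() if w in event})
    return updated

# Two candidate measures over three worlds; we observe the event {w1, w2}.
measures = [{"w1": 0.2, "w2": 0.3, "w3": 0.5},
            {"w1": 0.5, "w2": 0.0, "w3": 0.5}]
posterior = condition_set(measures, {"w1", "w2"})
```

Note that the two posteriors disagree sharply about w2 (0.6 versus 0), which hints at why axioms sufficient for the single-measure case say little about the set-valued case.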
System JLZ: rational default reasoning by minimal ranking constructions
 Journal of Applied Logic
, 2003
Philosophies of probability
 Handbook of the Philosophy of Mathematics, Volume 4 of the Handbook of the Philosophy of Science
Abstract

Cited by 6 (2 self)
This chapter presents an overview of the major interpretations of probability followed by an outline of the objective Bayesian interpretation and a discussion of the key challenges it faces. I discuss the ramifications of interpretations of probability and objective Bayesianism for the philosophy of mathematics in general.
Default Reasoning Using Maximum Entropy and Variable Strength Defaults
, 1999
Abstract

Cited by 5 (2 self)
The thesis presents a computational model for reasoning with partial information which uses default rules, or information about what normally happens. The idea is to provide a means of filling the gaps in an incomplete world view with the most plausible assumptions, while allowing for the retraction of conclusions should they subsequently turn out to be incorrect. The model can be used both to reason from a given knowledge base of default rules and to aid in the construction of such knowledge bases by allowing their designer to compare the consequences of his design with his own default assumptions. The conclusions supported by the proposed model are justified by the use of a probabilistic semantics for default rules in conjunction with the application of a rational means of inference from incomplete knowledge: the principle of maximum entropy (ME). The thesis develops both the theory and algorithms for the ME approach and argues that it should be considered as a general theory of default reasoning. The argument supporting the thesis has two main threads. Firstly, the ME approach is tested on the benchmark examples required of nonmonotonic behaviour, and it is found to handle them appropriately. Moreover, these patterns of commonsense reasoning emerge as consequences of the chosen semantics rather than being design features. It is argued that this makes the ME approach more objective, and its conclusions more justifiable, than other default systems. Secondly, the ME approach is compared with two existing systems: the lexicographic approach (LEX) and System Z+. It is shown that the former can be equated with ME under suitable conditions, making it strictly less expressive, while the latter is too crude to perform the subtle resolution of default conflict which the ME...
Measure selection: Notions of rationality and representation independence
 Proceedings of the 14th conference on Uncertainty in Artificial Intelligence
, 1998
Abstract

Cited by 4 (1 self)
We take another look at the general problem of selecting a preferred probability measure among those that comply with some given constraints. The dominant role that entropy maximization has obtained in this context is questioned by arguing that the minimum information principle on which it is based could be supplanted by an at least as plausible “likelihood of evidence” principle. We then review a method for turning given selection functions into representation-independent variants, and discuss the tradeoffs involved in this transformation.