Results 1–10 of 87
Probabilistic Default Reasoning with Conditional Constraints
 ANN. MATH. ARTIF. INTELL.
, 2000
Cited by 39 (18 self)
We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in system Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they show a similar behavior as reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have similar properties as their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational com...
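For reference, the rationality postulates of Kraus, Lehmann, and Magidor (System P) cited in this abstract, together with Rational Monotonicity, can be stated in standard notation as follows (a standard transcription, not the paper's probabilistic versions; |~ denotes defeasible consequence):

```latex
\begin{align*}
\text{(Reflexivity)} &\quad \alpha \mathrel{|\!\sim} \alpha\\
\text{(Left Logical Equivalence)} &\quad \frac{\models \alpha \leftrightarrow \beta \qquad \alpha \mathrel{|\!\sim} \gamma}{\beta \mathrel{|\!\sim} \gamma}\\
\text{(Right Weakening)} &\quad \frac{\models \alpha \rightarrow \beta \qquad \gamma \mathrel{|\!\sim} \alpha}{\gamma \mathrel{|\!\sim} \beta}\\
\text{(Cut)} &\quad \frac{\alpha \wedge \beta \mathrel{|\!\sim} \gamma \qquad \alpha \mathrel{|\!\sim} \beta}{\alpha \mathrel{|\!\sim} \gamma}\\
\text{(Cautious Monotonicity)} &\quad \frac{\alpha \mathrel{|\!\sim} \beta \qquad \alpha \mathrel{|\!\sim} \gamma}{\alpha \wedge \beta \mathrel{|\!\sim} \gamma}\\
\text{(Or)} &\quad \frac{\alpha \mathrel{|\!\sim} \gamma \qquad \beta \mathrel{|\!\sim} \gamma}{\alpha \vee \beta \mathrel{|\!\sim} \gamma}\\
\text{(Rational Monotonicity)} &\quad \frac{\alpha \mathrel{|\!\sim} \gamma \qquad \alpha \not\mathrel{|\!\sim} \neg\beta}{\alpha \wedge \beta \mathrel{|\!\sim} \gamma}
\end{align*}
```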
A Counterexample to Theorems of Cox and Fine
 JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
, 1999
Cited by 37 (2 self)
Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability.
Qualitative decision theory: from Savage’s axioms to nonmonotonic reasoning
 JOURNAL OF THE ACM
, 2002
Cited by 32 (0 self)
This paper investigates to what extent a purely symbolic approach to decision making under uncertainty is possible, in the scope of Artificial Intelligence. Contrary to classical approaches to decision theory, we try to rank acts without resorting to any numerical representation of utility nor uncertainty, and without using any scale on which both uncertainty and preference could be mapped. Our approach is a variant of Savage's where the setting is finite, and the strict preference on acts is a partial order. It is shown that although many axioms of Savage theory are preserved and despite the intuitive appeal of the ordinal method for constructing a preference over acts, the approach is inconsistent with a probabilistic representation of uncertainty. The latter leads to the kind of paradoxes encountered in the theory of voting. It is shown that the assumption of ordinal invariance enforces a qualitative decision procedure that presupposes a comparative possibility representation of uncertainty, originally due to Lewis, and usual in nonmonotonic reasoning. Our axiomatic investigation thus provides decision-theoretic foundations to the preferential inference of Lehmann and colleagues. However, the obtained decision rules are sometimes either not very decisive or may lead to overconfident decisions, although their basic principles look sound. This paper points out some limitations of purely ordinal approaches to Savage-like decision making under uncertainty, in perfect analogy with similar difficulties in voting theory.
Nonmonotonic Logics and Semantics
 Journal of Logic and Computation
, 2001
Cited by 30 (4 self)
Tarski gave a general semantics for deductive reasoning: a formula a may be deduced from a set A of formulas iff a holds in all models in which each of the elements of A holds. A more liberal semantics has been considered: a formula a may be deduced from a set A of formulas iff a holds in all of the preferred models in which all the elements of A hold. Shoham proposed that the notion of preferred models be defined by a partial ordering on the models of the underlying language. A more general semantics is described in this paper, based on a set of natural properties of choice functions. This semantics is here shown to be equivalent to a semantics based on comparing the relative importance of sets of models, by what amounts to a qualitative probability measure. The consequence operations defined by the equivalent semantics are then characterized by a weakening of Tarski's properties in which the monotonicity requirement is replaced by three weaker conditions. Classical propositional connectives are characterized by natural introduction-elimination rules in a nonmonotonic setting. Even in the nonmonotonic setting, one obtains classical propositional logic, thus showing that monotonicity is not required to justify classical propositional connectives.
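The Shoham-style preferred-model semantics described in this abstract can be illustrated with a minimal sketch (not the paper's choice-function formalism): models are truth assignments over a few atoms, a hypothetical preference order ranks assignments with fewer true atoms as "more normal," and a conclusion follows defeasibly when it holds in all preferred models of the premise.

```python
from itertools import product

# Minimal sketch of preferential entailment over three propositional atoms.
# A "model" is a dict mapping each atom to a bool; formulas are Python
# predicates on models. The atoms and preference order are illustrative.

ATOMS = ("p", "q", "r")

def models(formula):
    """All truth assignments satisfying `formula`."""
    return [dict(zip(ATOMS, vals))
            for vals in product([True, False], repeat=len(ATOMS))
            if formula(dict(zip(ATOMS, vals)))]

def preferred(ms, better):
    """Models in `ms` with no strictly better model in `ms`."""
    return [m for m in ms if not any(better(n, m) for n in ms)]

def pref_entails(premise, conclusion, better):
    """A |~ a  iff  a holds in all preferred models of A."""
    return all(conclusion(m) for m in preferred(models(premise), better))

# Hypothetical preference: models with fewer true atoms are "more normal".
better = lambda m, n: sum(m.values()) < sum(n.values())

# p defeasibly entails not-q under minimization, though not classically:
print(pref_entails(lambda m: m["p"], lambda m: not m["q"], better))   # True
print(all(not m["q"] for m in models(lambda m: m["p"])))              # False
# Nonmonotonicity: adding q to the premise defeats the conclusion.
print(pref_entails(lambda m: m["p"] and m["q"], lambda m: not m["q"], better))  # False
```

The last line is exactly the failure of monotonicity that the weakened Tarskian conditions are designed to accommodate.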
Modeling Belief in Dynamic Systems. Part II: Revision and Update
 Journal of A.I. Research
, 1999
Cited by 27 (7 self)
The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper [Friedman and Halpern 1997a], we introduce a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the change of beliefs over time. In this paper, we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method, and to better understand the principles underlying them. In particular, it shows that Katsuno and Mendelzon's notion of belief update [Katsuno and Mendelzon 1991a] depends on several strong assumptions that may limit its applicability in artificial intelligence. Finally, our analysis allows us to identify a notion of minimal change that underlies a broad range of belief change operations including revi...
Modeling Belief in Dynamic Systems. Part I: Foundations
 Artificial Intelligence
, 1997
Cited by 25 (11 self)
Belief change is a fundamental problem in AI: Agents constantly have to update their beliefs to accommodate new observations. In recent years, there has been much work on axiomatic characterizations of belief change. We claim that a better understanding of belief change can be gained from examining appropriate semantic models. In this paper we propose a general framework in which to model belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes φ if he knows that φ is more plausible than ¬φ. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator. Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagi...
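The KD45 operator mentioned in this abstract is characterized by the following standard modal axioms for a belief operator B (a standard transcription, not taken from the paper), closed under modus ponens and necessitation (from φ infer Bφ):

```latex
\begin{align*}
\textbf{K:}\;& B(\varphi \rightarrow \psi) \rightarrow (B\varphi \rightarrow B\psi) && \text{(distribution)}\\
\textbf{D:}\;& B\varphi \rightarrow \neg B\neg\varphi && \text{(consistency of belief)}\\
\textbf{4:}\;& B\varphi \rightarrow BB\varphi && \text{(positive introspection)}\\
\textbf{5:}\;& \neg B\varphi \rightarrow B\neg B\varphi && \text{(negative introspection)}
\end{align*}
```

KD45 drops the truth axiom Bφ → φ of S5 knowledge, which is what distinguishes belief from knowledge in this setting.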
Weak nonmonotonic probabilistic logics
, 2004
Cited by 24 (6 self)
Towards probabilistic formalisms for resolving local inconsistencies under model-theoretic probabilistic entailment, we present probabilistic generalizations of Pearl's entailment in System Z and Lehmann's lexicographic entailment. We then analyze the nonmonotonic and semantic properties of the new notions of entailment. In particular, we show that they satisfy the rationality postulates of System P and the property of Rational Monotonicity. Moreover, we show that model-theoretic probabilistic entailment is stronger than the new notion of lexicographic entailment, which in turn is stronger than the new notion of entailment in System Z. As an important feature of the new notions of entailment in System Z and lexicographic entailment, we show that they coincide with model-theoretic probabilistic entailment whenever there are no local inconsistencies. We also show that the new notions of entailment in System Z and lexicographic entailment are proper generalizations of their classical counterparts. Finally, we present algorithms for reasoning under the new formalisms, and we give a precise picture of their computational complexity.
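The classical System Z that this abstract generalizes is built on Pearl's Z-partition: defaults are split into tolerance layers, with more specific (more exception-prone) defaults landing in higher layers. A hedged sketch of that classical construction, using the standard penguin example (defaults are pairs of Python predicates; all names are illustrative, and this is not the paper's probabilistic version):

```python
from itertools import product

# Sketch of Pearl's Z-partition for a classical conditional knowledge base.
# A default "a -> b" is a pair of predicates over truth assignments.

ATOMS = ("bird", "penguin", "flies")

def assignments():
    return [dict(zip(ATOMS, v)) for v in product([True, False], repeat=len(ATOMS))]

def tolerated(default, defaults):
    """a -> b is tolerated by `defaults` iff some model verifies a and b
    while satisfying every default materially (a_i false or b_i true)."""
    a, b = default
    return any(a(m) and b(m)
               and all((not ai(m)) or bi(m) for ai, bi in defaults)
               for m in assignments())

def z_partition(defaults):
    """Split defaults into tolerance layers Z_0, Z_1, ..."""
    layers, remaining = [], list(defaults)
    while remaining:
        layer = [d for d in remaining if tolerated(d, remaining)]
        if not layer:
            raise ValueError("knowledge base is inconsistent")
        layers.append(layer)
        remaining = [d for d in remaining if d not in layer]
    return layers

# Birds fly; penguins are birds; penguins don't fly.
birds_fly  = (lambda m: m["bird"],    lambda m: m["flies"])
peng_bird  = (lambda m: m["penguin"], lambda m: m["bird"])
peng_nofly = (lambda m: m["penguin"], lambda m: not m["flies"])

layers = z_partition([birds_fly, peng_bird, peng_nofly])
print([len(L) for L in layers])   # → [1, 2]: the penguin defaults rank higher
```

The higher rank of the penguin defaults is what lets them override "birds fly" for penguins while leaving conclusions about ordinary birds intact.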
A Sequent Calculus and a Theorem Prover for Standard Conditional Logics
, 2007
Default Reasoning from Conditional Knowledge Bases: Complexity and Tractable Cases
 Artif. Intell
, 2000
Cited by 19 (11 self)
Conditional knowledge bases have been proposed as belief bases that include defeasible rules (also called defaults) of the form "φ → ψ", which informally read as "generally, if φ then ψ." Such rules may have exceptions, which can be handled in different ways. A number of entailment semantics for conditional knowledge bases have been proposed in the literature. However, while the semantic properties and interrelationships of these formalisms are quite well understood, about their computational properties only partial results are known so far. In this paper, we fill these gaps and first draw a precise picture of the complexity of default reasoning from conditional knowledge bases: Given a conditional knowledge base KB and a default φ → ψ, does KB entail φ → ψ? We classify the complexity of this problem for a number of well-known approaches (including Goldszmidt et al.'s maximum entropy approach and Geffner's conditional entailment), where we consider the general propositional case as well as natural syntactic restrictions (in particular, to Horn and literal-Horn conditional knowledge bases). As we show, the more sophisticated semantics for conditional knowledge bases are plagued with intractability in all these fragments. We thus explore cases in which these semantics are tractable, and find that most of them enjoy this property on feedback-free Horn conditional knowledge bases, which constitute a new, meaningful class of conditional knowledge bases. Furthermore, we generalize previous tractability results from Horn to q-Horn conditional knowledge bases, which allow for a limited use of disjunction. Our results complement and extend previous results, and contribute in refining the tractability/intractability frontier of default reasoning from conditional know...
Settheoretic completeness for epistemic and conditional logic
 Annals of Mathematics and Artificial Intelligence
, 1999
Cited by 17 (4 self)
The standard approach to logic in the literature in philosophy and mathematics, which has also been adopted in computer science, is to define a language (the syntax), an appropriate class of models together with an interpretation of formulas in the language (the semantics), a collection of axioms and rules of inference characterizing reasoning (the proof theory), and then relate the proof theory to the semantics via soundness and completeness results. Here we consider an approach that is more common in the economics literature, which works purely at the semantic, set-theoretic level. We provide set-theoretic completeness results for a number of epistemic and conditional logics, and contrast the expressive power of the syntactic and set-theoretic approaches.