Results 11-20 of 276
On properties of update sequences based on causal rejection
 Theory and Practice of Logic Programming
, 2002
Abstract

Cited by 76 (13 self)
In this paper, we consider an approach to update nonmonotonic knowledge bases represented as extended logic programs under the answer set semantics. In this approach, new information is incorporated into the current knowledge base subject to a causal rejection principle, which enforces that, in case of conflicts between rules, more recent rules are preferred and older rules are overridden. Such a rejection principle is also exploited in other approaches to update logic programs, notably in the method of dynamic logic programming, due to Alferes et al. One of the central issues of this paper is a thorough analysis of various properties of the current approach, in order to get a better understanding of the inherent causal rejection principle. For this purpose, we review postulates and principles for update and revision operators which have been proposed in the area of theory change and nonmonotonic reasoning. Moreover, some new properties for approaches to updating logic programs are considered as well. Like related update approaches, the current semantics does not incorporate a notion of minimality of change, so we consider refinements of the semantics in this direction. As well, we investigate the relationship of our approach to others in more detail. In particular, we show that the current approach is semantically equivalent to inheritance programs, which have been independently defined by Buccafurri et al., and that it coincides with certain classes of dynamic logic programs. In view of this analysis, most of our results about properties of the causal rejection principle apply to each of these approaches as well. Finally, we also deal with computational issues. Besides a discussion on the computational complexity of our approach, we outline how the update semantics and its refinements can be directly implemented on top of existing logic programming systems. In the present case, we implemented the update approach using the logic programming system DLV.
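The causal rejection principle described above can be sketched in a few lines. The encoding below (the rule representation, the conflict test, and all names) is a deliberate simplification for illustration, not the paper's formal answer set semantics:

```python
# Toy sketch of the causal rejection principle: a rule in an older program
# is rejected when a later rule with a conflicting head is applicable.
# Rules are (head, body) pairs over string literals, "-p" standing for the
# classical negation of p; this simplified test is illustrative only.

def complement(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def applicable(rule, interp):
    head, body = rule
    return all(b in interp for b in body)

def rejected(old_rule, newer_rules, interp):
    # an older rule is rejected if some applicable newer rule
    # derives the complement of its head
    head, _ = old_rule
    return any(applicable(r, interp) and r[0] == complement(head)
               for r in newer_rules)

# Update sequence: P1 (older) is updated by P2 (newer).
P1 = [("sleep", ["night"])]
P2 = [("-sleep", ["alarm"])]
interp = {"night", "alarm", "-sleep"}

surviving = [r for r in P1 if not rejected(r, P2, interp)]
print(surviving)  # the older "sleep" rule is overridden: []
```

Under this interpretation the newer alarm rule fires, so the older sleep rule is rejected rather than producing a contradiction.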
Modeling Agents as Qualitative Decision Makers
 Artificial Intelligence
, 1997
Abstract

Cited by 51 (0 self)
We investigate the semantic foundations of a method for modeling agents as entities with a mental state which was suggested by McCarthy and by Newell. Our goals are to formalize this modeling approach and its semantics, to understand the theoretical and practical issues that it raises, and to address some of them. In particular, this requires specifying the model's parameters and how these parameters are to be assigned (i.e., their grounding). We propose a basic model in which the agent is viewed as a qualitative decision maker with beliefs, preferences, and decision strategy; and we show how these components would determine the agent's behavior. We ground this model in the agent's interaction with the world, namely, in its actions. This is done by viewing model construction as a constraint satisfaction problem in which we search for a model consistent with the agent's behavior and with our general background knowledge. In addition, we investigate the conditions under which a mental st...
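The constraint-satisfaction view of model construction mentioned above can be illustrated with a toy search: enumerate candidate mental states and keep those under which a simple decision strategy reproduces the observed behavior. The candidate space, the strategy, and all names are invented for illustration:

```python
from itertools import product

# Toy constraint-satisfaction reading of model construction: keep exactly
# the candidate mental states (belief, preference ordering) under which a
# simple decision strategy reproduces the observed action.

beliefs = ["raining", "dry"]
preferences = [("umbrella", "hat"), ("hat", "umbrella")]  # most preferred first

def chosen_action(belief, pref):
    # toy strategy: take the umbrella when rain is believed,
    # otherwise pick the most preferred item
    return "umbrella" if belief == "raining" else pref[0]

observed = "hat"
consistent = [(b, p) for b, p in product(beliefs, preferences)
              if chosen_action(b, p) == observed]
print(consistent)  # only belief "dry" with "hat" preferred explains it
```

The surviving pairs are exactly the mental-state models grounded in the observed action, in the spirit of the abstract's constraint-satisfaction formulation.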
Distance Semantics for Belief Revision
, 1999
Abstract

Cited by 49 (2 self)
A vast and interesting family of natural semantics for belief revision is defined. Suppose one is given a distance d between any two models. One may then define the revision of a theory K by a formula α as the theory defined by the set of all those models of α that are closest, by d, to the set of models of K. This family is characterized by a set of rationality postulates that extends the AGM postulates. The new postulates describe properties of iterated revisions.
1 Introduction
1.1 Overview and related work
The aim of this paper is to investigate semantics and logical properties of theory revisions based on an underlying notion of distance between individual models. In many situations it is indeed reasonable to assume that the agent has some natural way to evaluate the distance between any two models of the logical language of interest. The distance between model m and model m′ is a measure of how far m′ appears to be from the point of view of m. This distance may me...
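The distance-based revision described above is easy to instantiate. A minimal sketch, with propositional models as bit tuples and the Hamming distance standing in for an arbitrary d:

```python
from itertools import product

# Revision of K by alpha as the models of alpha at minimum distance from
# the models of K, with Hamming distance as one concrete choice of d.
# The two-atom language and the example theories are illustrative.

def hamming(m1, m2):
    return sum(a != b for a, b in zip(m1, m2))

def revise(models_K, models_alpha):
    dist = lambda m: min(hamming(m, k) for k in models_K)
    best = min(map(dist, models_alpha))
    return [m for m in models_alpha if dist(m) == best]

# Atoms (p, q); models are truth-value tuples.
K = [(1, 1)]              # K: p and q
alpha = [(0, 0), (0, 1)]  # alpha: not p
print(revise(K, alpha))   # the closest model keeps q: [(0, 1)]
```

Retracting p while keeping q is exactly the minimal-change behavior the distance semantics is meant to capture.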
Statistical Foundations for Default Reasoning
, 1993
Abstract

Cited by 49 (7 self)
We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement φ. The degree of belief can be used to decide whether to defeasibly conclude φ. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of [Goldszmidt et al., 1990], which applies maximum entropy ideas t...
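The equal-probability step can be sketched directly: enumerate worlds, keep those consistent with KB, and read a degree of belief off the surviving fraction. The atoms and the crude material reading of the default "birds fly" deliberately oversimplify the paper's statistical interpretation:

```python
from itertools import product

# Principle-of-indifference sketch: give every world consistent with KB
# the same probability and take the degree of belief in a query to be the
# fraction of admitted worlds in which it holds.

atoms = ("bird", "flies")
worlds = [dict(zip(atoms, v)) for v in product([False, True], repeat=2)]

def consistent(w):
    # KB: the fact bird, plus the default "birds fly" read (crudely)
    # as the material constraint bird -> flies
    return w["bird"] and ((not w["bird"]) or w["flies"])

admitted = [w for w in worlds if consistent(w)]
belief_flies = sum(w["flies"] for w in admitted) / len(admitted)
print(belief_flies)  # 1.0 in this degenerate two-atom example
```

In the paper the defaults are extreme statistical statements rather than material implications, so real degrees of belief approach rather than equal such limiting values.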
How to Infer from Inconsistent Beliefs without Revising?
 Proc. IJCAI'95
, 1995
Abstract

Cited by 46 (5 self)
This paper investigates several methods for coping with inconsistency caused by multiple-source information, by introducing suitable consequence relations capable of inferring nontrivial conclusions from an inconsistent stratified knowledge base. Some of these methods presuppose a revision step, namely a selection of one or several consistent subsets of formulas, and then classical inference is used for inferring from these subsets. Two alternative methods that do not require any revision step are studied: inference based on arguments, and a new approach called safely supported inference, where inconsistency is kept local. These last two methods look suitable when the inconsistency is due to the presence of several sources of information. The paper offers a comparative study of the various inference modes under inconsistency.
1 Introduction
Inconsistency can be encountered in different reasoning tasks, in particular: when reasoning with exception-tolerant generic knowledge, where ...
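The "selection of consistent subsets" step can be made concrete by brute force on a small propositional base; skeptical inference would then keep whatever follows classically from every maximal subset. The base and the enumeration strategy here are illustrative only:

```python
from itertools import combinations, product

# Enumerate the maximal consistent subsets of a small inconsistent base.
# Consistency is checked by brute force over all truth assignments.

atoms = ("p", "q")
worlds = [dict(zip(atoms, v)) for v in product([False, True], repeat=2)]

# The base {p, p -> q, -q} is inconsistent as a whole.
base = {
    "p":    lambda w: w["p"],
    "p->q": lambda w: (not w["p"]) or w["q"],
    "-q":   lambda w: not w["q"],
}

def consistent(names):
    # a set of formulas is consistent iff some world satisfies them all
    return any(all(base[n](w) for n in names) for w in worlds)

subsets = [frozenset(c) for r in range(1, len(base) + 1)
           for c in combinations(base, r) if consistent(c)]
maximal = [s for s in subsets if not any(s < t for t in subsets)]
print(sorted(sorted(s) for s in maximal))  # three two-element subsets
```

Each two-element subset survives while the full base does not, which is why the choice among (or quantification over) these subsets matters so much to the resulting consequence relation.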
Conditional Objects as Nonmonotonic Consequence Relationships.
 IEEE Trans. Syst. Man Cybern.
, 1994
Probabilistic Default Reasoning with Conditional Constraints
 ANN. MATH. ARTIF. INTELL
, 2000
Abstract

Cited by 39 (18 self)
We present an approach to reasoning from statistical and subjective knowledge, which is based on a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. More precisely, we introduce the notions of z-, lexicographic, and conditional entailment for conditional constraints, which are probabilistic generalizations of Pearl's entailment in system Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment, respectively. We show that the new formalisms have nice properties. In particular, they show a behavior similar to reference-class reasoning in a number of uncontroversial examples. The new formalisms, however, also avoid many drawbacks of reference-class reasoning. More precisely, they can handle complex scenarios and even purely probabilistic subjective knowledge as input. Moreover, conclusions are drawn in a global way from all the available knowledge as a whole. We then show that the new formalisms also have nice general nonmonotonic properties. In detail, the new notions of z-, lexicographic, and conditional entailment have properties similar to their classical counterparts. In particular, they all satisfy the rationality postulates proposed by Kraus, Lehmann, and Magidor, and they have some general irrelevance and direct inference properties. Moreover, the new notions of z- and lexicographic entailment satisfy the property of rational monotonicity. Furthermore, the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints. Finally, we provide algorithms for reasoning under the new formalisms, and we analyze their computational com...
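A conditional constraint of the form (ψ|φ)[l, u] can at least be checked against one concrete distribution in a few lines; the worlds and probabilities below are made up, and entailment itself (which quantifies over all satisfying distributions) is far harder than this single check:

```python
# A distribution over worlds satisfies the conditional constraint
# (flies|bird)[0.9, 1] when 0.9 <= P(flies and bird) / P(bird) <= 1.
# Worlds and probabilities here are invented for illustration.

worlds = {
    ("bird", "flies"): 0.55,
    ("bird", "-flies"): 0.05,
    ("-bird", "flies"): 0.10,
    ("-bird", "-flies"): 0.30,
}

def prob(pred):
    return sum(p for w, p in worlds.items() if pred(w))

p_bird = prob(lambda w: w[0] == "bird")            # 0.60
p_both = prob(lambda w: w == ("bird", "flies"))    # 0.55
satisfies = 0.9 <= p_both / p_bird <= 1.0
print(satisfies)  # 0.55 / 0.60 is about 0.917, inside [0.9, 1]
```

The entailment notions in the abstract then rank or select among all such satisfying distributions, which is where the z-, lexicographic, and conditional variants differ.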
Approaches to measuring inconsistent information
 Inconsistency Tolerance. Volume 3300 of Lecture Notes in Computer Science
, 2005
Abstract

Cited by 36 (9 self)
Abstract. Measures of quantity of information have been studied extensively for more than fifty years. The seminal work on information theory is by Shannon [67]. This work, based on probability theory, can be used in a logical setting when the worlds are the possible events. It is also the basis of Lozinskii’s work [48] for defining the quantity of information of a formula (or knowledge base) in propositional logic. But this definition is not suitable when the knowledge base is inconsistent. In this case, it has no classical model, so we have no “event” to count. This is a shortcoming, since in practical applications (e.g. databases) it often happens that the knowledge base is not consistent. And it is definitely not true that all inconsistent knowledge bases contain the same (null) amount of information, as given by “classical information theory”. As explored for several years in the paraconsistent logic community, two inconsistent knowledge bases can lead to very different conclusions, showing that they do not convey the same information. There has been some ...
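The shortcoming described above is easy to see in code: a Shannon-style measure over classical models has nothing to count once the base is inconsistent. The measure used here (log2 of worlds over models) is a simplification in that spirit, not Lozinskii's exact definition:

```python
from itertools import product
from math import log2

# Model-counting information measure over a two-atom language: maximal
# for a base with a single model, zero for a tautology, and undefined
# for an inconsistent base (no classical model to count).

atoms = ("p", "q")
worlds = [dict(zip(atoms, v)) for v in product([False, True], repeat=2)]

def info(formulas):
    models = [w for w in worlds if all(f(w) for f in formulas)]
    if not models:
        return None  # no classical model: no "event" to count
    return log2(len(worlds) / len(models))

print(info([lambda w: w["p"]]))                        # 1.0 bit
print(info([lambda w: w["p"], lambda w: not w["p"]]))  # None (inconsistent)
```

Every inconsistent base collapses to the same undefined value here, which is precisely the behavior the inconsistency-tolerant measures surveyed in the paper are designed to avoid.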
Representing partial ignorance
 IEEE Trans. on Systems, Man and Cybernetics
, 1996
Abstract

Cited by 36 (11 self)
Ignorance is precious, for once lost it can never be regained. This paper advocates the use of non-purely probabilistic approaches to higher-order uncertainty. One of the major arguments of Bayesian probability proponents is that representing uncertainty is always decision-driven and, as a consequence, uncertainty should be represented by probability. Here we argue that representing partial ignorance is not always decision-driven. Other reasoning tasks, such as belief revision for instance, are more naturally carried out at the purely cognitive level. Conceiving knowledge representation and decision-making as separate concerns opens the way to non-purely probabilistic representations of incomplete knowledge. It is pointed out that within a numerical framework, two numbers are needed to account for partial ignorance about events, because on top of truth and falsity, the state of total ignorance must be encoded independently of the number of underlying alternatives. The paper also points out that it is consistent to accept a Bayesian view of decision-making and a non-Bayesian view of knowledge representation, because it is possible to map non-probabilistic degrees of belief to betting probabilities when needed. Conditioning rules in non-Bayesian settings are reviewed, ...
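One concrete two-number account of the kind argued for above is the belief/plausibility pair of a Dempster-Shafer-style mass function, used here purely as an example of a non-purely probabilistic representation (the frame and mass assignment are illustrative). Under total ignorance all mass sits on the whole frame, so the two numbers pull maximally apart regardless of how many alternatives the frame contains:

```python
# Belief/plausibility as a two-number representation of partial ignorance.
# Total ignorance is encoded by putting all mass on the whole frame,
# independently of the number of underlying alternatives.

frame = frozenset({"rain", "dry"})
mass = {frame: 1.0}  # total ignorance

def belief(event, mass):
    # mass committed to subsets that guarantee the event
    return sum((m for s, m in mass.items() if s <= event), 0.0)

def plausibility(event, mass):
    # mass on subsets merely compatible with the event
    return sum((m for s, m in mass.items() if s & event), 0.0)

rain = frozenset({"rain"})
print(belief(rain, mass), plausibility(rain, mass))  # 0.0 1.0
```

A single probability would be forced to split the unit mass among the alternatives (e.g. 0.5 each), which is exactly the conflation of ignorance with uniform chance that the paper objects to.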