Results 1 - 8 of 8
Horn Clause Contraction Functions
Abstract
In classical, AGM-style belief change, it is assumed that the underlying logic contains classical propositional logic. This is clearly a limiting assumption, particularly in Artificial Intelligence. Consequently there has been recent interest in studying belief change in approaches where the full expressivity of classical propositional logic is not obtained. In this paper we investigate belief contraction in Horn knowledge bases. We point out that the obvious extension to the Horn case, involving Horn remainder sets as a starting point, is problematic. Not only do Horn remainder sets have undesirable properties, but also some desirable Horn contraction functions are not captured by this approach. For Horn belief set contraction, we develop an account in terms of a model-theoretic characterisation involving weak remainder sets. Maxichoice and partial meet Horn contraction is specified, and we show that the problems arising with earlier work are resolved by these approaches. As well, constructions of the specific operators and sets of postulates are provided, and representation results are obtained. We also examine Horn package contraction, or contraction by a set of formulas. Again, we give a construction and postulate set, linking them via a representation result. Last, we investigate the closely-related notion of forgetting in Horn clauses. This work is arguably interesting since Horn clauses have found widespread use in AI; as well, the results given here may potentially be extended to other areas which make use of Horn-like reasoning, such as logic programming, rule-based systems, and description logics. Finally, since Horn reasoning is weaker than classical reasoning, this work sheds light on the foundations of belief change.
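The remainder-set construction this abstract starts from can be illustrated concretely. The following is a minimal brute-force sketch (not the paper's construction; the clause encoding and function names are ours), restricted to definite Horn clauses: a remainder set of a base B with respect to an atom p is a maximal subset of B that does not entail p.

```python
from itertools import combinations

# A definite Horn clause is encoded as (body, head):
# (frozenset({'a', 'b'}), 'c') stands for a ∧ b → c,
# and (frozenset(), 'a') for the fact a.

def entails_atom(clauses, goal):
    """Forward chaining: do these definite Horn clauses entail `goal`?"""
    derived = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return goal in derived

def remainder_sets(base, goal):
    """All maximal subsets of `base` not entailing `goal` (brute force)."""
    subsets = [set(c) for r in range(len(base) + 1)
               for c in combinations(base, r)
               if not entails_atom(c, goal)]
    # keep only subsets that are maximal among the non-implying ones
    return [s for s in subsets if not any(s < t for t in subsets)]

fact_a = (frozenset(), 'a')
rule_ab = (frozenset({'a'}), 'b')
print(remainder_sets([fact_a, rule_ab], 'b'))
```

On the base {a, a → b}, contracting by b yields two remainders, {a} and {a → b}: neither can be extended without re-deriving b.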
Ontology Repair Through Partial Meet Contraction
Abstract
The process of building an ontology, be it from scratch or through reuse and combination of other ontologies, is known to be susceptible to modeling errors. Ontology debugging and repair techniques have attracted attention in the last decade due to the popularization of ontologies written in OWL. Belief Change deals with the problem of removing or adding new information to a knowledge base in a consistent way. In this paper, we look at the belief change operation known as partial meet contraction as a construction for ontology repair. We propose heuristics to improve the performance of this operation and compare them to an existing implementation and to approaches based on finding minimal justifications or explanations, by means of experiments with automatically generated ontologies and real-world ontologies from BioPortal.
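The "minimal justifications" the experiments compare against can be sketched in the same spirit: a justification for an unwanted consequence is a minimal subset of the axioms that entails it, and a repair removes at least one axiom from each justification. A brute-force illustration over definite Horn rules (the encoding and names are ours, not the paper's implementation):

```python
from itertools import combinations

# Axioms encoded as definite rules (body, head); (frozenset(), 'a') is the fact a.
def entails(axioms, goal):
    """Forward chaining over definite Horn rules."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for body, head in axioms:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return goal in derived

def minimal_justifications(axioms, goal):
    """Minimal subsets of `axioms` entailing `goal` (brute force, tiny bases only)."""
    hits = [set(c) for r in range(1, len(axioms) + 1)
            for c in combinations(axioms, r)
            if entails(c, goal)]
    return [j for j in hits if not any(t < j for t in hits)]

base = [(frozenset(), 'a'),          # a
        (frozenset({'a'}), 'b'),     # a → b
        (frozenset({'a'}), 'c'),     # a → c
        (frozenset({'c'}), 'b')]     # c → b
just = minimal_justifications(base, 'b')
```

Here b has two justifications, {a, a → b} and {a, a → c, c → b}; removing a repairs both at once, which is what a hitting-set-based repair would select.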
Concept Learning for Cross-Domain Text Classification: A General Probabilistic Framework (Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence)
Abstract
Cross-domain learning aims to leverage knowledge from source domains to train accurate models for test data from target domains with different but related data distributions. To tackle the challenge of differing data distributions in terms of raw features, previous works proposed to mine high-level concepts (e.g., word clusters) across data domains, which have been shown to be more appropriate for classification. However, all these works assume that the same set of concepts is shared in the source and target domains, despite the fact that some distinct concepts may exist in only one of the data domains. Thus, we need a general framework that can incorporate both shared and distinct concepts for cross-domain classification. To this end, we develop a probabilistic model by which both the shared and distinct concepts can be learned by an EM process which optimizes the data likelihood. To validate the effectiveness of this model we intentionally construct classification tasks in which distinct concepts exist in the data domains. Systematic experiments demonstrate the superiority of our model over all compared baselines, especially on the more challenging tasks.
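The EM process mentioned here can be illustrated with a generic toy example. The sketch below fits a two-component, unit-variance 1-D Gaussian mixture; it is not the paper's concept model, only an illustration of the E-step/M-step alternation that increases the data likelihood:

```python
import math

def em_mixture(data, iters=50):
    """Generic EM for a 2-component 1-D Gaussian mixture with unit variance."""
    mu = [min(data), max(data)]  # crude initialisation at the extremes
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            w = [pi[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
            z = sum(w)
            resp.append([wk / z for wk in w])
        # M-step: re-estimate means and mixing weights from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            pi[k] = nk / len(data)
        # the data likelihood is non-decreasing across these iterations
    return mu, pi

data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
mu, pi = em_mixture(data)
```

On this toy data the means converge near the two cluster centres (about 0 and 5) with roughly equal mixing weights.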
Extending AGM Contraction to Arbitrary Logics
Abstract
Classic entrenchment-based contraction is not applicable to many useful logics, such as description logics. This is because the semantic construction refers to arbitrary disjunctions of formulas, while many logics do not fully support disjunction. In this paper, we present a new entrenchment-based contraction which does not rely on any logical connectives except conjunction. This contraction is applicable to most fragments of propositional and first-order logic that support conjunction. We provide a representation theorem for the contraction which shows that it satisfies all the AGM postulates except for the controversial Recovery Postulate, and is a natural generalisation of entrenchment-based contraction.
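For contrast, the classical entrenchment-based condition (in the usual Gärdenfors–Makinson formulation; notation ours) does mention an explicit disjunction, which is exactly what blocks its use in disjunction-free logics:

```latex
% Classical condition (C-): contract K by \varphi using the entrenchment ordering <
\psi \in K \div \varphi
  \iff
\psi \in K \ \text{ and }\ \bigl(\varphi < \varphi \lor \psi \ \text{ or }\ \vdash \varphi\bigr)
```

The contraction proposed in the abstract replaces the comparison involving the disjunction φ ∨ ψ with a condition stated using conjunction alone.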
Belief Revision in Horn Theories
Abstract
This paper investigates belief revision where the underlying logic is that governing Horn clauses. We show that classical (AGM) belief revision doesn't immediately generalise to the Horn case. In particular, a standard construction based on a total preorder over possible worlds may violate the accepted (AGM) postulates. Conversely, in the obvious extension to the AGM approach, Horn revision functions are not captured by total preorders over possible worlds. We address these difficulties by introducing two modifications to the AGM approach. First, the semantic construction is restricted to “well behaved” orderings, which we call Horn compliant orderings. Second, the revision postulates are augmented by an additional postulate. Both restrictions are redundant in the AGM approach, but not in the Horn case. In a representation result we show that the class of revision functions captured by Horn compliant total preorders over possible worlds is precisely that given by the (extended) set of Horn revision postulates. Further, we show that Horn revision is compatible with work in iterated revision and work concerning relevance in revision. We also consider specific revision operators. Arguably this work is interesting for several reasons. It extends AGM revision to inferentially-weaker Horn theories; hence it sheds light on the theoretical underpinnings of belief change, as well as generalising the AGM paradigm. Thus, this work is relevant to revision in areas that employ Horn clauses, such as deductive databases and logic programming, as well as areas in which inference is weaker than classical logic, such as description logic.
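The "standard construction based on a total preorder over possible worlds" is usually stated as follows (Katsuno–Mendelzon style; notation ours): the models of the revised belief set are the most plausible models of the new formula.

```latex
% \preceq_K : total preorder on possible worlds, faithful to K
\mathrm{Mod}(K * \varphi) \;=\; \min\bigl(\mathrm{Mod}(\varphi),\, \preceq_K\bigr)
```

The abstract's point is that in the Horn case not every such preorder yields a Horn-expressible result, hence the restriction to Horn compliant orderings.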
Entrenchment-Based Horn Contraction
Abstract
The AGM framework is the benchmark approach in belief change. Since the framework assumes an underlying logic containing classical Propositional Logic, it cannot be applied to systems with a logic weaker than Propositional Logic. To remedy this limitation, several researchers have studied AGM-style contraction and revision under the Horn fragment of Propositional Logic (i.e., Horn logic). In this paper, we contribute to this line of research by investigating the Horn version of AGM entrenchment-based contraction. The study is challenging as the construction of entrenchment-based contraction refers to arbitrary disjunctions which are not expressible under Horn logic. In order to adapt the construction to Horn logic, we make use of a Horn approximation technique called Horn strengthening. We provide a representation theorem for the newly constructed contraction, which we refer to as entrenchment-based Horn contraction. Ideally, contractions defined under Horn logic (i.e., Horn contractions) should be as rational as AGM contraction. We propose the notion of Horn equivalence, which intuitively captures the equivalence between Horn contraction and AGM contraction. We show that, under this notion, entrenchment-based Horn contraction is equivalent to a restricted form of entrenchment-based contraction.
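Horn strengthening, the approximation technique named above, admits a compact illustration: a Horn strengthening of a propositional clause keeps all of its negative literals and at most one of its positive literals, following Selman and Kautz's definition (the literal encoding below is ours).

```python
def horn_strengthenings(clause):
    """All Horn strengthenings of a propositional clause.

    A clause is a set of literal strings: 'p' is the positive literal p,
    '-p' the negative literal ¬p. A Horn strengthening keeps every
    negative literal and at most one positive literal.
    """
    neg = {lit for lit in clause if lit.startswith('-')}
    pos = clause - neg
    if len(pos) <= 1:           # the clause is already Horn
        return [set(clause)]
    return [neg | {p} for p in sorted(pos)]

# (a ∨ b ∨ ¬c) has two Horn strengthenings: (a ∨ ¬c) and (b ∨ ¬c)
print(horn_strengthenings({'a', 'b', '-c'}))
```

Each strengthening entails the original clause, which is what makes the family usable as a Horn lower approximation.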