Results 1 - 10 of 185
A Knowledge Compilation Map
Journal of Artificial Intelligence Research, 2002
"... We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. ..."
Abstract
-
Cited by 219 (33 self)
- Add to MetaCart
(Show Context)
We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime.
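As an illustration of the two dimensions, here is a toy Python encoding of a fragment of the paper's map (the query abbreviations follow the paper: CO = consistency, VA = validity, CE = clausal entailment, CT = model counting; the entries are a simplified reconstruction, so consult the paper's tables for the authoritative picture):

```python
# A fragment of the knowledge compilation map as a lookup table: for each
# target language, the queries it supports in polytime. Entries are a
# simplified reconstruction of the paper's tables, not the full map.
POLYTIME_QUERIES = {
    "DNNF":   {"CO", "CE"},
    "d-DNNF": {"CO", "VA", "CE", "CT"},
    "OBDD":   {"CO", "VA", "CE", "CT"},
    "DNF":    {"CO", "CE"},
    "CNF":    {"VA"},
}

def supports(language: str, queries: set) -> bool:
    """True if `language` answers every query in `queries` in polytime."""
    return queries <= POLYTIME_QUERIES[language]

# Choosing a target language for an application that needs consistency
# checking and model counting:
needed = {"CO", "CT"}
print([lang for lang in POLYTIME_QUERIES if supports(lang, needed)])
# ['d-DNNF', 'OBDD']
```

Succinctness, the other dimension, then breaks ties among the candidates: of two languages supporting the same operations, the more succinct one is preferable.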
Decomposable negation normal form
Journal of the ACM, 2001
"... Knowledge compilation has been emerging recently as a new direction of research for dealing with the computational intractability of general propositional reasoning. According to this approach, the reasoning process is split into two phases: an off-line compilation phase and an online query-answer ..."
Abstract
-
Cited by 128 (17 self)
- Add to MetaCart
(Show Context)
Knowledge compilation has been emerging recently as a new direction of research for dealing with the computational intractability of general propositional reasoning. According to this approach, the reasoning process is split into two phases: an off-line compilation phase and an on-line query-answering phase. In the off-line phase, the propositional theory is compiled into some target language, which is typically a tractable one. In the on-line phase, the compiled target is used to efficiently answer a (potentially) exponential number of queries. The main motivation behind knowledge compilation is to push as much of the computational overhead as possible into the off-line phase, in order to amortize that overhead over all on-line queries. Another motivation behind compilation is to produce very simple on-line reasoning systems, which can be embedded cost-effectively into primitive computational platforms, such as those found in consumer electronics. One of the key aspects of any compilation approach is the target language into which the propositional theory is compiled. Previous target languages included Horn theories, prime implicates/implicants and ordered binary decision diagrams (OBDDs). We propose in this paper a new target compilation language, known as decomposable negation normal form (DNNF), and present a number of its properties that make it of interest to the broad community. Specifically, we ...
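The off-line/on-line split described here can be made concrete with a deliberately naive compiler: off-line, enumerate the theory's models once (worst-case exponential, paid a single time); on-line, answer each clausal-entailment query in time linear in the stored model set. Real target languages (DNNF, OBDDs) are far more succinct, but the amortization pattern is the same; the representation below (clauses as lists of variable/polarity pairs) is my own.

```python
from itertools import product

def compile_models(clauses, variables):
    """Off-line phase: compile a CNF (list of literal lists) into its model set."""
    models = []
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        # keep the assignment if every clause has a satisfied literal
        if all(any(assign[v] == pol for v, pol in clause) for clause in clauses):
            models.append(assign)
    return models

def entails(models, clause):
    """On-line phase: theory |= clause iff every stored model satisfies it."""
    return all(any(m[v] == pol for v, pol in clause) for m in models)

# Theory: (a or b) and (not a or c). Literals are (variable, polarity) pairs.
clauses = [[("a", True), ("b", True)], [("a", False), ("c", True)]]
models = compile_models(clauses, ["a", "b", "c"])   # expensive, done once
print(entails(models, [("b", True), ("c", True)]))  # True: theory |= b or c
print(entails(models, [("c", True)]))               # False: counter-model exists
```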
A Survey on Knowledge Compilation
1998
"... this paper we survey recent results in knowledge compilation of propositional knowledge bases. We first define and limit the scope of such a technique, then we survey exact and approximate knowledge compilation methods. We include a discussion of compilation for non-monotonic knowledge bases. Keywor ..."
Abstract
-
Cited by 119 (4 self)
- Add to MetaCart
In this paper we survey recent results in knowledge compilation of propositional knowledge bases. We first define and limit the scope of such a technique, then we survey exact and approximate knowledge compilation methods. We include a discussion of compilation for non-monotonic knowledge bases.
Keywords: Knowledge Representation, Efficiency of Reasoning
The comparative linguistics of knowledge representation
In Proc. of IJCAI'95, 1995
"... We develop a methodology for comparing knowledge representation formalisms in terms of their "representational succinctness, " that is, their ability to express knowledge situations relatively efficiently. We use this framework for comparing many important formalisms for knowledge base rep ..."
Abstract
-
Cited by 70 (2 self)
- Add to MetaCart
We develop a methodology for comparing knowledge representation formalisms in terms of their "representational succinctness," that is, their ability to express knowledge situations relatively efficiently. We use this framework for comparing many important formalisms for knowledge base representation: propositional logic, default logic, circumscription, and model preference defaults; and, at a lower level, Horn formulas, characteristic models, decision trees, disjunctive normal form, and conjunctive normal form. We also show that adding new variables improves the effective expressibility of certain knowledge representation formalisms.
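The last claim, that new variables buy succinctness, is worth a concrete instance. The standard example (my choice of illustration; the paper's own constructions differ in detail) is parity: any CNF for x1 xor ... xor xn needs 2^(n-1) clauses, each mentioning all n variables, while introducing auxiliary variables y_i for the running parity yields a formula, equivalent over the original variables, of only 4n - 1 clauses:

```python
from itertools import product

def direct_parity_cnf(n):
    """All 2^(n-1) clauses ruling out the even-parity assignments."""
    clauses = []
    for bits in product([True, False], repeat=n):
        if sum(bits) % 2 == 0:  # assignment to forbid
            clauses.append([(f"x{i+1}", not b) for i, b in enumerate(bits)])
    return clauses

def tseitin_parity_cnf(n):
    """4n - 1 clauses using auxiliary variables y_i = x1 xor ... xor xi."""
    clauses = [[("y1", True), ("x1", False)], [("y1", False), ("x1", True)]]
    for i in range(2, n + 1):
        a, b, c = f"y{i}", f"y{i-1}", f"x{i}"
        # y_i <-> (y_{i-1} xor x_i), as four clauses
        clauses += [[(a, False), (b, True), (c, True)],
                    [(a, False), (b, False), (c, False)],
                    [(a, True), (b, True), (c, False)],
                    [(a, True), (b, False), (c, True)]]
    clauses.append([(f"y{n}", True)])  # assert odd parity overall
    return clauses

for n in (4, 8, 12):
    print(n, len(direct_parity_cnf(n)), len(tseitin_parity_cnf(n)))
# 4 8 15 / 8 128 31 / 12 2048 47: exponential vs. linear growth
```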
Learning to reason
Journal of the ACM, 1994
"... Abstract. We introduce a new framework for the study of reasoning. The Learning (in order) to Reason approach developed here views learning as an integral part of the inference process, and suggests that learning and reasoning should be studied together. The Learning to Reason framework combines the ..."
Abstract
-
Cited by 70 (25 self)
- Add to MetaCart
(Show Context)
We introduce a new framework for the study of reasoning. The Learning (in order) to Reason approach developed here views learning as an integral part of the inference process, and suggests that learning and reasoning should be studied together. The Learning to Reason framework combines the interfaces to the world used by known learning models with the reasoning task and a performance criterion suitable for it. In this framework, the intelligent agent is given access to its favorite learning interface, and is also given a grace period in which it can interact with this interface and construct a representation KB of the world W. The reasoning performance is measured only after this period, when the agent is presented with queries α from some query language, relevant to the world, and has to answer whether W implies α. The approach is meant to overcome the main computational difficulties in the traditional treatment of reasoning which stem from its separation from the "world". Since the agent interacts with the world when constructing its knowledge representation it can choose a representation that is useful for the task at hand. Moreover, we can now make explicit the dependence of the reasoning performance on the environment the agent interacts with. We show how previous results from learning theory and reasoning fit into this framework and ...
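The interaction protocol is easy to sketch. In the toy version below (the names and the specific interface are mine; the framework is parameterized by the learning interface), the world W is a hidden propositional function, the grace period consists of sampling labeled interpretations and keeping the models of W, and queries "does W imply this clause?" are then answered against the collected models:

```python
import random

# Toy Learning-to-Reason protocol: grace period first, reasoning measured
# after. Note the one-sided error: the agent answers "yes" on any clause
# for which it happens to hold no counter-model.
random.seed(0)
VARS = ["a", "b", "c", "d"]

def world(m):                      # hidden world W: (a or b) and (c -> d)
    return (m["a"] or m["b"]) and (not m["c"] or m["d"])

def grace_period(samples=200):
    """Learning interface: draw random interpretations, keep models of W."""
    kb = []
    for _ in range(samples):
        m = {v: random.random() < 0.5 for v in VARS}
        if world(m):               # labeled example from the interface
            kb.append(m)
    return kb

def answers(kb, clause):
    """Reasoning phase: W |= clause iff no stored model falsifies it."""
    return all(any(m[v] == pol for v, pol in clause) for m in kb)

kb = grace_period()
print(answers(kb, [("a", True), ("b", True)]))   # True:  W |= a or b
print(answers(kb, [("d", True)]))                # False: counter-model found
```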
Approximating OWL-DL Ontologies
In Proc. of AAAI 2007 (in press), 2006
"... Abstract. In this poster, we propose to recast the idea of knowledge compilation into approximating OWL DL ontologies with DL-Lite ontologies, against which query answering has only polynomial data complexity. We identify a useful category of queries for which our approach also guarantees completene ..."
Abstract
-
Cited by 47 (15 self)
- Add to MetaCart
(Show Context)
In this poster, we propose to recast the idea of knowledge compilation into approximating OWL DL ontologies with DL-Lite ontologies, against which query answering has only polynomial data complexity. We identify a useful category of queries for which our approach also guarantees completeness. Furthermore, we report on the implementation of our approach in the ONTOSEARCH2 system.
Preprocessing of Intractable Problems
Information and Computation, 1997
"... Some computationally hard problems --e.g., deduction in logical knowledge bases-- are such that part of an instance is known well before the rest of it, and remains the same for several subsequent instances of the problem. In these cases, it is meaningful to preprocess off-line this known part so as ..."
Abstract
-
Cited by 46 (15 self)
- Add to MetaCart
Some computationally hard problems (e.g., deduction in logical knowledge bases) are such that part of an instance is known well before the rest of it, and remains the same for several subsequent instances of the problem. In these cases, it is meaningful to preprocess off-line this known part so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them preprocessing yields polynomial-time on-line simplified problems (we call them compilable), while for others there is strong evidence that this should not happen. Our primary goal is to provide a sound methodology that can be used either to prove or disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for such classes, completeness meaning...
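A minimal analogy for the setup (my own illustration, not the paper's construction, which concerns knowledge bases and compilability classes): the fixed part of the instance is a directed graph, known well in advance; the varying part is a stream of reachability queries. Preprocessing the fixed part off-line makes each on-line query a constant-time lookup, amortizing the off-line cost over all queries:

```python
from collections import defaultdict

def preprocess(edges):
    """Off-line phase on the fixed part: reachable-set per vertex (DFS)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    closure = {}
    for s in set(u for e in edges for u in e):
        seen, stack = set(), [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        closure[s] = seen
    return closure

closure = preprocess([(1, 2), (2, 3), (3, 1), (3, 4)])  # paid once
print(4 in closure[1])  # True,  O(1) per on-line query
print(1 in closure[4])  # False
```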
Compiling Knowledge into Decomposable Negation Normal Form
1999
"... We propose a method for compiling propositional theories into a new tractable form that we refer to as decomposable negation normal form (DNNF). We show a number of results about our compilation approach. First, we show that every propositional theory can be compiled into DNNF and present an algorit ..."
Abstract
-
Cited by 37 (11 self)
- Add to MetaCart
We propose a method for compiling propositional theories into a new tractable form that we refer to as decomposable negation normal form (DNNF). We show a number of results about our compilation approach. First, we show that every propositional theory can be compiled into DNNF and present an algorithm to this effect. Second, we show that if a clausal form has a bounded treewidth, then its DNNF compilation has a linear size and can be computed in linear time; treewidth is a graph-theoretic parameter which measures the connectivity of the clausal form. Third, we show that once a propositional theory is compiled into DNNF, a number of reasoning tasks, such as satisfiability and forgetting, can be performed in linear time. Finally, we propose two techniques for approximating the DNNF compilation of a theory when the size of such compilation is too large to be practical. One of the techniques generates a sound but incomplete compilation, while the other generates a complete but unsound compilation. Together, these approximations bound the exact compilation from below and above in terms of their ability to answer queries.
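Linear-time satisfiability is the easiest of these results to see, and makes a good sketch (the node classes are my own minimal representation, not the paper's). Decomposability, meaning the conjuncts of every AND node share no variables, is exactly what licenses the AND case below: disjoint-variable subcircuits can be satisfied independently.

```python
from dataclasses import dataclass, field

@dataclass
class Lit:                 # literal: variable name plus polarity
    var: str
    positive: bool = True

@dataclass
class And:
    children: list = field(default_factory=list)  # must be decomposable

@dataclass
class Or:
    children: list = field(default_factory=list)

def satisfiable(node) -> bool:
    """Linear-time SAT check, valid only on decomposable circuits."""
    if isinstance(node, Lit):
        return True                                    # a lone literal is satisfiable
    if isinstance(node, And):
        return all(satisfiable(c) for c in node.children)
    return any(satisfiable(c) for c in node.children)  # Or node

# (a and b) or (not a and c): decomposable, each AND's conjuncts are disjoint
circuit = Or([And([Lit("a"), Lit("b")]),
              And([Lit("a", False), Lit("c")])])
print(satisfiable(circuit))   # True
print(satisfiable(Or([])))    # False: the empty disjunction is constant false
```

On a non-decomposable circuit such as And([Lit("a"), Lit("a", False)]) the same traversal would wrongly report satisfiable, which is why the compilation step must enforce decomposability.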
Horn Approximations of Empirical Data
Artificial Intelligence, 1995
"... Formal AI systems traditionally represent knowledge using logical formulas. Sometimes, however, a model-based representation is more compact and enables faster reasoning than the corresponding formula-based representation. The central idea behind our work is to represent a large set of models by a s ..."
Abstract
-
Cited by 37 (2 self)
- Add to MetaCart
(Show Context)
Formal AI systems traditionally represent knowledge using logical formulas. Sometimes, however, a model-based representation is more compact and enables faster reasoning than the corresponding formula-based representation. The central idea behind our work is to represent a large set of models by a subset of characteristic models. More specifically, we examine model-based representations of Horn theories, and show that there are large Horn theories that can be exactly represented by an exponentially smaller set of characteristic models. We show that deduction based on a set of characteristic models requires only polynomial time, as it does using Horn theories. More surprisingly, abduction can be performed in polynomial time using a set of characteristic models, whereas abduction using Horn theories is NP-complete. Finally, we discuss algorithms for generating efficient representations of the Horn theory that best approximates a general set of models.
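For the special case of a Horn-clause query, the polynomial-time deduction claim has a short proof that doubles as an algorithm: models of a Horn theory are closed under intersection, and if an intersection falsifies body -> head then the body lies in both intersected models while the head is missing from at least one, so some characteristic model already falsifies the clause. It therefore suffices to test the query on the characteristic models alone. A sketch (the set-of-true-atoms representation is mine, not the paper's):

```python
def entails_horn(char_models, body, head):
    """theory |= (AND body -> head), models given as frozensets of true atoms."""
    return all(head in m for m in char_models if body <= m)

# Characteristic models of some Horn theory over {a, b, c, d}:
char_models = [frozenset({"a", "b", "c"}),
               frozenset({"a", "c", "d"}),
               frozenset({"b"})]

print(entails_horn(char_models, {"a"}, "c"))       # True: a -> c holds
print(entails_horn(char_models, {"b"}, "c"))       # False: model {b} breaks it
print(entails_horn(char_models, {"a", "b"}, "b"))  # trivially True
```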
Learning to Reason with a Restricted View
1998
"... The Learning to Reason framework combines the study of Learning and Reasoning into a single task. Within it, learning is done specifically for the purpose of reasoning with the learned knowledge. Computational considerations show that this is a useful paradigm; in some cases learning and reasoning p ..."
Abstract
-
Cited by 35 (15 self)
- Add to MetaCart
(Show Context)
The Learning to Reason framework combines the study of Learning and Reasoning into a single task. Within it, learning is done specifically for the purpose of reasoning with the learned knowledge. Computational considerations show that this is a useful paradigm; in some cases learning and reasoning problems that are intractable when studied separately become tractable when performed as a task of Learning to Reason. In this paper we study Learning to Reason problems where the interaction with the world supplies the learner with only partial information in the form of partial assignments. Several natural interpretations of partial assignments are considered, and learning and reasoning algorithms using them are developed. The results presented exhibit a tradeoff between learnability, the strength of the oracles used in the interface, and the range of reasoning queries the learner is guaranteed to answer correctly.