Results 11–20 of 139
Using Compiled Knowledge to Guide and Focus Abductive Diagnosis
 IEEE Transactions on Knowledge and Data Engineering
, 1996
Abstract

Cited by 26 (6 self)
Several artificial intelligence architectures and systems based on "deep" models of a domain have been proposed, in particular for the diagnostic task. These systems have several advantages over traditional knowledge-based systems, but their main limitation is computational complexity. One way to address this problem is to rely on a knowledge compilation phase, which produces knowledge that can be used more effectively than the original. In this paper we show how a specific knowledge compilation approach can focus reasoning in abductive diagnosis and, in particular, can improve the performance of AID, an abductive diagnosis system. The approach aims at focusing the overall diagnostic cycle in two interdependent ways: avoiding the generation of candidate solutions that would be discarded a posteriori, and integrating the generation of candidate solutions with discrimination among different candidates. Knowledge compilation is used offline to produce operational...
Analysis of Notions of Diagnosis
, 1998
Abstract

Cited by 25 (3 self)
Various formal theories have been proposed in the literature to capture the notions of diagnosis underlying diagnostic programs. Examples of such notions are: heuristic classification, which is used in systems incorporating empirical knowledge, and model-based diagnosis, which is used in diagnostic systems based on detailed domain models. Typically, such domain models include knowledge of causal, structural, and functional interactions among modelled objects. In this paper, a new set-theoretical framework for the analysis of diagnosis is presented. Basically, the framework distinguishes between 'evidence functions', which characterize the net impact of knowledge bases for purposes of diagnosis, and 'notions of diagnosis', which define how evidence functions are to be used to map findings observed for a problem case to diagnostic solutions. This set-theoretical framework offers a simple, yet powerful tool for comparing existing notions of diagnosis, as well as for proposing new notions ...
Constructive Reinforcement Learning
 International Journal of Intelligent Systems, Wiley
Abstract

Cited by 19 (11 self)
This paper presents an operative measure of reinforcement for constructive learning methods, i.e., eager learning methods using highly expressive (or universal) representation languages. These evaluation tools allow further insight into the study of the growth of knowledge, theory revision and abduction. The final approach is based on an apportionment of credit with respect to the 'course' that the evidence takes through the learnt theory. Our measure of reinforcement is shown to be justified by cross-validation and by the connection with other successful evaluation criteria, like the MDL principle. Finally, the relation with the classical view of reinforcement is studied, where the actions of an intelligent system can be rewarded or penalised, and we discuss whether this should affect our distribution of reinforcement. The most important result of this paper is that the way we distribute reinforcement into knowledge results in a rated ontology, instead of a single prior distribution. Therefore, this detailed information can be exploited for guiding the search space exploration of inductive learning algorithms. Likewise, knowledge revision may be done to the part of the theory which is not justified by the ...
A Computational Model of Tractable Reasoning: Taking Inspiration from Cognition
 In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence
, 1993
Abstract

Cited by 18 (5 self)
Polynomial time complexity is the usual `threshold' for distinguishing the tractable from the intractable, and it may seem reasonable to adopt this notion of tractability in the context of knowledge representation and reasoning. It is argued that doing so may be inappropriate in the context of common sense reasoning underlying language understanding. A more stringent criterion of tractability is proposed. A result about reasoning that is tractable in this stronger sense is outlined. Some unusual properties of tractable reasoning emerge when the formal specification is grounded in a neurally plausible architecture.
1 Introduction
Understanding language is a complex task. It involves, among other things, carrying out inferences in order to establish referential and causal coherence, generate expectations, and make predictions. Nevertheless we can understand language at the rate of several hundred words per minute [Carpenter and Just, 1977]. This rapid rate of language unde...
Generating Tests using Abduction
 In Proc. of KR 94
, 1994
Abstract

Cited by 16 (3 self)
Suppose we are given a theory of system behavior and a set of candidate hypotheses. Our concern is with generating tests which will discriminate these hypotheses in some fashion. We logically characterize test generation as abductive reasoning. Aside from defining the theoretical principles underlying test generation, we are able to bring to bear the abundant research on abduction to show how test generation can be embodied in working systems. Furthermore, we address the issue of computational complexity. It has long been known that test generation is NP-complete. This is consistent with complexity results on the generation of abductive explanations. By syntactically restricting the description of our theory of system behavior or by limiting the completeness of our abductive reasoning, we are able to gain insight into tractable test generation problems.
1 INTRODUCTION
Diagnostic reasoning is often viewed as an iterative generate-and-test process. Given a description...
Distributed belief revision versus distributed truth maintenance
 In Proceedings of the Sixth IEEE International Conference on Tools with Artificial Intelligence (TAI '94)
, 1994
Complexity of abduction in the EL family of lightweight description logics
 in KR, G. Brewka and J. Lang, Eds. AAAI Press
Abstract

Cited by 13 (0 self)
The complexity of logic-based abduction has been extensively studied for the case in which the background knowledge is represented by a propositional theory, but very little is known about abduction with respect to description logic knowledge bases. The purpose of the current paper is to examine the complexity of logic-based abduction for the EL family of lightweight description logics. We consider several minimality criteria for explanations (set inclusion, cardinality, prioritization, and weight) and three decision problems: deciding whether an explanation exists, deciding whether a given hypothesis appears in some acceptable explanation, and deciding whether a given hypothesis belongs to every acceptable explanation. We determine the complexity of these tasks for general TBoxes and also for EL and EL+ terminologies. We also provide results concerning the complexity of computing abductive explanations.
Abduction, Experience, and Goals: A Model of Everyday Abductive Explanation
 Journal of Experimental and Theoretical Artificial Intelligence
, 1995
Abstract

Cited by 13 (1 self)
Many abductive understanding systems generate explanations by a backward chaining process that is neutral both to the explainer's previous experience in similar situations and to why the explainer is attempting to explain. This article examines the relationship of such models to an approach that uses case-based reasoning to generate explanations. In this ...
What makes propositional abduction tractable
 Artificial Intelligence
Abstract

Cited by 11 (0 self)
Abduction is a fundamental form of nonmonotonic reasoning that aims at finding explanations for observed manifestations. This process underlies many applications, from car configuration to medical diagnosis. We study here the computational complexity of deciding whether an explanation exists in the case when the application domain is described by a propositional knowledge base. Building on previous results, we classify the complexity for local restrictions on the knowledge base and under various restrictions on hypotheses and manifestations. In comparison to the many previous studies on the complexity of abduction, we are able to give a much more detailed picture for the complexity of the basic problem of deciding the existence of an explanation. It turns out that depending on the restrictions, the problem in this framework is always polynomial-time solvable, NP-complete, coNP-complete, or Σ2P-complete. Based on these results, we give an a posteriori justification of what makes propositional abduction hard even for some classes of knowledge bases which allow for efficient satisfiability testing and deduction. This justification is very simple and intuitive, but it reveals that no nontrivial class of abduction problems is tractable. Indeed, tractability essentially requires that the language for knowledge bases is unable to express both causal links and conflicts between hypotheses. This generalizes a similar observation by Bylander et al. for set-covering abduction.
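To make the decision problem in this abstract concrete, here is a small illustrative sketch (not code from the paper): a brute-force check of explanation existence over a propositional knowledge base in clausal form. The clause encoding and the names `kb`, `hypotheses`, and `manifestations` are assumptions chosen for illustration; a practical abduction engine would call a SAT solver rather than enumerate assignments.

```python
from itertools import combinations, product

def satisfiable(clauses, variables):
    """Brute-force SAT check: try every truth assignment.
    Each clause is a list of (variable, polarity) literals."""
    for values in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, values))
        if all(any(assign[v] if pos else not assign[v] for v, pos in clause)
               for clause in clauses):
            return True
    return False

def explanation_exists(kb, hypotheses, manifestations, variables):
    """Does some E subset of `hypotheses` make KB ∧ E consistent
    while KB ∧ E entails every manifestation?"""
    for r in range(len(hypotheses) + 1):
        for subset in combinations(hypotheses, r):
            theory = kb + [[(h, True)] for h in subset]  # assume each chosen hypothesis
            if not satisfiable(theory, variables):       # consistency check
                continue
            # entailment: KB ∧ E ∧ ¬m must be unsatisfiable for every manifestation m
            if all(not satisfiable(theory + [[(m, False)]], variables)
                   for m in manifestations):
                return True
    return False

# Toy knowledge base: flu -> fever, encoded as the clause (¬flu ∨ fever).
kb = [[("flu", False), ("fever", True)]]
print(explanation_exists(kb, ["flu"], ["fever"], ["flu", "fever"]))  # True: E = {flu}
```

Even this tiny sketch exhibits the two sources of hardness the abstract identifies: the consistency check (satisfiability) and the entailment check (unsatisfiability) pull in opposite directions, which is why the general problem climbs to the second level of the polynomial hierarchy.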
Computing minimal diagnoses by greedy stochastic search
 In Proc. AAAI'08
, 2008
Abstract

Cited by 11 (2 self)
Most algorithms for computing diagnoses within a model-based diagnosis framework are deterministic. Such algorithms guarantee soundness and completeness, but are Σ2P-hard. To overcome this complexity problem, which prohibits the computation of high-cardinality diagnoses for large systems, we propose a novel approximation approach for multiple-fault diagnosis, based on a greedy stochastic algorithm called SAFARI (StochAstic Fault diagnosis AlgoRIthm). We prove that SAFARI can be configured to compute diagnoses which are of guaranteed minimality under subsumption. We analytically model SAFARI search as a Markov chain, and show a probabilistic bound on the minimality of its minimal diagnosis approximations. We have applied this algorithm to the 74XXX and ISCAS85 suites of benchmark combinational circuits, demonstrating order-of-magnitude speedups over two state-of-the-art deterministic algorithms, CDA* and HA*, for multiple-fault diagnoses.
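The greedy stochastic idea this abstract describes can be sketched roughly as follows (a hypothetical illustration, not SAFARI's actual implementation): start from the trivial all-faulty candidate and repeatedly try to declare a randomly chosen component healthy, keeping the change only if the result is still a diagnosis, until a fixed number of consecutive attempts fail. The consistency oracle `is_diagnosis` and the parameter names are assumptions standing in for the model-based consistency check.

```python
import random

def greedy_stochastic_diagnosis(components, is_diagnosis, tries=20, seed=0):
    """Greedy stochastic descent toward a subsumption-minimal diagnosis.
    `is_diagnosis(candidate)` is an assumed oracle: True iff marking exactly
    the components in `candidate` as faulty is consistent with the observations."""
    rng = random.Random(seed)
    current = set(components)            # start from the trivial all-faulty diagnosis
    failures = 0
    while failures < tries and current:
        c = rng.choice(sorted(current))  # pick a currently-faulty component at random
        candidate = current - {c}        # try declaring it healthy
        if is_diagnosis(candidate):
            current = candidate          # greedy: keep the smaller diagnosis
            failures = 0
        else:
            failures += 1                # stop after `tries` consecutive misses
    return current

# Toy oracle: a diagnosis must blame "a", or both "b" and "c".
oracle = lambda faulty: "a" in faulty or {"b", "c"} <= faulty
print(greedy_stochastic_diagnosis(["a", "b", "c"], oracle))
```

Because the search only ever shrinks the candidate and never backtracks, each run is cheap; the trade-off, as the abstract notes, is that minimality holds only probabilistically, with the bound governed by how many consecutive failed flips the algorithm tolerates.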