Results 1 - 10 of 809
A Theory of Diagnosis from First Principles
- Artificial Intelligence
, 1987
"... Suppose one is given a description of a system, together with an observation of the system's behaviour which conflicts with the way the system is meant to behave. The diagnostic problem is to determine those components of the system which, when assumed to be functioning abnormally, will explain ..."
Abstract - Cited by 1120 (5 self)
Suppose one is given a description of a system, together with an observation of the system's behaviour which conflicts with the way the system is meant to behave. The diagnostic problem is to determine those components of the system which, when assumed to be functioning abnormally, will explain the discrepancy between the observed and correct system behaviour. We propose a general theory for this problem. The theory requires only that the system be described in a suitable logic. Moreover, there are many such suitable logics, e.g. first-order, temporal, dynamic, etc. As a result, the theory accommodates diagnostic reasoning in a wide variety of practical settings, including digital and analogue circuits, medicine, and database updates. The theory leads to an algorithm for computing all diagnoses, and to various results concerning principles of measurement for discriminating among competing diagnoses. Finally, the theory reveals close connections between diagnostic reasoning and nonmonotonic reasoning.
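To make the hitting-set flavour of this theory concrete, here is a small illustrative sketch (not taken from the paper): under Reiter's characterization, the diagnoses are the minimal hitting sets of the conflict sets, and the brute-force enumeration below computes them for a toy instance. The component names and conflicts are invented for the example.

```python
from itertools import combinations

def minimal_hitting_sets(conflicts):
    """Enumerate minimal hitting sets of a collection of conflict sets.

    A hitting set intersects every conflict; in Reiter-style diagnosis,
    each minimal hitting set of the minimal conflicts is a diagnosis.
    Brute force over subset size, adequate only for small systems.
    """
    components = sorted(set().union(*conflicts))
    hits = []
    for size in range(len(components) + 1):
        for cand in combinations(components, size):
            cand = set(cand)
            if any(h <= cand for h in hits):
                continue  # not minimal: contains a smaller hitting set
            if all(cand & c for c in conflicts):
                hits.append(cand)
    return hits

# Toy example: two conflict sets over components of a small circuit.
conflicts = [{"A1", "A2", "M1"}, {"A1", "M2"}]
print(minimal_hitting_sets(conflicts))  # [{'A1'}, {'A2', 'M2'}, {'M1', 'M2'}]
```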
Qualitative Simulation
- Artificial Intelligence
, 2001
"... Qualitative simulation predicts the set of possible behaviors... ..."
Abstract - Cited by 520 (32 self)
Qualitative simulation predicts the set of possible behaviors...
Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks
"... Particle filters (PFs) are powerful sampling-based inference/learning algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a principled way, any type of probability distribution, nonlinearity and non-stationarity. They have appeared in several fields under such names as “conde ..."
Abstract - Cited by 348 (11 self)
Particle filters (PFs) are powerful sampling-based inference/learning algorithms for dynamic Bayesian networks (DBNs). They allow us to treat, in a principled way, any type of probability distribution, nonlinearity and non-stationarity. They have appeared in several fields under such names as “condensation”, “sequential Monte Carlo” and “survival of the fittest”. In this paper, we show how we can exploit the structure of the DBN to increase the efficiency of particle filtering, using a technique known as Rao-Blackwellisation. Essentially, this samples some of the variables, and marginalizes out the rest exactly, using the Kalman filter, HMM filter, junction tree algorithm, or any other finite dimensional optimal filter. We show that Rao-Blackwellised particle filters (RBPFs) lead to more accurate estimates than standard PFs. We demonstrate RBPFs on two problems, namely non-stationary online regression with radial basis function networks and robot localization and map building. We also discuss other potential application areas and provide references to some finite dimensional optimal filters.
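The following sketch illustrates the idea described above on an invented toy model: each particle carries a sampled discrete regime, and the linear-Gaussian state is marginalized analytically with a per-particle Kalman filter. It is a minimal illustration, not the paper's implementation, and all model parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy switching linear-Gaussian model (all parameters illustrative):
#   regime r_t in {0, 1} follows a Markov chain,
#   x_t = x_{t-1} + w_t, w_t ~ N(0, Q[r_t]);  y_t = x_t + v_t, v_t ~ N(0, R).
TRANS = np.array([[0.95, 0.05], [0.10, 0.90]])
Q = np.array([0.01, 1.0])
R = 0.25
N = 500  # number of particles

def rbpf_step(regimes, means, variances, weights, y):
    """One Rao-Blackwellised step: sample regimes, Kalman-update x analytically."""
    # Sample each particle's next regime from the Markov chain.
    regimes = np.array([rng.choice(2, p=TRANS[r]) for r in regimes])
    # Kalman predict (identity dynamics) and update for the continuous state.
    pred_var = variances + Q[regimes]
    innov_var = pred_var + R
    gain = pred_var / innov_var
    innov = y - means
    means = means + gain * innov
    variances = (1.0 - gain) * pred_var
    # Weight by the predictive likelihood p(y | regime, past), then resample.
    weights = weights * np.exp(-0.5 * innov**2 / innov_var) / np.sqrt(innov_var)
    weights /= weights.sum()
    idx = rng.choice(N, size=N, p=weights)
    return regimes[idx], means[idx], variances[idx], np.full(N, 1.0 / N)

# Filter a short synthetic observation sequence.
regimes = np.zeros(N, dtype=int)
means, variances, weights = np.zeros(N), np.ones(N), np.full(N, 1.0 / N)
for y in [0.1, 0.0, 2.3, 2.5, 2.4]:
    regimes, means, variances, weights = rbpf_step(regimes, means, variances, weights, y)
print("posterior mean of x:", means.mean(), "P(regime=1):", regimes.mean())
```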
Probabilistic Horn abduction and Bayesian networks
- Artificial Intelligence
, 1993
"... This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesia ..."
Abstract - Cited by 328 (38 self)
This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language. This paper also shows how a language with only (unconditionally) independent hypotheses can represent any probabilistic knowledge, and argues that it is better to invent new hypotheses to explain dependence rather than having to worry about dependence in the language.
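As a rough illustration of the framework's flavour (not code from the paper), the sketch below encodes a small belief-network-style dependency as definite Horn rules whose bodies include independent probabilistic hypotheses, and computes the probability of a goal by summing over complete assignments to those hypotheses. The hypotheses, rules, and probabilities are invented.

```python
from itertools import product

# Toy probabilistic Horn abduction theory (illustrative): independent
# hypotheses with prior probabilities, plus definite Horn rules.
HYPOTHESES = {"burglary": 0.01, "quake": 0.02,
              "alarm_if_burglary": 0.9, "alarm_if_quake": 0.3}
RULES = [("alarm", {"burglary", "alarm_if_burglary"}),
         ("alarm", {"quake", "alarm_if_quake"})]

def derivable(goal, true_hypotheses):
    """Forward-chain the Horn rules from a set of true hypotheses."""
    facts = set(true_hypotheses)
    changed = True
    while changed:
        changed = False
        for head, body in RULES:
            if head not in facts and body <= facts:
                facts.add(head)
                changed = True
    return goal in facts

def probability(goal):
    """P(goal): sum over complete assignments to the independent hypotheses."""
    names = list(HYPOTHESES)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        p = 1.0
        for name, val in zip(names, values):
            p *= HYPOTHESES[name] if val else 1.0 - HYPOTHESES[name]
        if derivable(goal, {n for n, v in zip(names, values) if v}):
            total += p
    return total

print("P(alarm) =", probability("alarm"))  # 0.01*0.9 + 0.02*0.3 minus the overlap
```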
A Model-based Approach to Reactive Self-Configuring Systems
- In Proceedings of AAAI-96
, 1996
"... This paper describes Livingstone, an implemented kernel for a model-based reactive self-configuring autonomous system. It presents a formal characterization of Livingstone's representation formalism, and reports on our experience with the implementation in a variety of domains. Livingstone prov ..."
Abstract - Cited by 245 (43 self)
This paper describes Livingstone, an implemented kernel for a model-based reactive self-configuring autonomous system. It presents a formal characterization of Livingstone's representation formalism, and reports on our experience with the implementation in a variety of domains. Livingstone provides a reactive system that performs significant deduction in the sense/response loop by drawing on our past experience at building fast propositional conflict-based algorithms for model-based diagnosis, and by framing a model-based configuration manager as a propositional feedback controller that generates focused, optimal responses. Livingstone's representation formalism achieves broad coverage of hybrid hardware/software systems by coupling the transition system models underlying concurrent reactive languages with the qualitative representations developed in model-based reasoning. Livingstone automates a wide variety of tasks using a single model and a single core algorithm, thus making signif...
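As a hedged illustration of the mode-identification task described above (not Livingstone's conflict-directed algorithm), the sketch below enumerates component mode assignments, scores them by prior probability, and returns the most likely assignment consistent with an observation. The components, modes, and model are invented for the example.

```python
from itertools import product

# Toy mode-identification problem (all names and numbers illustrative):
# each component is in a nominal or fault mode with a prior probability.
COMPONENTS = {"valve": {"open": 0.99, "stuck_closed": 0.01},
              "sensor": {"ok": 0.995, "broken": 0.005}}

def consistent(modes, observed_flow):
    """Hand-written model: flow is observed iff the valve is open,
    unless the sensor is broken, in which case the reading is unconstrained."""
    if modes["sensor"] == "broken":
        return True
    return observed_flow == (modes["valve"] == "open")

def most_likely_modes(observed_flow):
    """Return the highest-prior mode assignment consistent with the observation."""
    names = list(COMPONENTS)
    best, best_p = None, -1.0
    for choice in product(*(COMPONENTS[n] for n in names)):
        modes = dict(zip(names, choice))
        p = 1.0
        for n in names:
            p *= COMPONENTS[n][modes[n]]
        if p > best_p and consistent(modes, observed_flow):
            best, best_p = modes, p
    return best, best_p

# Valve commanded open but no flow observed: the cheapest explanation
# is a stuck valve rather than a broken sensor.
print(most_likely_modes(observed_flow=False))
```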
Remote Agent: To Boldly Go Where No AI System Has Gone Before
, 1998
"... Renewed motives for space exploration have inspired NASA to work toward the goal of establishing a virtual presence in space, through heterogeneous effets of robotic explorers. Information technology, and Artificial Intelligence in particular, will play a central role in this endeavor by endowing th ..."
Abstract - Cited by 231 (16 self)
Renewed motives for space exploration have inspired NASA to work toward the goal of establishing a virtual presence in space, through heterogeneous fleets of robotic explorers. Information technology, and Artificial Intelligence in particular, will play a central role in this endeavor by endowing these explorers with a form of computational intelligence that we call remote agents. In this paper we describe the Remote Agent, a specific autonomous agent architecture based on the principles of model-based programming, on-board deduction and search, and goal-directed closed-loop commanding, that takes a significant step toward enabling this future. This architecture addresses the unique characteristics of the spacecraft domain that require highly reliable autonomous operations over long periods of time with tight deadlines, resource constraints, and concurrent activity among tightly coupled subsystems. The Remote Agent integrates constraint-based temporal planning and scheduling, robust multi-threaded execution, and model-based mode identification and reconfiguration. The demonstration of the integrated system as an on-board controller for Deep Space One, NASA's first New Millennium mission, is scheduled for a period of a week in late 1998. The development of the Remote Agent also provided the opportunity to reassess some of AI's conventional wisdom about the challenges of implementing embedded systems, tractable reasoning, and knowledge representation. We discuss these issues, and our often contrary experiences, throughout the paper.
Identifying the minimal transversals of a hypergraph and related problems
- SIAM Journal on Computing
, 1995
"... The paper considers two decision problems on hypergraphs, hypergraph saturation and recognition of the transversal hypergraph, and discusses their significance for several search problems in applied computer science. Hypergraph saturation, i.e., given a hypergraph H, decide if every subset of vertic ..."
Abstract - Cited by 155 (8 self)
The paper considers two decision problems on hypergraphs, hypergraph saturation and recognition of the transversal hypergraph, and discusses their significance for several search problems in applied computer science. Hypergraph saturation, i.e., given a hypergraph H, decide if every subset of vertices is contained in or contains some edge of H, is shown to be co-NP-complete. A certain subproblem of hypergraph saturation, the saturation of simple hypergraphs, is shown to be computationally equivalent to transversal hypergraph recognition, i.e., given two hypergraphs H1 and H2, decide if the sets in H2 are all the minimal transversals of H1. The complexity of the search problem related to the recognition of the transversal hypergraph, the computation of the transversal hypergraph, is an open problem. This task needs time exponential in the input size, but it is unknown whether an output-polynomial algorithm exists for this problem. For several important subcases, for instance if an upper or lower bound is imposed on the edge size or for acyclic hypergraphs, we present output-polynomial algorithms. Computing or recognizing the minimal transversals of a hypergraph is a frequent problem in practice, which is pointed out by identifying important applications in database theory, Boolean switching theory, logic, and AI, particularly in model-based diagnosis.
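For concreteness, here is a small sketch of the computation problem discussed above: a Berge-style incremental product that computes all minimal transversals of a hypergraph, with the worst-case exponential behaviour the abstract mentions. The example hypergraph is illustrative.

```python
def minimal_transversals(edges):
    """Compute all minimal transversals (minimal hitting sets) of a hypergraph,
    given as a list of edges (sets of vertices). Berge-style incremental product:
    extend the transversals of the first k edges with each vertex of edge k+1,
    then discard non-minimal sets. Worst-case exponential, as discussed above."""
    transversals = [frozenset()]
    for edge in edges:
        extended = set()
        for t in transversals:
            if t & edge:
                extended.add(t)          # already hits the new edge
            else:
                for v in edge:
                    extended.add(t | {v})
        # Keep only the minimal sets.
        transversals = [t for t in extended
                        if not any(s < t for s in extended)]
    return [set(t) for t in transversals]

# The three minimal transversals of a triangle are its three edges.
print(minimal_transversals([{1, 2}, {2, 3}, {1, 3}]))
```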
Truth Maintenance
, 1990
"... General purpose truth maintenance systems have received considerable attention in the past few years. This paper discusses the functionality of truth maintenance systems and compares various existing algorithms. Applications and directions for future research are also discussed. Introduction In 197 ..."
Abstract - Cited by 140 (3 self)
General purpose truth maintenance systems have received considerable attention in the past few years. This paper discusses the functionality of truth maintenance systems and compares various existing algorithms. Applications and directions for future research are also discussed. Introduction: In 1978 Jon Doyle wrote a master's thesis at the MIT AI Laboratory entitled "Truth Maintenance Systems for Problem Solving" [Doyle, 1979]. In this thesis Doyle described an independent module called a truth maintenance system, or TMS, which maintained beliefs for general problem solving systems. In the twelve years since the appearance of Doyle's TMS a large body of literature has accumulated on truth maintenance. The seminal idea appears not to have been any particular technical mechanism but rather the general concept of an independent module for truth (or belief) maintenance. All truth maintenance systems manipulate proposition symbols and relationships between proposition symbols. I will use...
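A minimal illustrative sketch of the label-propagation core of a justification-based TMS follows; it handles only monotonic justifications and relabels from scratch on retraction, so it omits the non-monotonic justifications and incremental machinery of Doyle's system. The node names are invented.

```python
# A label-propagation sketch in the spirit of a justification-based TMS
# (illustrative only; real TMSs also handle non-monotonic justifications,
# incremental retraction, and dependency-directed backtracking).
class TMS:
    def __init__(self):
        self.justifications = []      # (consequent, antecedents) pairs
        self.believed = set()         # nodes currently labelled IN

    def justify(self, consequent, antecedents=()):
        self.justifications.append((consequent, set(antecedents)))
        self._propagate()

    def retract_all(self, node):
        """Drop every justification for `node` and relabel from scratch."""
        self.justifications = [(c, a) for c, a in self.justifications if c != node]
        self._propagate()

    def _propagate(self):
        self.believed = set()
        changed = True
        while changed:
            changed = False
            for consequent, antecedents in self.justifications:
                if consequent not in self.believed and antecedents <= self.believed:
                    self.believed.add(consequent)
                    changed = True

tms = TMS()
tms.justify("battery_ok")                              # premise: no antecedents
tms.justify("lights_work", ["battery_ok", "bulb_ok"])
tms.justify("bulb_ok")
print(tms.believed)                                    # all three nodes are IN
tms.retract_all("battery_ok")
print(tms.believed)                                    # only 'bulb_ok' remains IN
```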
The Computational Complexity of Abduction
, 1991
"... The problem of abduction can be characterized as finding the best explanation of a set of data. In this paper we focus on one type of abduction in which the best explanation is the most plausible combination of hypotheses that explains all the data. We then present several computational complexity r ..."
Abstract - Cited by 139 (6 self)
The problem of abduction can be characterized as finding the best explanation of a set of data. In this paper we focus on one type of abduction in which the best explanation is the most plausible combination of hypotheses that explains all the data. We then present several computational complexity results demonstrating that this type of abduction is intractable (NP-hard) in general. In particular, choosing between incompatible hypotheses, reasoning about cancellation effects among hypotheses, and satisfying the maximum plausibility requirement are major factors leading to intractability. We also identify a tractable, but restricted, class of abduction problems.
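The brute-force sketch below illustrates the problem formulation described above: hypothesis subsets are scored by the product of their plausibilities, must cover every finding, and must avoid incompatible pairs. Its exponential enumeration is consistent with the intractability result; the findings, hypotheses, and plausibilities are invented.

```python
from itertools import combinations

# Toy abduction instance (illustrative): each hypothesis explains some findings
# and has a plausibility; some pairs of hypotheses are mutually incompatible.
EXPLAINS = {"flu": {"fever", "cough"}, "cold": {"cough"},
            "measles": {"fever", "rash"}, "allergy": {"rash"}}
PLAUSIBILITY = {"flu": 0.3, "cold": 0.4, "measles": 0.05, "allergy": 0.2}
INCOMPATIBLE = {frozenset({"cold", "flu"})}
FINDINGS = {"fever", "cough", "rash"}

def best_explanation(findings):
    """Brute force over all hypothesis subsets: keep those that cover every
    finding and contain no incompatible pair, and maximise the product of
    plausibilities. Exponential in the number of hypotheses."""
    best, best_score = None, -1.0
    hyps = list(EXPLAINS)
    for k in range(len(hyps) + 1):
        for subset in combinations(hyps, k):
            covered = set().union(*(EXPLAINS[h] for h in subset)) if subset else set()
            if not findings <= covered:
                continue
            if any(frozenset(pair) in INCOMPATIBLE for pair in combinations(subset, 2)):
                continue
            score = 1.0
            for h in subset:
                score *= PLAUSIBILITY[h]
            if score > best_score:
                best, best_score = set(subset), score
    return best, best_score

print(best_explanation(FINDINGS))   # ({'flu', 'allergy'}, 0.06)
```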
Decomposable negation normal form
- Journal of the ACM
, 2001
"... Knowledge compilation has been emerging recently as a new direction of research for dealing with the computational intractability of general propositional reasoning. According to this approach, the reasoning process is split into two phases: an off-line compilation phase and an online query-answer ..."
Abstract - Cited by 128 (17 self)
Knowledge compilation has been emerging recently as a new direction of research for dealing with the computational intractability of general propositional reasoning. According to this approach, the reasoning process is split into two phases: an off-line compilation phase and an online query-answering phase. In the off-line phase, the propositional theory is compiled into some target language, which is typically a tractable one. In the on-line phase, the compiled target is used to efficiently answer a (potentially) exponential number of queries. The main motivation behind knowledge compilation is to push as much of the computational overhead as possible into the offline phase, in order to amortize that overhead over all on-line queries. Another motivation behind compilation is to produce very simple on-line reasoning systems, which can be embedded cost-effectively into primitive computational platforms, such as those found in consumer electronics. One of the key aspects of any compilation approach is the target language into which the propositional theory is compiled. Previous target languages included Horn theories, prime implicates/implicants and ordered binary decision diagrams (OBDDs). We propose in this paper a new target compilation language, known as decomposable negation normal form (DNNF), and present a number of its properties that make it of interest to the broad community. Specifically, we
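As an illustrative sketch of why decomposability helps (not code from the paper): in the circuit representation assumed below, the children of every AND node mention disjoint sets of variables, so satisfiability can be decided in one bottom-up pass. The encoding and the example circuit are invented.

```python
# A minimal sketch of DNNF tractability (illustrative).
# A circuit node is ('lit', var, sign), ('and', children) or ('or', children).
# Decomposability: the children of every AND node mention disjoint variables,
# which is what makes the satisfiability test below sound.

def variables(node):
    if node[0] == 'lit':
        return {node[1]}
    return set().union(*(variables(c) for c in node[1]))

def decomposable(node):
    if node[0] == 'lit':
        return True
    if not all(decomposable(c) for c in node[1]):
        return False
    if node[0] == 'and':
        seen = set()
        for c in node[1]:
            vs = variables(c)
            if vs & seen:
                return False
            seen |= vs
    return True

def satisfiable(node):
    """Linear-time satisfiability on a DNNF: an AND is satisfiable iff all its
    (variable-disjoint) children are; an OR iff some child is."""
    if node[0] == 'lit':
        return True
    children_sat = [satisfiable(c) for c in node[1]]
    return all(children_sat) if node[0] == 'and' else any(children_sat)

# (A or B) and C: decomposable because {A, B} and {C} are disjoint.
circuit = ('and', [('or', [('lit', 'A', True), ('lit', 'B', True)]),
                   ('lit', 'C', True)])
print(decomposable(circuit), satisfiable(circuit))   # True True
```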