P-CLASSIC: A tractable probabilistic description logic
In Proceedings of AAAI-97, 1997
Cited by 119 (4 self)
Knowledge representation languages invariably reflect a tradeoff between expressivity and tractability. Evidence suggests that the compromise chosen by description logics is a particularly successful one. However, description logic (as with all variants of first-order logic) is severely limited in its ability to express uncertainty. In this paper, we present P-CLASSIC, a probabilistic version of the description logic CLASSIC. In addition to terminological knowledge, the language utilizes Bayesian networks to express uncertainty about the basic properties of an individual, the number of fillers for its roles, and the properties of these fillers. We provide a semantics for P-CLASSIC and an effective inference procedure for probabilistic subsumption: computing the probability that a random individual in class C is also in class D. The effectiveness of the algorithm relies on independence assumptions and on our ability to execute lifted inference: reasoning about similar individuals as a gr...
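The probabilistic subsumption query described in this abstract, the probability that a random individual in class C is also in class D, can be illustrated with a toy factored model. Everything below (the boolean properties, the probabilities, and the two concepts) is invented for illustration and is not the paper's P-CLASSIC machinery:

```python
from itertools import product

# Hypothetical toy model: an individual has two boolean properties with a
# factored distribution P(wings) * P(flies | wings). Concepts are just
# predicates over property assignments.
P_wings = {True: 0.3, False: 0.7}
P_flies_given_wings = {True: {True: 0.9, False: 0.1},
                       False: {True: 0.05, False: 0.95}}

def joint(wings, flies):
    return P_wings[wings] * P_flies_given_wings[wings][flies]

def prob_subsumption(C, D):
    """P(x in D | x in C) for a random individual x."""
    worlds = list(product([True, False], repeat=2))
    p_c = sum(joint(w, f) for w, f in worlds if C(w, f))
    p_cd = sum(joint(w, f) for w, f in worlds if C(w, f) and D(w, f))
    return p_cd / p_c

winged = lambda wings, flies: wings   # concept C: has wings
flier = lambda wings, flies: flies    # concept D: flies
print(round(prob_subsumption(winged, flier), 3))  # 0.9
```

The paper's contribution is doing this kind of computation efficiently and at the lifted level, rather than by brute-force enumeration over individuals as here.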
Answering Queries from Context-Sensitive Probabilistic Knowledge Bases
Theoretical Computer Science, 1996
Cited by 98 (0 self)
We define a language for representing context-sensitive probabilistic knowledge. A knowledge base consists of a set of universally quantified probability sentences that include context constraints, which allow inference to be focused on only the relevant portions of the probabilistic knowledge. We provide a declarative semantics for our language. We present a query answering procedure which takes a query Q and a set of evidence E and constructs a Bayesian network to compute P(Q|E). The posterior probability is then computed using any of a number of Bayesian network inference algorithms. We use the declarative semantics to prove the query procedure sound and complete. We use concepts from logic programming to justify our approach. Keywords: reasoning under uncertainty, Bayesian networks, probability model construction, logic programming. Submitted to the Theoretical Computer Science special issue on Uncertainty in Databases and Deductive Systems. This work was partially supported by NSF g...
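The final step of the procedure described above, computing P(Q|E) on the constructed network, can be sketched with inference by plain enumeration on a small hand-built Bayesian network. The network here is the classic burglary/alarm example, not one produced by the paper's construction procedure:

```python
from itertools import product

# Illustrative network: Burglary -> Alarm <- Earthquake, with standard CPTs.
P_b = {True: 0.01, False: 0.99}
P_e = {True: 0.02, False: 0.98}
P_a_given_be = {(True, True): 0.95, (True, False): 0.94,
                (False, True): 0.29, (False, False): 0.001}

def joint(b, e, a):
    pa = P_a_given_be[(b, e)]
    return P_b[b] * P_e[e] * (pa if a else 1 - pa)

def posterior(query, evidence):
    """P(query | evidence); both are dicts over variables 'b', 'e', 'a'."""
    def consistent(world, cond):
        return all(world[k] == v for k, v in cond.items())
    worlds = [dict(zip('bea', vals)) for vals in product([True, False], repeat=3)]
    p_ev = sum(joint(w['b'], w['e'], w['a']) for w in worlds
               if consistent(w, evidence))
    p_q_ev = sum(joint(w['b'], w['e'], w['a']) for w in worlds
                 if consistent(w, evidence) and consistent(w, query))
    return p_q_ev / p_ev

# P(Burglary | Alarm)
print(round(posterior({'b': True}, {'a': True}), 3))
```

Enumeration is exponential in the number of variables; the point of the paper's context constraints is precisely to keep the constructed network (and hence this computation) restricted to the relevant portion of the knowledge base.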
Recognizing Planned, Multi-Person Action
Computer Vision and Image Understanding, 2001
Cited by 79 (2 self)
This paper demonstrates how highly structured, multi-person action can be recognized from noisy perceptual data using visually grounded goal-based primitives and low-order temporal relationships that are integrated in a probabilistic framework. The representation, which is motivated by work in model-based object recognition and probabilistic plan recognition, makes four principal assumptions: (1) the goals of individual agents are natural atomic representational units for specifying the temporal relationships between agents engaged in group activities, (2) a high-level description of the temporal structure of the action using a small set of low-order temporal and logical constraints is adequate for representing the relationships between the agent goals for highly structured, multi-agent action recognition, (3) Bayesian networks provide a suitable mechanism for integrating multiple sources of uncertain visual perceptual feature evidence, and (4) an automatically generated Bayesian ...
Current Approaches to Handling Imperfect Information in Data and Knowledge Bases
1996
Cited by 70 (1 self)
This paper surveys methods for representing and reasoning with imperfect information. It opens with an attempt to classify the different types of imperfection that may pervade data, and a discussion of the sources of such imperfections. The classification is then used as a framework for considering work that explicitly concerns the representation of imperfect information, and related work on how imperfect information may be used as a basis for reasoning. The work that is surveyed is drawn from both the field of databases and the field of artificial intelligence. Both of these areas have long been concerned with the problems caused by imperfect information, and this paper stresses the relationships between the approaches developed in each.
Generating Bayesian Networks from Probability Logic Knowledge Bases
In Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, 1994
Cited by 60 (8 self)
We present a method for dynamically generating Bayesian networks from knowledge bases consisting of first-order probability logic sentences. We present a subset of probability logic sufficient for representing the class of Bayesian networks with discrete-valued nodes. We impose constraints on the form of the sentences that guarantee that the knowledge base contains all the probabilistic information necessary to generate a network. We define the concept of d-separation for knowledge bases and prove that a knowledge base with independence conditions defined by d-separation is a complete specification of a probability distribution. We present a network generation algorithm that, given an inference problem in the form of a query Q and a set of evidence E, generates a network to compute P(Q|E). We prove the algorithm to be correct.

1 Introduction
The flexibility of Bayesian networks for representing probabilistic dependencies and the relative efficiency of computational techniques for p...
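The graph-level notion of d-separation that this paper lifts to knowledge bases can be sketched with the standard path-blocking test on a small DAG. The graph and the implementation below are illustrative, not the paper's knowledge-base-level definition:

```python
from itertools import chain

# Illustrative DAG: A -> C <- B, C -> D (C is a collider on the A-B path).
parents = {'A': [], 'B': [], 'C': ['A', 'B'], 'D': ['C']}

def children(n):
    return [c for c, ps in parents.items() if n in ps]

def descendants(n):
    out, stack = set(), [n]
    while stack:
        for c in children(stack.pop()):
            if c not in out:
                out.add(c)
                stack.append(c)
    return out

def paths(x, y, path=None):
    """All undirected simple paths from x to y."""
    path = path or [x]
    if x == y:
        yield path
        return
    for nbr in chain(parents[x], children(x)):
        if nbr not in path:
            yield from paths(nbr, y, path + [nbr])

def d_separated(x, y, z):
    """True iff every undirected path from x to y is blocked given set z."""
    for path in paths(x, y):
        blocked = False
        for i in range(1, len(path) - 1):
            prev, node, nxt = path[i - 1], path[i], path[i + 1]
            collider = prev in parents[node] and nxt in parents[node]
            if collider:
                # A collider blocks unless it or a descendant is observed.
                if node not in z and not (descendants(node) & z):
                    blocked = True
                    break
            elif node in z:  # chain or fork node blocks when observed
                blocked = True
                break
        if not blocked:
            return False
    return True

print(d_separated('A', 'B', set()))   # True: the collider C blocks the path
print(d_separated('A', 'B', {'C'}))   # False: observing C opens the path
```

Note the collider behavior (observing C, or its descendant D, opens the A-B path); it is this criterion that the paper proves yields a complete specification of the distribution at the knowledge-base level.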
PRISM: a language for symbolic-statistical modeling
In Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI'97), 1997
Cited by 53 (20 self)
We present an overview of the symbolic-statistical modeling language PRISM, whose programs are not only probabilistic extensions of logic programs but are also able to learn from examples with the help of the EM learning algorithm. As a knowledge representation language appropriate for probabilistic reasoning, it can describe, in a single framework, various symbolic-statistical modeling formalisms that were previously known but unrelated. We show by examples, together with learning results, that the most popular probabilistic modeling formalisms, the hidden Markov model and Bayesian networks, are described by PRISM programs.
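As an illustration of the claim that hidden Markov models fit in such a framework, here is a tiny HMM with made-up parameters, with the probability of an observation sequence computed by the standard forward recursion. This is ordinary Python, not PRISM syntax; in PRISM the transition and emission choices would be probabilistic switches in a logic program:

```python
# Illustrative two-state HMM over the alphabet {'a', 'b'}.
states = ['s0', 's1']
init = {'s0': 0.5, 's1': 0.5}
trans = {'s0': {'s0': 0.7, 's1': 0.3}, 's1': {'s0': 0.4, 's1': 0.6}}
emit = {'s0': {'a': 0.9, 'b': 0.1}, 's1': {'a': 0.2, 'b': 0.8}}

def sequence_prob(obs):
    """P(obs) under the HMM, via the forward recursion."""
    alpha = {s: init[s] * emit[s][obs[0]] for s in states}
    for sym in obs[1:]:
        alpha = {s: sum(alpha[t] * trans[t][s] for t in states) * emit[s][sym]
                 for s in states}
    return sum(alpha.values())

print(round(sequence_prob('ab'), 4))  # 0.1915
```

PRISM's contribution is that the parameters of such a program need not be given: they can be estimated from observed sequences by EM, which is not shown here.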
Temporal-difference networks
In Advances in Neural Information Processing Systems 17, 2005
Cited by 44 (8 self)
We introduce a generalization of temporal-difference (TD) learning to networks of interrelated predictions. Rather than relating a single prediction to itself at a later time, as in conventional TD methods, a TD network relates each prediction in a set of predictions to other predictions in the set at a later time. TD networks can represent and apply TD learning to a much wider class of predictions than has previously been possible. Using a random-walk example, we show that these networks can be used to learn to predict by a fixed interval, which is not possible with conventional TD methods. Secondly, we show that if the inter-predictive relationships are made conditional on action, then the usual learning-efficiency advantage of TD methods over Monte Carlo (supervised learning) methods becomes particularly pronounced. Thirdly, we demonstrate that TD networks can learn predictive state representations that enable exact solution of a non-Markov problem. A very broad range of inter-predictive temporal relationships can be expressed in these networks. Overall we argue that TD networks represent a substantial extension of the abilities of TD methods and bring us closer to the goal of representing world knowledge in entirely predictive, grounded terms.

Temporal-difference (TD) learning is widely used in reinforcement learning methods to learn moment-to-moment predictions of total future reward (value functions). In this setting, TD learning is often simpler and more data-efficient than other methods. But the idea of TD learning can be used more generally than it is in reinforcement learning. TD learning is a general method for learning predictions whenever multiple predictions are made of the same event over time, value functions being just one example. The most pertinent of the more general uses of TD learning have been in learning models of an environment or ...
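The conventional TD(0) baseline that TD networks generalize can be sketched on the classic random-walk prediction task. The number of states, step size, and episode count below are illustrative choices, not the paper's experimental setup:

```python
import random

# 5-state random walk: start in the middle, step left/right uniformly;
# reward 1 on exiting right, 0 on exiting left. True values are i/6.
random.seed(0)
n_states, alpha = 5, 0.1
V = [0.5] * n_states  # value estimates, initialized arbitrarily

for _ in range(2000):
    s = n_states // 2
    while True:
        s2 = s + random.choice([-1, 1])
        if s2 < 0:              # left terminal: reward 0, terminal value 0
            V[s] += alpha * (0 - V[s])
            break
        if s2 >= n_states:      # right terminal: reward 1, terminal value 0
            V[s] += alpha * (1 - V[s])
            break
        V[s] += alpha * (V[s2] - V[s])  # TD(0): bootstrap from next state
        s = s2

print([round(v, 2) for v in V])  # approaches [1/6, 2/6, 3/6, 4/6, 5/6]
```

Here every state makes a single prediction (expected final reward) about the same future event; a TD network instead maintains a set of predictions and lets each one bootstrap from the others.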
Probabilistic Logic Learning
ACM SIGKDD Explorations: Special Issue on Multi-Relational Data Mining, 2004
Cited by 43 (9 self)
The past few years have witnessed a significant interest in probabilistic logic learning, i.e. in research lying at the intersection of probabilistic reasoning, logical representations, and machine learning. A rich variety of different formalisms and learning techniques have been developed. This paper provides an introductory survey and overview of the state-of-the-art in probabilistic logic learning through the identification of a number of important probabilistic, logical and learning concepts.
Adaptive Goal Recognition
In IJCAI-97: Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, 1997
Cited by 43 (0 self)
Because observing the same actions can warrant different conclusions depending on who executed the actions, a goal recognizer that works well on one person might not work well on another. Two problems that arise in providing user-specific recognition are how to consider the vast number of possible adaptations that might be made to the goal recognizer and how to evaluate a particular set of adaptations. For the first problem, we evaluate the use of hill-climbing to search the space of all combinations of an input set of adaptations. For the second problem, we present an algorithm that estimates the accuracy and coverage of a recognizer on a set of action sequences the individual has recently executed. We use these techniques to construct Adapt, a recognizer-independent unsupervised-learning algorithm for adapting a recognizer to a person's idiosyncratic behaviors. Our experiments in two domains show that applying Adapt to the BOCE recognizer can improve its performance ...
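The hill-climbing search over combinations of adaptations can be sketched as follows. The adaptation set and the scoring function standing in for estimated recognizer accuracy are entirely hypothetical, invented for illustration:

```python
def score(adaptations):
    """Hypothetical stand-in for evaluating recognizer accuracy on the
    user's recent action sequences: adaptations 0 and 2 help, 1 hurts,
    and 0 and 2 interact positively."""
    s = 0.6
    s += 0.1 * adaptations[0] + 0.15 * adaptations[2] - 0.05 * adaptations[1]
    s += 0.05 * (adaptations[0] and adaptations[2])
    return s

def hill_climb(n_adaptations):
    """Greedy search over bit-vectors of enabled/disabled adaptations."""
    current = [0] * n_adaptations
    best = score(current)
    improved = True
    while improved:
        improved = False
        for i in range(n_adaptations):   # try toggling each adaptation
            neighbor = current.copy()
            neighbor[i] = 1 - neighbor[i]
            s = score(neighbor)
            if s > best:
                current, best, improved = neighbor, s, True
    return current, best

config, s = hill_climb(3)
print(config, round(s, 2))  # [1, 0, 1] 0.9
```

Hill-climbing only guarantees a local optimum, which is why the abstract pairs it with an evaluation procedure: each candidate set of adaptations must be scored against the user's recently executed action sequences rather than against ground truth.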