Results 11 – 20 of 314
A model of information retrieval based on a terminological logic, 1993
Abstract

Cited by 101 (20 self)
According to the logical model of Information Retrieval (IR), the task of IR can be described as the extraction, from a given document base, of those documents d that, given a query q, make the formula d → q valid, where d and q are formulae of the chosen logic and “→” denotes the brand of logical implication formalized by the logic in question. In this paper, although essentially subscribing to this view, we propose that the logic to be chosen for this endeavour be a Terminological Logic (TL): accordingly, the IR task becomes that of singling out those documents d such that d ⊑ q, where d and q are terms of the chosen TL and “⊑” denotes subsumption between terms. We call this the terminological model of IR. TLs are particularly suitable for modelling IR; in fact, they can be employed: 1) in representing documents under a variety of aspects (e.g. structural, layout, semantic content); 2) in representing queries; 3) in representing lexical, “thesaural” knowledge. The fact that a single logical language can be used for all these representational endeavours ensures that all these sources of knowledge will participate in the retrieval process in a uniform and principled way. In this paper we introduce Mirtl, a TL for modelling IR according to the above guidelines; its syntax, formal semantics and inferential algorithm are described.
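The retrieval-as-subsumption idea in the abstract above can be illustrated with a toy sketch. Here concepts are modeled as plain sets of atomic features, and d ⊑ q is stood in for by set inclusion; this is an illustrative reduction, not Mirtl itself, and all names are invented for the example.

```python
# Toy sketch of the terminological model of IR: a document d is
# retrieved for query q when d ⊑ q, approximated here by feature-set
# inclusion (q's required features all hold of d). Illustrative only.

def subsumed_by(d, q):
    """d ⊑ q in this toy lattice: d is at least as specific as q."""
    return q <= d  # set inclusion stands in for term subsumption

def retrieve(documents, query):
    """Return the ids of documents d with d ⊑ query."""
    return [doc_id for doc_id, d in documents.items() if subsumed_by(d, query)]

documents = {
    "doc1": {"paper", "logic", "retrieval"},
    "doc2": {"paper", "retrieval"},
    "doc3": {"book", "logic"},
}
query = {"logic", "retrieval"}
print(retrieve(documents, query))  # only doc1 carries both features
```

Representing documents, queries, and thesaural knowledge in one such language is what lets all three participate uniformly in retrieval, as the abstract argues.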
Background to Qualitative Decision Theory
AI Magazine, 1999
Abstract

Cited by 95 (4 self)
This paper provides an overview of the field of qualitative decision theory: its motivating tasks and issues, its antecedents, and its prospects. Qualitative decision theory studies qualitative approaches to problems of decision making and their sound and effective reconciliation and integration with quantitative approaches. Though it inherits from a long tradition, the field offers a new focus on a number of important unanswered questions of common concern to artificial intelligence, economics, law, psychology, and management.
Markov Logic: A Unifying Framework for Statistical Relational Learning
Proceedings of the ICML-2004 Workshop on Statistical Relational Learning and its Connections to Other Fields, 2004
Abstract

Cited by 93 (0 self)
Interest in statistical relational learning (SRL) has grown rapidly in recent years. Several key SRL tasks have been identified, and a large number of approaches have been proposed. Increasingly, a
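The core Markov logic idea — weighted first-order formulas defining a distribution where a world's probability is proportional to exp of the sum of weights of the ground formulas it satisfies — can be sketched by brute-force enumeration. The constants, predicates, and the weight 1.5 below are assumed for illustration, not taken from the paper.

```python
import math
from itertools import product

# Minimal sketch of Markov logic semantics (illustrative, not the
# paper's system): P(world) ∝ exp(w * n_sat(world)), where n_sat counts
# satisfied groundings of the weighted formula Smokes(x) -> Cancer(x).

people = ["Anna", "Bob"]
w_implies = 1.5  # assumed weight for Smokes(x) -> Cancer(x)

def world_score(world):
    # world maps ground atoms like ("Smokes", "Anna") to True/False
    n_sat = sum(1 for p in people
                if (not world[("Smokes", p)]) or world[("Cancer", p)])
    return math.exp(w_implies * n_sat)

atoms = [(pred, p) for pred in ("Smokes", "Cancer") for p in people]
worlds = [dict(zip(atoms, vals))
          for vals in product([False, True], repeat=len(atoms))]

# Conditional query: P(Cancer(Anna) | Smokes(Anna)) by enumeration
num = sum(world_score(w) for w in worlds
          if w[("Smokes", "Anna")] and w[("Cancer", "Anna")])
den = sum(world_score(w) for w in worlds if w[("Smokes", "Anna")])
p_cancer = num / den
print(p_cancer)  # e^1.5 / (e^1.5 + 1) ≈ 0.8176
```

With the formula's weight at 0 this collapses to a uniform distribution; as the weight grows, worlds violating the implication become exponentially less likely — the usual reading of Markov logic weights.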
Probabilistic reasoning with answer sets
In Proceedings of LPNMR-7, 2004
Abstract

Cited by 91 (11 self)
We give a logic programming based account of probability and describe a declarative language, P-log, capable of reasoning that combines both logical and probabilistic arguments. Several nontrivial examples illustrate the use of P-log for knowledge representation.
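The combination of logical and probabilistic arguments that the abstract describes can be mimicked in a toy form: a random selection assigns probabilities to outcomes, and ordinary rules derive further atoms in each possible world. This is an illustrative rendering, not P-log's actual syntax or semantics.

```python
from fractions import Fraction

# Toy sketch in the spirit of P-log (illustrative encoding): the random
# part is a fair die roll; the logical part derives atoms from it.

rolls = {face: Fraction(1, 6) for face in range(1, 7)}  # random attribute

def consequences(roll):
    """Logical rules applied within one possible world."""
    facts = {f"roll({roll})"}
    if roll % 2 == 0:          # rule: even :- roll(X), X mod 2 = 0
        facts.add("even")
    if roll >= 5:              # rule: high :- roll(X), X >= 5
        facts.add("high")
    return facts

def prob(query):
    """P(query) = total probability of worlds entailing it."""
    return sum(p for roll, p in rolls.items() if query in consequences(roll))

print(prob("even"))  # 1/2
print(prob("high"))  # 1/3
```

The point of the design is that the probabilistic declarations and the derivation rules live in one program, so queries mix both kinds of knowledge.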
First-order probabilistic models for coreference resolution
In HLT/NAACL, 2007
Abstract

Cited by 86 (20 self)
Traditional noun phrase coreference resolution systems represent features only of pairs of noun phrases. In this paper, we propose a machine learning method that enables features over sets of noun phrases, resulting in a first-order probabilistic model for coreference. We outline a set of approximations that make this approach practical, and apply our method to the ACE coreference dataset, achieving a 45% error reduction over a comparable method that only considers features of pairs of noun phrases. This result demonstrates an example of how a first-order logic representation can be incorporated into a probabilistic model and scaled efficiently.
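The contrast the abstract draws — pairwise features versus features over whole sets of mentions — can be made concrete with a tiny example. The feature itself ("all mentions in a cluster agree on grammatical number") is invented for illustration; the paper's actual feature set and model are far richer.

```python
# Sketch of pairwise vs first-order (set-level) coreference features.
# A set-level feature inspects an entire proposed cluster at once,
# which no combination of independent pairwise checks can replicate
# when the model scores clusters jointly.

def pairwise_agree(m1, m2):
    """Classic pairwise feature: do two mentions agree on number?"""
    return m1["number"] == m2["number"]

def cluster_number_agreement(cluster):
    """First-order feature over a *set* of mentions."""
    numbers = {m["number"] for m in cluster}
    return len(numbers) == 1

cluster = [
    {"text": "the committee", "number": "sg"},
    {"text": "it", "number": "sg"},
    {"text": "they", "number": "pl"},
]
print(cluster_number_agreement(cluster))       # False: "they" disagrees
print(pairwise_agree(cluster[0], cluster[1]))  # True for this one pair
```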
Probabilistic Reasoning in Terminological Logics, 1994
Abstract

Cited by 86 (5 self)
In this paper a probabilistic extension for terminological knowledge representation languages is defined. Two kinds of probabilistic statements are introduced: statements about conditional probabilities between concepts and statements expressing uncertain knowledge about a specific object. The usual model-theoretic semantics for terminological logics are extended to define interpretations for the resulting probabilistic language. It is our main objective to find an adequate modelling of the way the two kinds of probabilistic knowledge are combined in commonsense inferences of probabilistic statements. Cross entropy minimization is a technique that turns out to be very well suited for achieving this end.
1 Introduction
Terminological knowledge representation languages (concept languages, terminological logics) are used to describe hierarchies of concepts. While the expressive power of the various languages that have been defined (e.g. KL-ONE [BS85], ALC [SSS91]) varies greatly in that ...
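The cross entropy minimization technique the abstract invokes has a standard finite form that is easy to sketch: given a prior P and a moment constraint E_Q[f] = c, the distribution minimizing KL(Q ∥ P) is an exponential tilting Q(x) ∝ P(x)·exp(λ f(x)), with λ found numerically. The bird example below is an assumed illustration, not drawn from the paper's terminological setting.

```python
import math

# Sketch of minimum cross entropy updating: tilt a prior so that a
# constrained expectation is met, changing it as little as possible in
# the KL sense. Bisection works because E_Q[f] is monotone in lam.

def min_cross_entropy(prior, f, c, lo=-50.0, hi=50.0):
    def tilted(lam):
        weights = {x: p * math.exp(lam * f(x)) for x, p in prior.items()}
        z = sum(weights.values())
        return {x: w / z for x, w in weights.items()}
    def mean(lam):
        q = tilted(lam)
        return sum(q[x] * f(x) for x in q)
    for _ in range(200):
        mid = (lo + hi) / 2
        if mean(mid) < c:
            lo = mid
        else:
            hi = mid
    return tilted((lo + hi) / 2)

# Generic knowledge: a bird flies with probability 0.9.
# Specific knowledge about this object constrains P(flies) to 0.5.
prior = {"flies": 0.9, "grounded": 0.1}
q = min_cross_entropy(prior, lambda x: 1.0 if x == "flies" else 0.0, 0.5)
print(round(q["flies"], 6))  # ≈ 0.5
```

The appeal of this rule, which the paper exploits, is that it combines generic (conditional-probability) and specific (assertional) knowledge without discarding the prior's structure.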
Reasoning about Noisy Sensors and Effectors in the Situation Calculus, 2008
Abstract

Cited by 81 (5 self)
Agents interacting with an incompletely known world need to be able to reason about the effects of their actions, and to gain further information about that world they need to use sensors of some sort. Unfortunately, both the effects of actions and the information returned from sensors are subject to error. To cope with such uncertainties, the agent can maintain probabilistic beliefs about the state of the world. With probabilistic beliefs the agent will be able to quantify the likelihood of the various outcomes of its actions and is better able to utilize the information gathered from its error-prone actions and sensors. In this paper, we present a model in which we can reason about an agent’s probabilistic degrees of belief and the manner in which these beliefs change as various actions are executed. We build on a general logical theory of action developed by Reiter and others, formalized in the situation calculus. We propose a simple axiomatization that captures an agent’s state of belief and the manner in which these beliefs change when actions are executed. Our model displays a number of intuitively reasonable properties.
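The belief-change behaviour the abstract describes can be sketched numerically: sensing conditions the belief distribution by Bayes' rule under an assumed sensor error model, while a noisy action pushes probability mass through a stochastic transition. The door example and the accuracy/success numbers are illustrative assumptions, not the paper's axiomatization.

```python
# Toy sketch of belief update under noisy sensing and acting
# (illustrative; the paper formalizes this in the situation calculus
# over Reiter-style action theories).

def sense_update(belief, reading, accuracy=0.8):
    """Bayes update for a sensor that reports the true state with
    probability `accuracy` (an assumed error model)."""
    posterior = {s: p * (accuracy if s == reading else 1 - accuracy)
                 for s, p in belief.items()}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

def act_update(belief, success=0.9):
    """Noisy toggle effector: flips the door with probability `success`."""
    flip = {"open": "closed", "closed": "open"}
    new = {s: 0.0 for s in belief}
    for s, p in belief.items():
        new[flip[s]] += p * success
        new[s] += p * (1 - success)
    return new

belief = {"open": 0.5, "closed": 0.5}
belief = sense_update(belief, "open")  # sensor reports "open"
print(round(belief["open"], 3))        # 0.8
belief = act_update(belief)            # attempt to close the door
print(round(belief["closed"], 3))      # 0.74
```

Interleaving the two updates is exactly the pattern the paper's successor-state axioms for belief are designed to capture in logical form.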
Logic programs with annotated disjunctions
In Proc. Int’l Conf. on Logic Programming, 2004
Abstract

Cited by 76 (5 self)
Current literature offers a number of different approaches to what could generally be called "probabilistic logic programming". These are usually based on Horn clauses. Here, we introduce a new formalism, Logic Programs with Annotated Disjunctions, based on disjunctive logic programs. In this formalism, each of the disjuncts in the head of a clause is annotated with a probability. Viewing such a set of probabilistic disjunctive clauses as a probabilistic disjunction of normal logic programs allows us to derive a possible world semantics, more precisely, a probability distribution on the set of all Herbrand interpretations. We demonstrate the strength of this formalism by some examples and compare it to related work.
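The possible-world semantics sketched in the abstract — choose one annotated disjunct per clause to obtain a normal program, then weight each such world by the product of the chosen annotations — can be mimicked by enumeration. The coin-toss program below is a standard illustrative example encoded ad hoc, not the paper's syntax.

```python
from itertools import product
from fractions import Fraction

# Sketch of LPAD possible-world semantics: each clause
#   (heads : 1/2) ; (tails : 1/2) :- toss(Coin).
# contributes one chosen disjunct per world; a world's probability is
# the product of the chosen annotations. Illustrative encoding only.

clauses = {
    "coin1": [("heads", Fraction(1, 2)), ("tails", Fraction(1, 2))],
    "coin2": [("heads", Fraction(1, 2)), ("tails", Fraction(1, 2))],
}

def prob(query):
    """P(query) = sum of probabilities of worlds where it holds."""
    total = Fraction(0)
    for choice in product(*clauses.values()):
        world_p = Fraction(1)
        atoms = set()
        for coin, (outcome, p) in zip(clauses, choice):
            world_p *= p
            atoms.add((outcome, coin))
        if query(atoms):
            total += world_p
    return total

some_heads = lambda atoms: any(o == "heads" for o, _ in atoms)
print(prob(some_heads))  # 3/4
```

Summing world probabilities in this way is precisely the "distribution over Herbrand interpretations" reading that the formalism derives.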
Probabilistic Theorem Proving
Abstract

Cited by 70 (23 self)
Many representation schemes combining first-order logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which takes logical structure into account. We propose the first method that has the full power of both graphical model inference and first-order theorem proving (in finite domains with Herbrand interpretations). We first define probabilistic theorem proving, which generalizes both, as the problem of computing the probability of a logical formula given the probabilities or weights of a set of formulas. We then show how this can be reduced to the problem of lifted weighted model counting, and develop an efficient algorithm for the latter. We prove the correctness of this algorithm, investigate its properties, and show how it generalizes previous approaches. Experiments show that it greatly outperforms lifted variable elimination when logical structure is present. Finally, we propose an algorithm for approximate probabilistic theorem proving, and show that it can greatly outperform lifted belief propagation.
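The weighted model counting problem that the abstract reduces probabilistic theorem proving to has a simple propositional core, shown below by brute-force enumeration; this deliberately ignores the lifting that makes the paper's algorithm efficient, and the weights and formulas are assumed for illustration.

```python
from itertools import product

# Sketch of (propositional) weighted model counting: each literal has
# a weight, a model's weight is the product of its literal weights, and
# WMC(F) sums the weights of F's models. Then
#   P(query | KB) = WMC(KB ∧ query) / WMC(KB).

weights = {  # assumed literal weights
    ("x", True): 2.0, ("x", False): 1.0,
    ("y", True): 3.0, ("y", False): 1.0,
}
variables = ["x", "y"]

def wmc(formula):
    total = 0.0
    for vals in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, vals))
        if formula(model):
            w = 1.0
            for v in variables:
                w *= weights[(v, model[v])]
            total += w
    return total

kb = lambda m: m["x"] or m["y"]   # knowledge base: x ∨ y
query = lambda m: m["x"]          # query: x
p = wmc(lambda m: kb(m) and query(m)) / wmc(kb)
print(p)  # 8/11 ≈ 0.727
```

The paper's contribution is doing this computation at the lifted (first-order) level, exploiting logical structure rather than grounding everything out as this sketch does.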