Results 1–10 of 73
Probabilistic Theorem Proving
"... Many representation schemes combining firstorder logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which take logic ..."
Cited by 70 (23 self)
Abstract
Many representation schemes combining first-order logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which takes logical structure into account. We propose the first method that has the full power of both graphical model inference and first-order theorem proving (in finite domains with Herbrand interpretations). We first define probabilistic theorem proving, the generalization of the two, as the problem of computing the probability of a logical formula given the probabilities or weights of a set of formulas. We then show how this can be reduced to the problem of lifted weighted model counting, and develop an efficient algorithm for the latter. We prove the correctness of this algorithm, investigate its properties, and show how it generalizes previous approaches. Experiments show that it greatly outperforms lifted variable elimination when logical structure is present. Finally, we propose an algorithm for approximate probabilistic theorem proving, and show that it can greatly outperform lifted belief propagation.
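To make the reduction concrete at the propositional (ground) level: the probability of a query is a ratio of two weighted model counts. The brute-force sketch below is illustrative only, assuming a clause encoding of the theory; the paper's contribution is performing this count at the lifted, first-order level.

from itertools import product

def wmc(variables, clauses, weight):
    # Brute-force weighted model count: sum, over all assignments that
    # satisfy every clause, of the product of per-literal weights.
    # clauses: list of clauses; each clause is a list of (var, polarity)
    # literals and is satisfied if any literal matches the assignment.
    total = 0.0
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        if all(any(a[v] == pol for v, pol in clause) for clause in clauses):
            w = 1.0
            for v in variables:
                w *= weight(v, a[v])
            total += w
    return total

def query_probability(variables, theory, query, weight):
    # Pr(query | theory) = WMC(theory AND query) / WMC(theory).
    return wmc(variables, theory + query, weight) / wmc(variables, theory, weight)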
Counting belief propagation
 In Proc. UAI-09
, 2009
"... A major benefit of graphical models is that most knowledge is captured in the model structure. Many models, however, produce inference problems with a lot of symmetries not reflected in the graphical structure and hence not exploitable by efficient inference techniques such as belief propagation (BP ..."
Cited by 53 (20 self)
Abstract
A major benefit of graphical models is that most knowledge is captured in the model structure. Many models, however, produce inference problems with many symmetries not reflected in the graphical structure and hence not exploitable by efficient inference techniques such as belief propagation (BP). In this paper, we present a new and simple BP algorithm, called counting BP, that exploits such additional symmetries. Starting from a given factor graph, counting BP first constructs a compressed factor graph of cluster-nodes and cluster-factors, corresponding to sets of nodes and factors that are indistinguishable given the evidence. It then runs a modified BP algorithm on the compressed graph that is equivalent to running BP on the original factor graph. Our experiments show that counting BP is applicable to a variety of important AI tasks such as (dynamic) relational models and Boolean model counting, and that significant efficiency gains are obtainable, often by orders of magnitude.
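The compression step described here can be approximated by iterated "color passing": variables and factors repeatedly absorb the multiset of their neighbors' colors until indistinguishable ones share a color. The sketch below is a simplification (real counting BP also tracks argument positions and message counts); all names are illustrative.

def compress(var_factors, factor_vars, factor_kind, evidence, rounds=5):
    # var_factors: dict variable -> list of factors it appears in
    # factor_vars: dict factor -> list of its argument variables
    # factor_kind: dict factor -> id of its potential table
    # evidence:    dict variable -> observed value (absent if hidden)
    vcol = {v: "V:%s" % evidence.get(v) for v in var_factors}
    fcol = {f: "F:%s" % factor_kind[f] for f in factor_vars}
    for _ in range(rounds):
        # factors absorb the (sorted) colors of their variables ...
        fcol = {f: fcol[f] + "|" + ",".join(sorted(vcol[v] for v in factor_vars[f]))
                for f in factor_vars}
        # ... then variables absorb the colors of their factors
        vcol = {v: vcol[v] + "|" + ",".join(sorted(fcol[f] for f in var_factors[v]))
                for v in var_factors}
    clusters = {}
    for v, color in vcol.items():
        clusters.setdefault(color, []).append(v)
    return list(clusters.values())  # groups of indistinguishable variables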
Gradient-based boosting for Statistical Relational Learning: The Relational Dependency Network Case
, 2011
"... Abstract. Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, co ..."
Cited by 39 (17 self)
Abstract
Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn quickly estimate a very expressive model. Our experimental results on several different data sets show that this boosting method results in efficient learning of RDNs compared to state-of-the-art statistical relational learning approaches.
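The core loop is standard functional-gradient boosting applied per conditional distribution: each iteration fits a regression tree to the pointwise gradients y - P(y=1|x). A minimal sketch, with fit_regression_tree an assumed helper standing in for the relational tree learner the paper uses:

import math

def boost_conditional(examples, fit_regression_tree, iterations=20):
    # examples: list of (features, label) pairs with label in {0, 1}.
    # fit_regression_tree(pairs) fits a tree to (features, residual)
    # pairs and returns a callable features -> real value (assumed).
    trees = []
    psi = lambda x: sum(t(x) for t in trees)
    prob = lambda x: 1.0 / (1.0 + math.exp(-psi(x)))
    for _ in range(iterations):
        # Pointwise gradient of the log-likelihood: I[y = 1] - P(y = 1 | x).
        residuals = [(x, y - prob(x)) for x, y in examples]
        trees.append(fit_regression_tree(residuals))
    return prob  # P(y = 1 | features) as the sigmoid of the summed trees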
Lifted Probabilistic Inference by First-Order Knowledge Compilation
 In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence
, 2011
"... Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indis ..."
Cited by 37 (12 self)
Abstract
Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indistinguishable groups of objects as one, help mitigate this cost. Seeking inspiration from logical inference, where lifted inference (e.g., resolution) is commonly performed, we develop a model-theoretic approach to probabilistic lifted inference. Our algorithm compiles a first-order probabilistic theory into a first-order deterministic decomposable negation normal form (d-DNNF) circuit. Compilation offers the advantage that inference is polynomial in the size of the circuit. Furthermore, by borrowing techniques from the knowledge compilation literature, our algorithm effectively exploits the logical structure (e.g., context-specific independencies) within the first-order model, which allows more computation to be done at the lifted level. An empirical comparison demonstrates the utility of the proposed approach.
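Why compilation makes inference polynomial in circuit size: on a smooth, deterministic, decomposable circuit, a weighted model count is one bottom-up pass, multiplying at AND nodes and summing at OR nodes. A propositional sketch with an assumed node encoding (the paper compiles to and evaluates first-order circuits):

def circuit_wmc(node, weight):
    # node: ('lit', var, polarity) or ('and', children) or ('or', children)
    # weight(var, polarity): weight of the literal.
    kind = node[0]
    if kind == 'lit':
        return weight(node[1], node[2])
    if kind == 'and':
        # Decomposability: children share no variables, so weights multiply.
        result = 1.0
        for child in node[1]:
            result *= circuit_wmc(child, weight)
        return result
    if kind == 'or':
        # Determinism (plus smoothness): children are mutually exclusive
        # and mention the same variables, so their counts simply add.
        return sum(circuit_wmc(child, weight) for child in node[1])
    raise ValueError("unknown node kind: %r" % kind)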
Speeding up inference in Markov logic networks by preprocessing to reduce the size of the resulting grounded network
 In Proc. IJCAI-09
"... Statisticalrelational reasoning has received much attention due to its ability to robustly model complex relationships. A key challenge is tractable inference, especially in domains involving many objects, due to the combinatorics involved. One can accelerate inference by using approximation techni ..."
Cited by 28 (2 self)
Abstract
Statistical-relational reasoning has received much attention due to its ability to robustly model complex relationships. A key challenge is tractable inference, especially in domains involving many objects, due to the combinatorics involved. One can accelerate inference by using approximation techniques, “lazy” algorithms, etc. We consider Markov Logic Networks (MLNs), which involve counting how often logical formulae are satisfied. We propose a preprocessing algorithm that can substantially reduce the effective size of MLNs by rapidly counting how often the evidence satisfies each formula, regardless of the truth values of the query literals. This is a general preprocessing method that loses no information and can be used with any MLN inference algorithm. We evaluate our algorithm empirically in three real-world domains, greatly reducing the work needed during subsequent inference. Such reduction might even allow exact inference to be performed when sampling methods would otherwise be necessary.
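The preprocessing idea can be sketched with a three-valued evaluation: a grounding whose truth value is already fixed by the evidence contributes only a constant to the formula's count and can be dropped from the ground network. Illustrative only; evaluate_formula is an assumed helper returning True, False, or None (undecided):

from itertools import product

def preprocess(evaluate_formula, domains, evidence):
    # domains: one iterable of constants per logical variable.
    # evidence: dict mapping ground atoms to True/False.
    # evaluate_formula(binding, evidence) -> True/False if the evidence
    # alone decides the grounding, None if query atoms still matter.
    decided_true, undecided = 0, []
    for binding in product(*domains):
        value = evaluate_formula(binding, evidence)
        if value is None:
            undecided.append(binding)   # must stay in the ground network
        elif value:
            decided_true += 1           # folded into a constant count
        # decided-false groundings add zero to the count and are dropped
    return decided_true, undecided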
Bisimulation-based approximate lifted inference
"... There has been a great deal of recent interest in methods for performing lifted inference; however, most of this work assumes that the firstorder model is given as input to the system. Here, we describe lifted inference algorithms that determine symmetries and automatically lift the probabilistic m ..."
Cited by 25 (2 self)
Abstract
There has been a great deal of recent interest in methods for performing lifted inference; however, most of this work assumes that the first-order model is given as input to the system. Here, we describe lifted inference algorithms that determine symmetries and automatically lift the probabilistic model to speed up inference. In particular, we describe approximate lifted inference techniques that allow the user to trade off inference accuracy for computational efficiency by using a handful of tunable parameters, while keeping the error bounded. Our algorithms are closely related to the graph-theoretic concept of bisimulation. We report experiments on both synthetic and real data showing that, in the presence of symmetries, runtimes for inference can be improved significantly, with approximate lifted inference providing orders-of-magnitude speedups over ground inference.
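Bisimulation here acts as a symmetry detector: the coarsest partition in which two nodes share a block only if their labels and their neighbors' blocks match. A generic partition-refinement sketch (the paper operates on a graph built from the probabilistic model; the encoding below is illustrative):

def bisimulation_partition(nodes, successors, label):
    # successors: dict node -> list of successor nodes; label(node) gives
    # the initial color (assumed comparable, e.g., a string). Refinement
    # only ever splits blocks, so we stop when the block count stops growing.
    block = {n: label(n) for n in nodes}
    while True:
        signature = {n: (block[n], tuple(sorted(block[m] for m in successors[n])))
                     for n in nodes}
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        refined = {n: ids[signature[n]] for n in nodes}
        if len(set(refined.values())) == len(set(block.values())):
            return refined  # stable: blocks of bisimilar nodes
        block = refined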
Lifted Aggregation in Directed First-order Probabilistic Models
"... As exact inference for firstorder probabilistic graphical models at the propositional level can be formidably expensive, there is an ongoing effort to design efficient lifted inference algorithms for such models. This paper discusses directed firstorder models that require an aggregation operator ..."
Cited by 24 (1 self)
Abstract
As exact inference for first-order probabilistic graphical models at the propositional level can be formidably expensive, there is an ongoing effort to design efficient lifted inference algorithms for such models. This paper discusses directed first-order models that require an aggregation operator when a parent random variable is parameterized by logical variables that are not present in a child random variable. We introduce a new data structure, aggregation parfactors, to describe aggregation in directed first-order models. We show how to extend Milch et al.'s C-FOVE algorithm to perform lifted inference in the presence of aggregation parfactors. We also show that there are cases where the polynomial time complexity (in the domain size of logical variables) of the C-FOVE algorithm can be reduced to logarithmic time complexity using aggregation parfactors.
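The logarithmic-time claim rests on the aggregate being associative and the parents interchangeable, which permits divide-and-conquer over the domain size. The classic instance is square-and-multiply; for example, an OR aggregate over n independent, interchangeable Boolean parents (an illustrative special case, not the paper's general construction):

def power(x, n):
    # x**n in O(log n) multiplications by repeated squaring.
    result = 1.0
    while n > 0:
        if n & 1:
            result *= x
        x *= x
        n >>= 1
    return result

def or_aggregate(p_parent, n):
    # P(OR of n parents is true) when each parent is independently
    # true with probability p_parent: 1 - (1 - p)^n.
    return 1.0 - power(1.0 - p_parent, n)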
Lifted Inference Seen from the Other Side: The Tractable Features
"... Lifted Inference algorithms for representations that combine firstorder logic and graphical models have been the focus of much recent research. All lifted algorithms developed to date are based on the same underlying idea: take a standard probabilistic inference algorithm (e.g., variable eliminatio ..."
Cited by 21 (5 self)
Abstract
Lifted inference algorithms for representations that combine first-order logic and graphical models have been the focus of much recent research. All lifted algorithms developed to date are based on the same underlying idea: take a standard probabilistic inference algorithm (e.g., variable elimination, belief propagation, etc.) and improve its efficiency by exploiting repeated structure in the first-order model. In this paper, we propose an approach from the other side: we use techniques from logic for probabilistic inference. In particular, we define a set of rules that look only at the logical representation to identify models for which exact efficient inference is possible. Our rules yield new tractable classes that could not be solved efficiently by any of the existing techniques.
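Organizationally, such a rule system is a set of predicates over the logical representation: each rule inspects the formulas syntactically and, if it fires, certifies the model tractable. The rule below is a placeholder showing the shape of such a check, not one of the paper's actual rules:

def logical_variables(literal):
    # literal: (predicate, args, positive); string-typed arguments are
    # logical variables, anything else is a constant (assumed encoding).
    _, args, _ = literal
    return {a for a in args if isinstance(a, str)}

def single_variable_rule(formulas):
    # Placeholder rule: every formula mentions at most one logical variable.
    return all(len(set().union(*map(logical_variables, f))) <= 1
               for f in formulas)

def certify_tractable(formulas, rules):
    # Return the names of all rules that certify the model.
    return [name for name, rule in rules if rule(formulas)]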
On the Completeness of First-Order Knowledge Compilation for Lifted Probabilistic Inference
"... Probabilistic logics are receiving a lot of attention today because of their expressive power for knowledge representation and learning. However, this expressivity is detrimental to the tractability of inference, when done at the propositional level. To solve this problem, various lifted inference a ..."
Cited by 19 (7 self)
Abstract
Probabilistic logics are receiving a lot of attention today because of their expressive power for knowledge representation and learning. However, this expressivity is detrimental to the tractability of inference when done at the propositional level. To solve this problem, various lifted inference algorithms have been proposed that reason at the first-order level, about groups of objects as a whole. Despite the existence of various lifted inference approaches, there are currently no completeness results about these algorithms. The key contribution of this paper is that we introduce a formal definition of lifted inference that allows us to reason about the completeness of lifted inference algorithms relative to a particular class of probabilistic models. We then show how to obtain a completeness result using a first-order knowledge compilation approach for theories of formulae containing up to two logical variables.
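The completeness result is stated relative to a syntactically defined class, so membership is a simple scan: every formula may mention at most two distinct logical variables. A sketch under an assumed clause encoding:

def in_two_variable_fragment(formulas):
    # formulas: each formula is a list of (predicate, args, positive)
    # literals; string-typed arguments are logical variables (assumed
    # encoding). The class covered by the completeness result allows at
    # most two distinct logical variables per formula.
    for f in formulas:
        variables = {a for _, args, _ in f for a in args
                     if isinstance(a, str)}
        if len(variables) > 2:
            return False
    return True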