Results 1 - 10 of 126
Lifted first-order belief propagation
- In Association for the Advancement of Artificial Intelligence (AAAI), 2008
"... Unifying first-order logic and probability is a long-standing goal of AI, and in recent years many representations combining aspects of the two have been proposed. However, inference in them is generally still at the level of propositional logic, creating all ground atoms and formulas and applying s ..."
Abstract
-
Cited by 115 (15 self)
- Add to MetaCart
Unifying first-order logic and probability is a long-standing goal of AI, and in recent years many representations combining aspects of the two have been proposed. However, inference in them is generally still at the level of propositional logic, creating all ground atoms and formulas and applying standard probabilistic inference methods to the resulting network. Ideally, inference should be lifted as in first-order logic, handling whole sets of indistinguishable objects together, in time independent of their cardinality. Poole (2003) and Braz et al. (2005, 2006) developed a lifted version of the variable elimination algorithm, but it is extremely complex, generally does not scale to realistic domains, and has only been applied to very small artificial problems. In this paper we propose the first lifted version of a scalable probabilistic inference algorithm, belief propagation (loopy or not). Our approach is based on first constructing a lifted network, where each node represents a set of ground atoms that all pass the same messages during belief propagation. We then run belief propagation on this network. We prove the correctness and optimality of our algorithm. Experiments show that it can greatly reduce the cost of inference.
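The construction the abstract describes, grouping ground atoms whose belief propagation messages would be identical and running BP once per group, can be illustrated with a small sketch. This is a simplified colour-refinement-style construction over a factor graph, assuming a particular atom/factor encoding; it is not the authors' implementation.

```python
from collections import defaultdict

# Minimal sketch of lifted network construction: iteratively refine "colours"
# (signatures) of ground atoms until atoms with the same colour would send and
# receive identical BP messages; each colour class becomes one lifted node.
# The encoding (atoms as strings, factors as (potential_id, [atoms])) is an
# assumption for illustration.

def build_lifted_groups(atoms, factors, evidence):
    atom_color = {a: evidence.get(a, "?") for a in atoms}
    factor_color = {i: pid for i, (pid, _) in enumerate(factors)}
    while True:
        new_atom_color = {
            a: (atom_color[a],
                tuple(sorted(factor_color[i] for i, (_, args) in enumerate(factors)
                             if a in args)))
            for a in atoms
        }
        new_factor_color = {
            i: (pid, tuple(new_atom_color[a] for a in args))
            for i, (pid, args) in enumerate(factors)
        }
        stable = len(set(new_atom_color.values())) == len(set(atom_color.values()))
        atom_color, factor_color = new_atom_color, new_factor_color
        if stable:
            break
    groups = defaultdict(list)
    for a in atoms:
        groups[atom_color[a]].append(a)
    return list(groups.values())   # run standard BP once per group instead of per atom

# Example: atoms with identical evidence and identical factor neighbourhoods
# end up in the same group, so one message computation serves them all.
atoms = ["Smokes(Anna)", "Smokes(Bob)", "Smokes(Carl)"]
factors = [("smoking_prior", [a]) for a in atoms]
print(build_lifted_groups(atoms, factors, evidence={"Smokes(Anna)": True}))
```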
First-order probabilistic models for coreference resolution
- In HLT/NAACL, 2007
"... Traditional noun phrase coreference resolution systems represent features only of pairs of noun phrases. In this paper, we propose a machine learning method that enables features over sets of noun phrases, resulting in a first-order probabilistic model for coreference. We outline a set of approximat ..."
Abstract
-
Cited by 86 (20 self)
- Add to MetaCart
(Show Context)
Traditional noun phrase coreference resolution systems represent features only of pairs of noun phrases. In this paper, we propose a machine learning method that enables features over sets of noun phrases, resulting in a first-order probabilistic model for coreference. We outline a set of approximations that make this approach practical, and apply our method to the ACE coreference dataset, achieving a 45% error reduction over a comparable method that only considers features of pairs of noun phrases. This result demonstrates an example of how a first-order logic representation can be incorporated into a probabilistic model and scaled efficiently.
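To make the contrast with pairwise models concrete, here is a small illustrative sketch of a "first-order" feature that fires on an entire set of noun phrases (a hypothesised entity) rather than on one pair at a time; the mention fields and the specific feature are assumptions, not the paper's feature set.

```python
# Pairwise vs. first-order (set-level) coreference features, as an illustration.
# The mention dictionaries and the gender-agreement feature are assumed examples.

def pairwise_gender_match(m1, m2):
    # a traditional feature: defined on one pair of noun phrases
    return m1["gender"] == m2["gender"]

def cluster_all_genders_agree(cluster):
    # a first-order feature: defined on the whole set of noun phrases at once
    return len({m["gender"] for m in cluster}) <= 1

cluster = [{"text": "Hillary Clinton", "gender": "f"},
           {"text": "she", "gender": "f"},
           {"text": "the senator", "gender": "f"}]
print(cluster_all_genders_agree(cluster))   # True: the feature fires on the set
```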
Lifted probabilistic inference with counting formulas
- In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI-2008), 2008
"... Lifted inference algorithms exploit repeated structure in probabilistic models to answer queries efficiently. Previous work such as de Salvo Braz et al.’s first-order variable elimination (FOVE) has focused on the sharing of potentials across interchangeable random variables. In this paper, we also ..."
Abstract
-
Cited by 73 (11 self)
- Add to MetaCart
Lifted inference algorithms exploit repeated structure in probabilistic models to answer queries efficiently. Previous work such as de Salvo Braz et al.’s first-order variable elimination (FOVE) has focused on the sharing of potentials across interchangeable random variables. In this paper, we also exploit interchangeability within individual potentials by introducing counting formulas, which indicate how many of the random variables in a set have each possible value. We present a new lifted inference algorithm, C-FOVE, that not only handles counting formulas in its input, but also creates counting formulas for use in intermediate potentials. C-FOVE can be described succinctly in terms of six operators, along with heuristics for when to apply them. Because counting formulas capture dependencies among large numbers of variables compactly, C-FOVE achieves asymptotic speed improvements compared to FOVE.
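The effect of a counting formula can be shown with a small worked sketch: a potential over n interchangeable Boolean atoms that depends only on how many of them are true can be summed over in O(n) instead of O(2^n). The particular potential g(k) below is an assumed example, not one from the paper.

```python
from math import comb

# Sketch: when a potential over n interchangeable Boolean atoms depends only on
# the count of true atoms, a counting formula replaces the 2**n-entry table with
# n + 1 entries; each entry is weighted by the number of ground assignments
# (n choose k) that realise that count. g(k) is an assumed example potential.

def partition_function_with_counting(n, g):
    return sum(comb(n, k) * g(k) for k in range(n + 1))   # O(n) terms, not 2**n

g = lambda k: 2.0 ** k
print(partition_function_with_counting(100, g))   # equals 3**100, without a 2**100-term sum
```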
Probabilistic Theorem Proving
"... Many representation schemes combining firstorder logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which take logic ..."
Abstract
-
Cited by 70 (23 self)
- Add to MetaCart
(Show Context)
Many representation schemes combining first-order logic and probability have been proposed in recent years. Progress in unifying logical and probabilistic inference has been slower. Existing methods are mainly variants of lifted variable elimination and belief propagation, neither of which takes logical structure into account. We propose the first method that has the full power of both graphical model inference and first-order theorem proving (in finite domains with Herbrand interpretations). We first define probabilistic theorem proving, a generalization of both, as the problem of computing the probability of a logical formula given the probabilities or weights of a set of formulas. We then show how this can be reduced to the problem of lifted weighted model counting, and develop an efficient algorithm for the latter. We prove the correctness of this algorithm, investigate its properties, and show how it generalizes previous approaches. Experiments show that it greatly outperforms lifted variable elimination when logical structure is present. Finally, we propose an algorithm for approximate probabilistic theorem proving, and show that it can greatly outperform lifted belief propagation.
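The target problem of the reduction, weighted model counting, can be stated with a brute-force sketch: sum the product of literal weights over satisfying assignments, and obtain query probabilities as a ratio of two such counts. The enumeration below only illustrates the definition under an assumed clause encoding; the paper's contribution is a lifted algorithm that avoids this enumeration.

```python
from itertools import product

# Brute-force weighted model counting (WMC), to illustrate the problem that
# probabilistic theorem proving reduces queries to. Clauses are sets of
# (variable, sign) literals; weights map (variable, value) to a nonnegative
# weight. This encoding is an assumption for the sketch.

def wmc(clauses, weights, variables):
    total = 0.0
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == sign for v, sign in clause) for clause in clauses):
            w = 1.0
            for v in variables:
                w *= weights[(v, assignment[v])]
            total += w
    return total

# P(query | theory) = wmc(theory + query) / wmc(theory)
variables = ["p", "q"]
weights = {("p", True): 0.3, ("p", False): 0.7, ("q", True): 0.6, ("q", False): 0.4}
theory = [{("p", True), ("q", True)}]            # the single clause p OR q
print(wmc(theory, weights, variables))           # 0.72
```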
Compiling relational bayesian networks for exact inference
- International Journal of Approximate Reasoning, 2004
"... We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available Primula tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evalua ..."
Abstract
-
Cited by 69 (15 self)
- Add to MetaCart
(Show Context)
We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available Primula tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing successful compilation and efficient inference on relational Bayesian networks, whose Primula-generated propositional instances have thousands of variables, and whose jointrees have clusters with hundreds of variables.
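The online inference step described here, evaluating and differentiating a compiled arithmetic circuit in time linear in its size, can be sketched as an upward evaluation pass followed by a downward differentiation pass; partial derivatives with respect to the evidence-indicator leaves then give the unnormalized marginals. The node encoding below is an assumption for illustration, not Primula's or ACE's representation.

```python
# Sketch: linear-time evaluation and differentiation of an arithmetic circuit
# made of sum nodes, product nodes, and numeric leaves (indicators/parameters).
# Nodes must be listed in topological order, children before parents; the last
# node is the circuit output. The encoding is assumed for illustration.

class Node:
    def __init__(self, op, children=(), value=0.0):
        self.op = op                  # "+", "*", or "leaf"
        self.children = list(children)
        self.value = value            # used only by leaves
        self.val = 0.0                # computed circuit value at this node
        self.grad = 0.0               # d(output)/d(this node)

def evaluate(nodes):
    for n in nodes:                   # upward pass
        if n.op == "leaf":
            n.val = n.value
        elif n.op == "+":
            n.val = sum(c.val for c in n.children)
        else:
            n.val = 1.0
            for c in n.children:
                n.val *= c.val
    return nodes[-1].val

def differentiate(nodes):             # downward pass, run after evaluate()
    for n in nodes:
        n.grad = 0.0
    nodes[-1].grad = 1.0
    for n in reversed(nodes):
        for c in n.children:
            if n.op == "+":
                c.grad += n.grad
            elif n.op == "*":
                partial = 1.0
                for s in n.children:
                    if s is not c:
                        partial *= s.val
                c.grad += n.grad * partial
```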
Memory-efficient inference in relational domains
- In Proceedings of the Twenty-First National Conference on Artificial Intelligence, 2006
"... Propositionalization of a first-order theory followed by satisfiability testing has proved to be a remarkably efficient approach to inference in relational domains such as planning (Kautz & Selman 1996) and verification (Jackson 2000). More recently, weighted satisfiability solvers have been use ..."
Abstract
-
Cited by 48 (9 self)
- Add to MetaCart
Propositionalization of a first-order theory followed by satisfiability testing has proved to be a remarkably efficient approach to inference in relational domains such as planning (Kautz & Selman 1996) and verification (Jackson 2000). More recently, weighted satisfiability solvers have been used successfully for MPE inference in statistical relational learners (Singla & Domingos 2005). However, fully instantiating a finite first-order theory requires memory on the order of the number of constants raised to the arity of the clauses, which significantly limits the size of the domains it can be applied to. In this paper we propose LazySAT, a variation of the WalkSAT solver that avoids this blowup by taking advantage of the extreme sparseness that is typical of relational domains (i.e., only a small fraction of ground atoms are true, and most clauses are trivially satisfied). Experiments on entity resolution and planning problems show that LazySAT reduces memory usage by orders of magnitude compared to WalkSAT, while taking comparable time to run and producing the same solutions.
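The key idea, keeping only the "active" ground clauses in memory and instantiating new ones on demand as atoms are flipped, can be sketched as below. The clause representation and the ground_clauses_containing callback are assumptions for illustration, and the flip heuristic is a pure random walk for brevity rather than WalkSAT's greedy/noisy choice.

```python
import random

# Minimal sketch of lazy grounding: standard WalkSAT grounds every clause of the
# theory up front, while here ground clauses are materialised only when an atom
# they contain is flipped (or is initially true); untouched clauses are assumed
# trivially satisfied, exploiting relational sparseness. Clauses are frozensets
# of (atom, sign) literals; ground_clauses_containing is an assumed callback
# into the first-order theory, not the paper's API.

def satisfied(clause, true_atoms):
    return any((atom in true_atoms) == sign for atom, sign in clause)

def lazy_walksat(ground_clauses_containing, initial_true_atoms, max_flips):
    true_atoms = set(initial_true_atoms)
    active = set()                                  # ground clauses seen so far
    for atom in list(true_atoms):
        active |= ground_clauses_containing(atom)
    for _ in range(max_flips):
        unsat = [c for c in active if not satisfied(c, true_atoms)]
        if not unsat:
            return true_atoms                       # all active clauses satisfied
        atom, _sign = random.choice(list(random.choice(unsat)))
        true_atoms ^= {atom}                        # flip the chosen atom
        active |= ground_clauses_containing(atom)   # lazily ground its clauses
    return true_atoms
```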
Lifted Probabilistic Inference by First-Order Knowledge Compilation
- In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, 2011
"... Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indis ..."
Abstract
-
Cited by 37 (12 self)
- Add to MetaCart
Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indistinguishable groups of objects as one, help mitigate this cost. Seeking inspiration from logical inference, where lifted inference (e.g., resolution) is commonly performed, we develop a model-theoretic approach to probabilistic lifted inference. Our algorithm compiles a first-order probabilistic theory into a first-order deterministic decomposable negation normal form (d-DNNF) circuit. Compilation offers the advantage that inference is polynomial in the size of the circuit. Furthermore, by borrowing techniques from the knowledge compilation literature, our algorithm effectively exploits the logical structure (e.g., context-specific independencies) within the first-order model, which allows more computation to be done at the lifted level. An empirical comparison demonstrates the utility of the proposed approach.
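Why compilation pays off can be seen in a propositional sketch: on a smooth, deterministic, decomposable circuit, weighted model counting is one bottom-up pass, with products at AND nodes and sums at OR nodes. The node encoding and example circuit below are assumptions for illustration; the paper's circuits are first-order, and the compilation step itself is the contribution.

```python
# Sketch: weighted model counting on a (smooth) deterministic decomposable
# circuit is a single bottom-up pass, polynomial in circuit size. Nodes are
# encoded as ("lit", var, sign), ("and", children), ("or", children); this
# encoding is assumed for illustration only.

def circuit_wmc(node, weights):
    kind = node[0]
    if kind == "lit":
        _, var, sign = node
        return weights[(var, sign)]
    if kind == "and":   # decomposability: children mention disjoint variables, so multiply
        result = 1.0
        for child in node[1]:
            result *= circuit_wmc(child, weights)
        return result
    if kind == "or":    # determinism: children have disjoint models, so add
        return sum(circuit_wmc(child, weights) for child in node[1])
    raise ValueError(f"unknown node kind: {kind}")

# p OR q, written deterministically as (p AND (q OR not q)) OR (not p AND q)
circuit = ("or", [
    ("and", [("lit", "p", True), ("or", [("lit", "q", True), ("lit", "q", False)])]),
    ("and", [("lit", "p", False), ("lit", "q", True)]),
])
weights = {("p", True): 0.3, ("p", False): 0.7, ("q", True): 0.6, ("q", False): 0.4}
print(circuit_wmc(circuit, weights))   # 0.72, matching 1 - 0.7 * 0.4
```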
Exploiting Shared Correlations in Probabilistic Databases
- 2008
"... There has been a recent surge in work in probabilistic databases, propelled in large part by the huge increase in noisy data sources — from sensor data, experimental data, data from uncurated sources, and many others. There is a growing need for database management systems that can efficiently repre ..."
Abstract
-
Cited by 36 (6 self)
- Add to MetaCart
There has been a recent surge in work in probabilistic databases, propelled in large part by the huge increase in noisy data sources: sensor data, experimental data, data from uncurated sources, and many others. There is a growing need for database management systems that can efficiently represent and query such data. In this work, we show how data characteristics can be leveraged to make the query evaluation process more efficient. In particular, we exploit what we refer to as shared correlations, where the same uncertainties and correlations occur repeatedly in the data. Shared correlations occur mainly due to two reasons: (1) uncertainty and correlations usually come from general statistics and rarely vary on a tuple-to-tuple basis; (2) the query evaluation procedure itself tends to re-introduce the same correlations. Prior work has shown that the query evaluation problem on probabilistic databases is equivalent to a probabilistic inference problem on an appropriately constructed probabilistic graphical model (PGM). We leverage this by introducing a new data structure, called the random variable elimination graph (rv-elim graph), that can be built from the PGM obtained from query evaluation. We develop techniques based on bisimulation that can be used to compress the rv-elim graph by exploiting the presence of shared correlations in the PGM; the compressed rv-elim graph can then be used to run inference. We validate our methods by evaluating them empirically and show that even with a few shared correlations, significant speed-ups are possible.
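The compression step can be sketched as bottom-up hashing: two nodes of the DAG collapse when they have the same operation and their children have already collapsed to the same classes, so each shared computation is carried out once. The node encoding below is an assumed simplification of the rv-elim graph, for illustration only.

```python
# Sketch of bisimulation-style compression: nodes of a DAG (an assumed
# simplification of the rv-elim graph) merge when they carry the same label and
# their children map to the same merged classes, so identical elimination
# computations are performed only once. Nodes: (label, child_indices), listed
# in topological order with children before parents.

def compress(nodes):
    signature_to_class = {}
    node_class = []
    for label, children in nodes:
        sig = (label, tuple(node_class[c] for c in children))
        if sig not in signature_to_class:
            signature_to_class[sig] = len(signature_to_class)
        node_class.append(signature_to_class[sig])
    return node_class, len(signature_to_class)   # one computation per class

# Example: two structurally identical elimination subtrees collapse to one class.
nodes = [("factor_A", []), ("factor_A", []),
         ("sum_out_x", [0]), ("sum_out_x", [1]),
         ("multiply", [2, 3])]
print(compress(nodes))   # ([0, 0, 1, 1, 2], 3)
```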
MPE and partial inversion in lifted probabilistic variable elimination
- In Proceedings of the National Conference on Artificial Intelligence, 2006
"... It is often convenient to represent probabilistic models in a first-order fashion, using logical atoms such as random variables parameterized by logical variables. (de Salvo Braz et al., 2005), following (Poole, 2003), give a lifted variable elimination algorithm (FOVE) for computing marginal probab ..."
Abstract
-
Cited by 28 (3 self)
- Add to MetaCart
(Show Context)
It is often convenient to represent probabilistic models in a first-order fashion, using logical atoms such as random variables parameterized by logical variables. (de Salvo Braz et al., 2005), following (Poole, 2003), give a lifted variable elimination algorithm (FOVE) for computing marginal probabilities from first-order probabilistic models (belief assessment, or BA). FOVE is lifted because it works directly at the first-order level, eliminating all the instantiations of a set of atoms in a single step. Previous work could treat only restricted potential functions. There, atoms ’ instantiations cannot constrain each other: predicates can appear at most once, or logical variables must not interact across atoms. In this paper, we present two contributions. The first one is a significantly more general lifted variable elimination algorithm, FOVE-P, that covers many cases where atoms share logical variables. The second contribution is to use FOVE-P for solving the Most Probable Explanation (MPE) problem, which consists of calculating the most probable assignment of the random variables in a model. We introduce the notion of lifted assignments, a distribution of values to a set of random variables rather than to each individual one. Lifted assignments are cheaper to compute while being as useful as regular assignments over that group. Both contributions advance the theoretical understanding of lifted probabilistic inference.