Results 1–10 of 12
Lifted probabilistic inference
2012
Abstract

Cited by 19 (5 self)
Abstract. Many AI problems arising in a wide variety of fields such as machine learning, semantic web, network communication, computer vision, and robotics can elegantly be encoded and solved using probabilistic graphical models. Often, however, we face inference problems with symmetries and redundancies that are only implicitly captured in the graph structure and, hence, not exploitable by efficient inference approaches. A prominent example is probabilistic logical models, which tackle a long-standing goal of AI, namely unifying first-order logic (capturing regularities and symmetries) and probability (capturing uncertainty). Although they often encode large, complex models using only a few rules and, hence, symmetries and redundancies abound, inference in them was originally still performed at the propositional representation level and did not exploit symmetries. This paper is intended to give a (not necessarily complete) overview of and invitation to the emerging field of lifted probabilistic inference: inference techniques that exploit these symmetries in graphical models in order to speed up inference, ultimately by orders of magnitude.
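The core idea this survey describes, exploiting symmetry to avoid redundant computation, can be illustrated with a minimal sketch (an illustration, not taken from the paper): for a model of n exchangeable binary variables whose joint factor depends only on how many variables are set to 1, the partition function can be computed by counting states per symmetry class instead of enumerating all 2^n ground states.

```python
from itertools import product
from math import comb

def z_ground(n, f):
    """Brute-force partition function: enumerate all 2^n ground states.
    f(k) is the factor value for a state with k variables set to 1."""
    return sum(f(sum(state)) for state in product((0, 1), repeat=n))

def z_lifted(n, f):
    """Lifted computation: all states with the same count k are
    interchangeable, so weight f(k) by the number of such states."""
    return sum(comb(n, k) * f(k) for k in range(n + 1))

f = lambda k: 2.0 ** k                       # hypothetical factor favouring 1s
assert z_ground(12, f) == z_lifted(12, f)    # same answer: 2^12 terms vs 13
```

The lifted sum touches n + 1 symmetry classes rather than 2^n states, which is the "orders of magnitude" speed-up the abstract refers to, in its simplest possible form.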
On the complexity and approximation of binary evidence in lifted inference
In Advances in Neural Information Processing Systems 26 (NIPS), 2013
Abstract

Cited by 8 (3 self)
Lifted inference algorithms exploit symmetries in probabilistic models to speed up inference. They show impressive performance when calculating unconditional probabilities in relational models, but often resort to non-lifted inference when computing conditional probabilities. The reason is that conditioning on evidence breaks many of the model’s symmetries, which can preempt standard lifting techniques. Recent theoretical results show, for example, that conditioning on evidence which corresponds to binary relations is #P-hard, suggesting that no lifting is to be expected in the worst case. In this paper, we balance this negative result by identifying the Boolean rank of the evidence as a key parameter for characterizing the complexity of conditioning in lifted inference. In particular, we show that conditioning on binary evidence with bounded Boolean rank is efficient. This opens up the possibility of approximating evidence by a low-rank Boolean matrix factorization, which we investigate both theoretically and empirically.
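As a small illustration of the Boolean-rank parameter (not the paper's algorithm): a binary evidence matrix has Boolean rank at most 1 exactly when it is the Boolean outer product of two binary vectors, which is equivalent to all of its nonzero rows being identical. A minimal check, assuming the matrix is given as a list of 0/1 rows:

```python
def boolean_rank_le_1(M):
    """True iff binary matrix M has Boolean rank <= 1, i.e.
    M[i][j] = u[i] AND v[j] for some binary vectors u, v.
    Equivalently: all nonzero rows of M are identical."""
    nonzero_rows = [row for row in M if any(row)]
    return all(row == nonzero_rows[0] for row in nonzero_rows)

# Rank-1 evidence pattern: every object with evidence relates to the
# same set of objects.
assert boolean_rank_le_1([[1, 0, 1], [0, 0, 0], [1, 0, 1]])
assert not boolean_rank_le_1([[1, 0], [0, 1]])  # identity has Boolean rank 2
```

Evidence of Boolean rank r decomposes into r such rank-1 patterns combined by OR; the paper's result is that conditioning stays efficient when r is bounded, and low-rank Boolean factorization approximates evidence that is not.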
Lifted Relax, Compensate and then Recover: From Approximate to Exact Lifted Probabilistic Inference
Abstract

Cited by 6 (1 self)
We propose an approach to lifted approximate inference for first-order probabilistic models, such as Markov logic networks. It is based on performing exact lifted inference in a simplified first-order model, which is found by relaxing first-order constraints, and then compensating for the relaxation. These simplified models can be incrementally improved by carefully recovering constraints that have been relaxed, also at the first-order level. This leads to a spectrum of approximations, with lifted belief propagation on one end and exact lifted inference on the other. We discuss how relaxation, compensation, and recovery can be performed, all at the first-order level, and show empirically that our approach substantially improves on the approximations of both propositional solvers and lifted belief propagation.
Reduce and Re-Lift: Bootstrapped Lifted Likelihood Maximization for MAP
Abstract

Cited by 3 (0 self)
By handling whole sets of indistinguishable objects together, lifted belief propagation approaches have rendered large, previously intractable probabilistic inference problems quickly solvable. In this paper, we show that Kumar and Zilberstein’s likelihood maximization (LM) approach to MAP inference is liftable, too, and actually provides additional structure for optimization. Specifically, it has been recognized that some pseudo-marginals may converge quickly, turning intuitively into pseudo-evidence. This additional evidence typically changes the structure of the lifted network: it may expand or reduce it. The current lifted network, however, can be viewed as an upper bound on the size of the lifted network required to finish likelihood maximization. Consequently, we re-lift the network only if the pseudo-evidence yields a reduced network, which can be computed efficiently on the current lifted network. Our experimental results on Ising models, image segmentation, and relational entity resolution demonstrate that this bootstrapped LM via “reduce and re-lift” finds MAP assignments comparable to those found by the original LM approach, but in a fraction of the time.
Efficient Lifting of MAP LP Relaxations Using k-Locality
Abstract

Cited by 3 (1 self)
Inference in large-scale graphical models is an important task in many domains, in particular for probabilistic relational models (e.g., Markov logic networks). Such models often exhibit considerable symmetry, and it is a challenge to devise algorithms that exploit this symmetry to speed up inference. Here we address this task in the context of the MAP inference problem and its linear programming relaxations. We show that symmetry in these problems can be discovered using an elegant algorithm known as the k-dimensional Weisfeiler-Lehman (k-WL) algorithm. We run k-WL on the original graphical model, and not on the far larger graph of the linear program (LP), as proposed in earlier work in the field. Furthermore, the algorithm is polynomial and thus far more practical than previous approaches, which rely on orbit partitions that are GI-complete to find. The fact that k-WL can be used in this manner follows from the recently introduced notion of k-local LPs and their relation to Sherali-Adams relaxations of graph automorphisms. Finally, for relational models such as Markov logic networks, the benefits of our approach are even more dramatic, as we can discover symmetries in the original domain graph, as opposed to running lifting on the much larger grounded model.
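The k = 1 special case of the k-WL algorithm, plain colour refinement, is easy to sketch (a simplified illustration, not the paper's k-dimensional variant): nodes that end up with the same colour are candidates for being grouped by a lifted solver.

```python
def wl_colors(adj, rounds=None):
    """1-dimensional Weisfeiler-Lehman colour refinement.
    adj: dict mapping node -> list of neighbours.
    Returns a stable colouring; nodes sharing a colour behave
    identically under this (incomplete) symmetry test."""
    colors = {v: 0 for v in adj}                    # start with one colour
    for _ in range(rounds or len(adj)):
        # new colour = old colour plus multiset of neighbour colours
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == colors:                           # refinement is stable
            break
        colors = new
    return colors

# 4-cycle: every node is symmetric, so one colour class survives.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert len(set(wl_colors(c4).values())) == 1
```

Refinement runs in polynomial time, which is the practical advantage the abstract contrasts with GI-complete orbit-partition computation; k-WL refines colours over k-tuples of nodes in the same spirit.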
Lifted Graphical Models: A Survey
, 2004
Abstract

Cited by 2 (0 self)
Lifted graphical models provide a language for expressing dependencies between different types of entities, their attributes, and their diverse relations, as well as techniques for probabilistic reasoning in such multi-relational domains. In this survey, we review a general form for a lifted graphical model, a parfactor graph, and show how a number of existing statistical relational representations map to this formalism. We discuss inference algorithms, including lifted inference algorithms, that efficiently compute the answers to probabilistic queries over such models. We also review work on learning lifted graphical models from data. There is a growing need for statistical relational models (whether they go by that name or another), as we are inundated with data which is a mix of structured and unstructured, with entities and relations extracted in a noisy manner from text, and with the need to reason effectively with this data. We hope that this synthesis of ideas from many different research groups will provide an accessible starting point for new researchers in this expanding field.
Approximate Lifting Techniques for Belief Propagation
Abstract

Cited by 2 (0 self)
Many AI applications need to explicitly represent relational structure as well as handle uncertainty. First-order probabilistic models combine the power of logic and probability to deal with such domains. A naive approach to inference in these models is to propositionalize the whole theory and carry out the inference on the ground network. Lifted inference techniques (such as lifted belief propagation; Singla and Domingos 2008) provide a more scalable approach to inference by grouping together objects which behave identically. In many cases, constructing the lifted network can itself be quite costly. In addition, the exact lifted network is often very close in size to the fully propositionalized model. To overcome these problems, we present approximate lifted inference, which groups together similar but distinguishable objects and treats them as if they were identical. Early stopping terminates the execution of the lifted network construction at an early stage, resulting in a coarser network. Noise-tolerant hypercubes allow for marginal errors in the representation of the lifted network itself. Both of our algorithms can significantly speed up the process of lifted network construction as well as result in much smaller models. The coarseness of the approximation can be adjusted depending on the accuracy required, and we can bound the resulting error. Extensive evaluation on six domains demonstrates great efficiency gains with only minor (or no) loss in accuracy.
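The exact-versus-approximate grouping trade-off can be conveyed with a hypothetical helper (the paper's actual representation uses noise-tolerant hypercubes; this greedy Hamming-distance grouping is only a simplified stand-in): with tolerance 0 only truly identical objects merge, while a positive tolerance merges similar but distinguishable objects into coarser groups.

```python
def approx_groups(signatures, tol=0):
    """Greedily group objects by their binary evidence signatures.
    tol = 0: exact lifting (only identical signatures merge).
    tol > 0: noise-tolerant grouping; objects whose signatures differ
    in at most `tol` positions share a group."""
    groups = []   # list of (representative_signature, member_list)
    for obj, sig in signatures.items():
        for rep, members in groups:
            if sum(a != b for a, b in zip(rep, sig)) <= tol:
                members.append(obj)   # close enough to this group's rep
                break
        else:
            groups.append((sig, [obj]))
    return [members for _, members in groups]

sigs = {"a": (1, 1, 0), "b": (1, 1, 0), "c": (1, 0, 0), "d": (0, 0, 1)}
assert approx_groups(sigs) == [["a", "b"], ["c"], ["d"]]        # exact
assert approx_groups(sigs, tol=1) == [["a", "b", "c"], ["d"]]   # coarser
```

Raising the tolerance shrinks the lifted network at the cost of treating distinguishable objects identically, which is the adjustable accuracy/efficiency dial the abstract describes.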
Dagstuhl Seminar Proceedings 10302: Learning paradigms in dynamic environments
Abstract
We discuss the purpose of neural-symbolic integration, including its principles, mechanisms, and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning, and automated reasoning, and list some of the challenges for the area of neural-symbolic computation in achieving the promise of an effective integration of robust learning and expressive reasoning under uncertainty.

Overview. The study of human behaviour is an important part of computer science, artificial intelligence (AI), neural computation, cognitive science, philosophy, psychology, and other areas. Among the most prominent tools in the modelling of behaviour are computational-logic systems (classical logic, non-monotonic logic, modal and temporal logic) and connectionist models of cognition (feed-forward and recurrent networks, symmetric and deep networks, self-organising networks). Recent studies in cognitive science, artificial intelligence, and evolutionary psychology have produced a number of cognitive models of reasoning, learning, and language that are underpinned by computation
Nonparametric Domain Approximation for Scalable Gibbs Sampling in MLNs
Abstract
MLNs utilize relational structures that are ubiquitous in real-world situations to represent large probabilistic graphical models compactly. However, as is now well known, inference complexity is one of the main bottlenecks in MLNs. Recently, several approaches have been proposed that exploit approximate symmetries in the MLN to reduce inference complexity. These approaches approximate large domains containing many objects with much smaller domains of meta-objects (or cluster centers), so that inference is considerably faster and more scalable. However, a drawback in most of these approaches is that it is typically very hard to tune the parameters (e.g., the number of clusters) such that inference is both efficient and accurate. Here, we propose a novel nonparametric approach that trades off solution quality for efficiency to automatically learn the optimal domain approximation. Further, we show how to perform Gibbs sampling effectively in a domain-approximated MLN by adapting the sampler according to the approximation. Our results on several benchmarks show that our approach is scalable and accurate and converges faster than existing methods.
Initial Empirical Evaluation of Anytime Lifted Belief Propagation
Abstract
Lifted first-order probabilistic inference, which manipulates first-order representations of graphical models directly, has been receiving increasing attention. Most lifted inference methods to date need to process the entire given model before they can provide information on a query’s answer, even if most of it is determined by a relatively small, local portion of the model. Anytime Lifted Belief Propagation (ALBP) performs lifted belief propagation but, instead of first building a supernode network based on the entire model, incrementally processes the model on an as-needed basis, keeping a guaranteed bound on the query’s answer the entire time. This allows a user either to detect when the answer has already been determined, before actually processing the entire model, or to choose to stop when the bound is narrow enough for the application at hand. Moreover, the bounds can be made to converge to the exact solution when inference has processed the entire model. This paper shows some preliminary results of an implementation of ALBP, illustrating how bounds can sometimes be narrowed much sooner than it would take to obtain the exact answer.