Results 1–10 of 10
Lifted probabilistic inference
2012
Cited by 18 (5 self)
Many AI problems arising in a wide variety of fields such as machine learning, semantic web, network communication, computer vision, and robotics can elegantly be encoded and solved using probabilistic graphical models. Often, however, we face inference problems with symmetries and redundancies that are only implicitly captured in the graph structure and hence not exploitable by efficient inference approaches. A prominent example is probabilistic logical models, which tackle a long-standing goal of AI, namely unifying first-order logic (capturing regularities and symmetries) and probability (capturing uncertainty). Although they often encode large, complex models using only a few rules, so that symmetries and redundancies abound, inference in them was originally still performed at the propositional representation level and did not exploit symmetries. This paper is intended to give a (not necessarily complete) overview of, and invitation to, the emerging field of lifted probabilistic inference: inference techniques that exploit these symmetries in graphical models in order to speed up inference, ultimately by orders of magnitude.
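Editorial aside: the counting idea behind lifted inference can be made concrete with a toy exchangeable model (not taken from the paper; the potential `f` is made up). When the joint potential depends only on how many variables are on, all assignments sharing a count can be processed as one group:

```python
from itertools import product
from math import comb

def f(k):
    # Hypothetical symmetric potential: depends only on the number k
    # of binary variables that are on (a fully exchangeable model).
    return 2.0 ** k

def z_ground(n):
    # Ground inference: enumerate all 2**n joint states.
    return sum(f(sum(x)) for x in product([0, 1], repeat=n))

def z_lifted(n):
    # Lifted inference: the comb(n, k) states with k ones share one
    # potential value, so count them instead of enumerating them.
    return sum(comb(n, k) * f(k) for k in range(n + 1))
```

For n = 12 the lifted sum touches 13 terms rather than 4096 joint states, and by the binomial theorem both evaluate to 3^12 = 531441.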
On the complexity and approximation of binary evidence in lifted inference
In Advances in Neural Information Processing Systems 26 (NIPS)
Cited by 7 (3 self)
Lifted inference algorithms exploit symmetries in probabilistic models to speed up inference. They show impressive performance when calculating unconditional probabilities in relational models, but often resort to non-lifted inference when computing conditional probabilities. The reason is that conditioning on evidence breaks many of the model’s symmetries, which can preempt standard lifting techniques. Recent theoretical results show, for example, that conditioning on evidence which corresponds to binary relations is #P-hard, suggesting that no lifting is to be expected in the worst case. In this paper, we balance this negative result by identifying the Boolean rank of the evidence as a key parameter for characterizing the complexity of conditioning in lifted inference. In particular, we show that conditioning on binary evidence with bounded Boolean rank is efficient. This opens up the possibility of approximating evidence by a low-rank Boolean matrix factorization, which we investigate both theoretically and empirically.
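Editorial aside: Boolean rank can be illustrated with a tiny sketch (the relation and its entries are made up, not from the paper). Binary evidence forms a 0/1 matrix M; its Boolean rank is the smallest r such that M is the Boolean product of an n-by-r and an r-by-m matrix. A single rectangular "block" of evidence has Boolean rank 1:

```python
def boolean_product(A, B):
    # Boolean matrix product: (A ∘ B)[i][j] = OR_k (A[i][k] AND B[k][j]).
    n, r, m = len(A), len(B), len(B[0])
    return [[int(any(A[i][k] and B[k][j] for k in range(r)))
             for j in range(m)] for i in range(n)]

# Hypothetical binary evidence on a relation likes(Person, Movie):
# persons {0, 1} like exactly the movies {0, 2}, a single block.
a = [[1], [1], [0]]        # which persons belong to the block (n x 1)
b = [[1, 0, 1]]            # which movies belong to the block (1 x m)
M = boolean_product(a, b)  # Boolean rank 1: M[i][j] = a[i] AND b[j]
```

Here `M` comes out as `[[1, 0, 1], [1, 0, 1], [0, 0, 0]]`; evidence whose matrix decomposes into few such blocks has low Boolean rank and, per the paper, remains efficient to condition on.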
Lifted Relax, Compensate and then Recover: From Approximate to Exact Lifted Probabilistic Inference
Cited by 6 (1 self)
We propose an approach to lifted approximate inference for first-order probabilistic models, such as Markov logic networks. It is based on performing exact lifted inference in a simplified first-order model, which is found by relaxing first-order constraints, and then compensating for the relaxation. These simplified models can be incrementally improved by carefully recovering constraints that have been relaxed, also at the first-order level. This leads to a spectrum of approximations, with lifted belief propagation on one end and exact lifted inference on the other. We discuss how relaxation, compensation, and recovery can be performed, all at the first-order level, and show empirically that our approach substantially improves on the approximations of both propositional solvers and lifted belief propagation.
Efficient Lifting of MAP LP Relaxations Using k-Locality
Cited by 3 (1 self)
Inference in large-scale graphical models is an important task in many domains, and in particular for probabilistic relational models (e.g., Markov logic networks). Such models often exhibit considerable symmetry, and it is a challenge to devise algorithms that exploit this symmetry to speed up inference. Here we address this task in the context of the MAP inference problem and its linear programming relaxations. We show that symmetry in these problems can be discovered using an elegant algorithm known as the k-dimensional Weisfeiler-Lehman (k-WL) algorithm. We run k-WL on the original graphical model, and not on the far larger graph of the linear program (LP) as proposed in earlier work in the field. Furthermore, the algorithm is polynomial and thus far more practical than previous approaches, which rely on orbit partitions that are GI-complete to find. The fact that k-WL can be used in this manner follows from the recently introduced notion of k-local LPs and their relation to Sherali-Adams relaxations of graph automorphisms. Finally, for relational models such as Markov logic networks, the benefits of our approach are even more dramatic, as we can discover symmetries in the original domain graph, as opposed to running lifting on the much larger grounded model.
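Editorial aside: the k = 1 special case of WL, plain color refinement, is easy to sketch. This illustrates the family of algorithms the paper builds on, not the paper's own k-WL implementation: nodes start with one color and are repeatedly split by the multiset of their neighbors' colors until a fixed point; the final color classes are candidate symmetry groups.

```python
def color_refinement(adj):
    # 1-dimensional Weisfeiler-Lehman: refine node colors by the
    # sorted multiset of neighbor colors until nothing changes.
    colors = {v: 0 for v in adj}
    while True:
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == colors:
            return colors
        colors = new

# A 6-cycle: every node looks alike, so one color class survives.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

# A path on 4 nodes: endpoints and interior nodes are distinguished.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

Running the sketch gives 1 color class on the cycle and 2 on the path, matching the intuition that refinement groups nodes that could be interchanged by a symmetry.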
Reduce and Re-Lift: Bootstrapped Lifted Likelihood Maximization for MAP
Cited by 2 (0 self)
By handling whole sets of indistinguishable objects together, lifted belief propagation approaches have rendered large, previously intractable probabilistic inference problems quickly solvable. In this paper, we show that Kumar and Zilberstein’s likelihood maximization (LM) approach to MAP inference is liftable, too, and actually provides additional structure for optimization. Specifically, it has been recognized that some pseudo-marginals may converge quickly, intuitively turning into pseudo-evidence. This additional evidence typically changes the structure of the lifted network: it may expand or reduce it. The current lifted network, however, can be viewed as an upper bound on the size of the lifted network required to finish likelihood maximization. Consequently, we re-lift the network only if the pseudo-evidence yields a reduced network, which can efficiently be computed on the current lifted network. Our experimental results on Ising models, image segmentation, and relational entity resolution demonstrate that this bootstrapped LM via “reduce and re-lift” finds MAP assignments comparable to those found by the original LM approach, but in a fraction of the time.
Approximate Lifting Techniques for Belief Propagation
Cited by 1 (0 self)
Many AI applications need to explicitly represent relational structure as well as handle uncertainty. First-order probabilistic models combine the power of logic and probability to deal with such domains. A naive approach to inference in these models is to propositionalize the whole theory and carry out the inference on the ground network. Lifted inference techniques (such as lifted belief propagation; Singla and Domingos 2008) provide a more scalable approach to inference by grouping together objects which behave identically. In many cases, constructing the lifted network can itself be quite costly. In addition, the exact lifted network is often very close in size to the fully propositionalized model. To overcome these problems, we present approximate lifted inference, which groups together similar but distinguishable objects and treats them as if they were identical. Early stopping terminates the execution of the lifted network construction at an early stage, resulting in a coarser network. Noise-tolerant hypercubes allow for marginal errors in the representation of the lifted network itself. Both of our algorithms can significantly speed up the process of lifted network construction as well as result in much smaller models. The coarseness of the approximation can be adjusted depending on the accuracy required, and we can bound the resulting error. Extensive evaluation on six domains demonstrates great efficiency gains with only minor (or no) loss in accuracy.
Initial Empirical Evaluation of Anytime Lifted Belief Propagation
Lifted first-order probabilistic inference, which manipulates first-order representations of graphical models directly, has been receiving increasing attention. Most lifted inference methods to date need to process the entire given model before they can provide information on a query’s answer, even if most of it is determined by a relatively small, local portion of the model. Anytime Lifted Belief Propagation (ALBP) performs Lifted Belief Propagation but, instead of first building a supernode network based on the entire model, incrementally processes the model on an as-needed basis, keeping a guaranteed bound on the query’s answer the entire time. This allows a user either to detect when the answer has already been determined, before actually processing the entire model, or to choose to stop when the bound is narrow enough for the application at hand. Moreover, the bounds can be made to converge to the exact solution once inference has processed the entire model. This paper shows some preliminary results of an implementation of ALBP, illustrating how bounds can sometimes be narrowed much sooner than it would take to obtain the exact answer.
Neurons and Symbols: A Manifesto
2010
Artur S. d’Avila Garcez. We discuss the purpose of neural-symbolic integration, including its principles, mechanisms, and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning, and automated reasoning, and list some of the challenges for the area of neural-symbolic computation in achieving the promise of effective integration of robust learning and expressive reasoning under uncertainty.
Lifted Tree-Reweighted Variational Inference
Hung Hai Bui, Natural Language Understanding Lab, Nuance Communications
We analyze variational inference for highly symmetric graphical models such as those arising from first-order probabilistic models. We first show that for these graphical models, the tree-reweighted variational objective lends itself to a compact lifted formulation which can be solved much more efficiently than the standard TRW formulation for the ground graphical model. Compared to earlier work on lifted belief propagation, our formulation leads to a convex optimization problem for lifted marginal inference and provides an upper bound on the partition function. We provide two approaches for improving the lifted TRW upper bound. The first is a method for efficiently computing maximum spanning trees in highly symmetric graphs, which can be used to optimize the TRW edge appearance probabilities. The second is a method for tightening the relaxation of the marginal polytope using lifted cycle inequalities and novel exchangeable cluster consistency constraints.
Lifted Graphical Models: A Survey
2004
Lifted graphical models provide a language for expressing dependencies between different types of entities, their attributes, and their diverse relations, as well as techniques for probabilistic reasoning in such multi-relational domains. In this survey, we review a general form for a lifted graphical model, a parfactor graph, and show how a number of existing statistical relational representations map to this formalism. We discuss inference algorithms, including lifted inference algorithms, that efficiently compute the answers to probabilistic queries over such models. We also review work in learning lifted graphical models from data. There is a growing need for statistical relational models (whether they go by that name or another), as we are inundated with data which is a mix of structured and unstructured, with entities and relations extracted in a noisy manner from text, and with the need to reason effectively with this data. We hope that this synthesis of ideas from many different research groups will provide an accessible starting point for new researchers in this expanding field.
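Editorial aside: a parfactor compactly stands for all groundings of its atoms over a domain. A minimal grounding sketch (the predicate names and the domain below are illustrative, not from the survey):

```python
from itertools import product

def ground_parfactor(atom, num_logvars, domain):
    # Enumerate every grounding of `atom` by substituting each
    # combination of domain constants for its logical variables.
    return [atom.format(*sub)
            for sub in product(domain, repeat=num_logvars)]

# Hypothetical domain and atoms in the style of Markov logic examples.
people = ["alice", "bob"]
unary = ground_parfactor("smokes({})", 1, people)
binary = ground_parfactor("friends({},{})", 2, people)
```

Here `unary` is `["smokes(alice)", "smokes(bob)"]` and `binary` has 4 groundings; a single parfactor with a shared potential thus represents a whole family of ground factors, which is exactly the redundancy lifted inference exploits.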