Results 1-10 of 18
Symmetry-aware marginal density estimation
 In Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI), 2013
Abstract

Cited by 4 (1 self)
The Rao-Blackwell theorem is utilized to analyze and improve the scalability of inference in large probabilistic models that exhibit symmetries. A novel marginal density estimator is introduced and shown both analytically and empirically to outperform standard estimators by several orders of magnitude. The developed theory and algorithms apply to a broad class of probabilistic models including statistical relational models considered not susceptible to lifted probabilistic inference.
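The Rao-Blackwell idea this abstract builds on can be illustrated on a toy model (the model, weights, and sample size below are hypothetical illustrations, not taken from the paper): conditioning a Monte Carlo estimator on part of the state and integrating the rest analytically keeps the estimator unbiased while reducing its variance.

```python
import random

random.seed(0)

# Hypothetical toy model: Y ~ Bernoulli(0.5),
# X | Y=1 ~ Bernoulli(0.8), X | Y=0 ~ Bernoulli(0.2).
# Target marginal: P(X = 1) = 0.5 * 0.8 + 0.5 * 0.2 = 0.5.

N = 20000
naive_samples = []   # indicator estimator: 1[X = 1]
rb_samples = []      # Rao-Blackwellized: E[1[X = 1] | Y] = P(X = 1 | Y)

for _ in range(N):
    y = random.random() < 0.5
    p_x_given_y = 0.8 if y else 0.2
    x = random.random() < p_x_given_y
    naive_samples.append(1.0 if x else 0.0)
    rb_samples.append(p_x_given_y)   # condition on Y, integrate X analytically

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((v - m) ** 2 for v in xs) / len(xs)

print(mean(naive_samples), mean(rb_samples))   # both near 0.5
print(var(naive_samples), var(rb_samples))     # RB variance smaller (~0.09 vs ~0.25)
```

Both estimators are unbiased for the marginal, but the per-sample variance of the Rao-Blackwellized version is strictly lower whenever the conditional expectation is non-constant, which is the mechanism behind the variance gains the abstract describes.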
Efficient Lifting of MAP LP Relaxations Using k-Locality
Abstract

Cited by 3 (1 self)
Inference in large-scale graphical models is an important task in many domains, and in particular for probabilistic relational models (e.g., Markov logic networks). Such models often exhibit considerable symmetry, and it is a challenge to devise algorithms that exploit this symmetry to speed up inference. Here we address this task in the context of the MAP inference problem and its linear programming relaxations. We show that symmetry in these problems can be discovered using an elegant algorithm known as the k-dimensional Weisfeiler-Lehman (k-WL) algorithm. We run k-WL on the original graphical model, and not on the far larger graph of the linear program (LP) as proposed in earlier work in the field. Furthermore, the algorithm is polynomial and thus far more practical than previous approaches which rely on orbit partitions that are GI-complete to find. The fact that k-WL can be used in this manner follows from the recently introduced notion of k-local LPs and their relation to Sherali-Adams relaxations of graph automorphisms. Finally, for relational models such as Markov logic networks, the benefits of our approach are even more dramatic, as we can discover symmetries in the original domain graph, as opposed to running lifting on the much larger grounded model.
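The k-dimensional Weisfeiler-Lehman algorithm mentioned in this abstract generalizes the classic 1-dimensional version, color refinement, which is easy to sketch (this is a generic illustration of the k = 1 case, not the paper's k-WL implementation):

```python
# 1-dimensional Weisfeiler-Lehman (color refinement): iteratively refine node
# colors by the multiset of neighbor colors until the partition stabilizes.
# The stable coloring is an equitable partition of the graph's nodes.

def color_refinement(adj):
    """adj: dict node -> list of neighbors. Returns a stable coloring."""
    colors = {v: 0 for v in adj}   # start with a single color class
    while True:
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # relabel distinct signatures with small integers
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:
            return colors
        colors = new_colors

# A 6-cycle: every node looks the same, so refinement keeps one color class.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(color_refinement(cycle))

# A path on 4 nodes: endpoints and interior nodes separate into two classes.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(color_refinement(path))
```

Nodes sharing a stable color are candidates for being merged during lifting; running this on the model graph rather than on the LP graph is the efficiency point the abstract makes.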
Tractability through exchangeability: A new perspective on efficient probabilistic inference
 In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI), 2014
Skolemization for weighted first-order model counting. arXiv preprint arXiv:1312.5378, 2013
New Rules for Domain Independent Lifted MAP Inference
Abstract

Cited by 2 (2 self)
Lifted inference algorithms for probabilistic first-order logic frameworks such as Markov logic networks (MLNs) have received significant attention in recent years. These algorithms use so-called lifting rules to identify symmetries in the first-order representation and reduce the inference problem over a large probabilistic model to an inference problem over a much smaller model. In this paper, we present two new lifting rules, which enable fast MAP inference in a large class of MLNs. Our first rule uses the concept of a single-occurrence equivalence class of logical variables, which we define in the paper. The rule states that the MAP assignment over an MLN can be recovered from a much smaller MLN, in which each logical variable in each single-occurrence equivalence class is replaced by a constant (i.e., an object in the domain of the variable). Our second rule states that we can safely remove a subset of formulas from the MLN if all equivalence classes of variables in the remaining MLN are single occurrence and all formulas in the subset are tautologies (i.e., evaluate to true) at extremes (i.e., assignments with identical truth values for all groundings of a predicate). We prove that our two new rules are sound and demonstrate via a detailed experimental evaluation that our approach is superior in terms of scalability and MAP solution quality to state-of-the-art approaches.
Exchangeable variable models
 In Proceedings of ICML, 2014
Abstract

Cited by 2 (0 self)
A sequence of random variables is exchangeable if its joint distribution is invariant under variable permutations. We introduce exchangeable variable models (EVMs) as a novel class of probabilistic models whose basic building blocks are partially exchangeable sequences, a generalization of exchangeable sequences. We prove that a family of tractable EVMs is optimal under zero-one loss for a large class of functions, including parity and threshold functions, and strictly subsumes existing tractable independence-based model families. Extensive experiments show that EVMs outperform state-of-the-art classifiers such as SVMs and probabilistic models which are solely based on independence assumptions.
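The defining invariance in this abstract can be checked by brute force on a small finite distribution (the distributions below are made-up examples, not from the paper): a joint over n variables is exchangeable iff its probabilities are unchanged by every permutation of the variable positions, which for binary variables means the probability depends only on the number of ones.

```python
from itertools import permutations, product

def is_exchangeable(p, n):
    """p: dict mapping length-n binary tuples to probabilities.
    Exchangeable iff p is invariant under every permutation of positions."""
    for perm in permutations(range(n)):
        for x in product((0, 1), repeat=n):
            x_perm = tuple(x[i] for i in perm)
            if abs(p[x] - p[x_perm]) > 1e-12:
                return False
    return True

n = 3
# A distribution that depends only on the number of ones is exchangeable.
weights = {0: 1.0, 1: 2.0, 2: 2.0, 3: 1.0}
z = sum(weights[sum(x)] for x in product((0, 1), repeat=n))
p_sym = {x: weights[sum(x)] / z for x in product((0, 1), repeat=n)}
print(is_exchangeable(p_sym, n))    # True

# Breaking the symmetry between two outcomes destroys exchangeability.
p_asym = dict(p_sym)
p_asym[(1, 0, 0)] += 0.1
p_asym[(0, 0, 1)] -= 0.1
print(is_exchangeable(p_asym, n))   # False
```

The exchangeable case needs only n + 1 parameters (one per count of ones) instead of 2^n, which is the source of the tractability the abstract claims.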
Reduce and Re-lift: Bootstrapped Lifted Likelihood Maximization for MAP
Abstract

Cited by 2 (0 self)
By handling whole sets of indistinguishable objects together, lifted belief propagation approaches have rendered large, previously intractable, probabilistic inference problems quickly solvable. In this paper, we show that Kumar and Zilberstein’s likelihood maximization (LM) approach to MAP inference is liftable, too, and actually provides additional structure for optimization. Specifically, it has been recognized that some pseudo-marginals may converge quickly, turning intuitively into pseudo-evidence. This additional evidence typically changes the structure of the lifted network: it may expand or reduce it. The current lifted network, however, can be viewed as an upper bound on the size of the lifted network required to finish likelihood maximization. Consequently, we re-lift the network only if the pseudo-evidence yields a reduced network, which can be computed efficiently on the current lifted network. Our experimental results on Ising models, image segmentation and relational entity resolution demonstrate that this bootstrapped LM via “reduce and re-lift” finds MAP assignments comparable to those found by the original LM approach, but in a fraction of the time.
Understanding the Complexity of Lifted Inference and Asymmetric Weighted Model Counting
Abstract

Cited by 1 (1 self)
In this paper we study lifted inference for the Weighted First-Order Model Counting problem (WFOMC), which counts the assignments that satisfy a given sentence in first-order logic (FOL); it has applications in Statistical Relational Learning (SRL) and Probabilistic Databases (PDB). We present several results. First, we describe a lifted inference algorithm that generalizes prior approaches in SRL and PDB. Second, we provide a novel dichotomy result for a nontrivial fragment of FO CNF sentences, showing that for each sentence the WFOMC problem is either in PTIME or #P-hard in the size of the input domain; we prove that in the first case our algorithm solves the WFOMC problem in PTIME, and in the second case it fails. Third, we present several properties of the algorithm. Finally, we discuss limitations of lifted inference for symmetric probabilistic databases (where the weights of ground literals depend only on the relation name, and not on the constants of the domain), and prove the impossibility of a dichotomy result for the complexity of probabilistic inference for the entire language FOL.
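The WFOMC problem this abstract studies can be made concrete on a minimal sentence (the sentence and the literal weights below are hypothetical, and the lifted routine is a textbook factorization, not the paper's algorithm): for ∀x. R(x) ∨ S(x) with symmetric weights, the domain elements are interchangeable, so the weighted count over 2^(2n) ground assignments collapses to a single per-element sum raised to the power n.

```python
from itertools import product

# WFOMC sketch for the sentence  forall x: R(x) v S(x)  over a domain of size n.
# Hypothetical symmetric literal weights: w(R)=2, w(~R)=1, w(S)=3, w(~S)=1.
wR, wnR, wS, wnS = 2.0, 1.0, 3.0, 1.0

def ground_wfomc(n):
    """Brute force over all 2^(2n) truth assignments to the ground atoms."""
    total = 0.0
    for r in product((0, 1), repeat=n):
        for s in product((0, 1), repeat=n):
            if all(r[i] or s[i] for i in range(n)):   # sentence satisfied
                w = 1.0
                for i in range(n):
                    w *= (wR if r[i] else wnR) * (wS if s[i] else wnS)
                total += w
    return total

def lifted_wfomc(n):
    """Elements are interchangeable, so the count factorizes: per element,
    sum the weights of the three satisfying (R, S) assignments."""
    per_element = wR * wS + wR * wnS + wnR * wS
    return per_element ** n

for n in (1, 2, 3, 4):
    print(n, ground_wfomc(n), lifted_wfomc(n))   # identical values
```

The ground count is exponential in the domain size while the lifted count is a closed form; the dichotomy result in the abstract asks for which sentences such polynomial-time lifted computation is possible at all.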
Lifted Inference via k-Locality
Abstract

Cited by 1 (0 self)
Lifted inference approaches exploit symmetries of a graphical model. So far, only the automorphism group of the graphical model has been proposed to formalize the symmetries used. We show that this is only the GI-complete tip of a hierarchy and that the amount of lifting depends on how local the inference algorithm is: if the LP relaxation introduces constraints involving features over at most k variables, then the amount of lifting decreases monotonically with k. This induces a hierarchy of lifted inference algorithms, with lifted BP and MPLP at the bottom and exact inference methods at the top. In between, there are relaxations whose liftings are equitable partitions of intermediate coarseness, which all can be computed in polynomial time.
ACKNOWLEDGMENTS, 2015
Abstract
I wish to thank my advisor Vibhav Gogate, without whom this dissertation would not have been possible. Vibhav has been a great advisor who inspired me to work hard and maintain high standards of research by setting a fine example. He has shown me how to think and write with clarity, and always seemed to have an idea whenever I have been stuck with seemingly unsolvable problems. Best of all, he has always been patient, friendly and approachable. I would certainly hope to emulate some of his traits as I embark on my own academic career. Next, I wish to thank the members of my dissertation committee, Dr. Gopal Gupta, Dr. Sanda Harabagiu, Dr. Ray Mooney and Dr. Vincent Ng, for the time they have taken not only for my dissertation but also in my job search process. Particularly, Dr. Mooney was kind enough to serve on my committee from UT Austin, and I am grateful for this. I also wish to thank Somdeb, Chen, Dr. Parag Singla and Dr. Vincent Ng for collaborating with me on various projects, through which I gained a lot of insight into several different research areas. Needless to say, my family has supported me enormously in completing this dissertation. It is hard to thank Krithika enough for the love, patience and faith that she has shown in me during the numerous ups and downs associated with Ph.D. life. Were it not for her support, I