Results 1 - 8 of 8
Pushing the power of stochastic greedy ordering schemes for inference in graphical models
In AAAI, 2011
Abstract

Cited by 15 (10 self)
We study iterative randomized greedy algorithms for generating (elimination) orderings with small induced width and state space size, two parameters known to bound the complexity of inference in graphical models. We propose and implement the Iterative Greedy Variable Ordering (IGVO) algorithm, a new variant within this algorithm class. An empirical evaluation using different ranking functions and conditions of randomness demonstrates that IGVO finds significantly better orderings than standard greedy ordering implementations when evaluated within an anytime framework. Additional order-of-magnitude improvements are demonstrated on a multicore system, thus further expanding the set of solvable graphical models. The experiments also confirm the superiority of the MinFill heuristic within the iterative scheme.
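The iterative randomized greedy idea this abstract describes can be sketched in a few lines: run a greedy min-fill elimination pass with random tie-breaking, record the induced width, and keep the best ordering found over many restarts. The sketch below is a minimal illustration under that reading, not the authors' IGVO implementation; the graph representation and function names are our own.

```python
import random

def min_fill_ordering(adj, rng):
    """One randomized greedy min-fill pass.

    adj: {var: set of neighbour vars}; a private copy is mutated.
    Returns (elimination ordering, induced width of that ordering)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    order, width = [], 0
    while adj:
        # Fill count: edges that eliminating v would add between its neighbours.
        def fill(v):
            ns = list(adj[v])
            return sum(1 for i, a in enumerate(ns)
                       for b in ns[i + 1:] if b not in adj[a])
        fills = {v: fill(v) for v in adj}
        best = min(fills.values())
        # Random tie-breaking among equally good candidates.
        v = rng.choice([u for u, f in fills.items() if f == best])
        width = max(width, len(adj[v]))
        # Eliminate v: connect its neighbours into a clique, then remove it.
        ns = list(adj[v])
        for i, a in enumerate(ns):
            for b in ns[i + 1:]:
                adj[a].add(b)
                adj[b].add(a)
        for a in ns:
            adj[a].discard(v)
        del adj[v]
        order.append(v)
    return order, width

def iterative_greedy_ordering(adj, restarts=20, seed=0):
    """Keep the lowest-width ordering over many randomized restarts."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        order, w = min_fill_ordering(adj, rng)
        if best is None or w < best[1]:
            best = (order, w)
    return best
```

On a 4-cycle, for example, every min-fill ordering yields induced width 2; the restarts matter on larger, irregular graphs where tie-breaking changes the outcome.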
Exploiting Logical Structure in Lifted Probabilistic Inference
, 2010
Abstract

Cited by 8 (2 self)
Representations that combine first-order logic and probability have been the focus of much recent research. Lifted inference algorithms for them avoid grounding out the domain, bringing benefits analogous to those of resolution theorem proving in first-order logic. However, all lifted probabilistic inference algorithms to date treat potentials as black boxes, and do not take advantage of their logical structure. As a result, inference with them is needlessly inefficient compared to the logical case. We overcome this by proposing the first lifted probabilistic inference algorithm that exploits determinism and context-specific independence. In particular, we show that AND/OR search can be lifted by introducing POWER nodes in addition to the standard AND and OR nodes. Experimental tests show the benefits of our approach.
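The POWER-node idea lends itself to a one-line illustration: when n groundings of a branch are interchangeable, a lifted evaluator computes the shared branch value once and raises it to the n-th power, instead of multiplying n identical grounded branches. A toy sketch under that reading (the function names are ours, not from the paper):

```python
def grounded_and(value_fn, n):
    """Grounded evaluation: one branch evaluation per grounding."""
    result = 1.0
    for _ in range(n):
        result *= value_fn()
    return result

def power_node(value_fn, n):
    """Lifted evaluation: evaluate the shared branch once and raise it to
    the number of interchangeable groundings -- 1 branch evaluation
    instead of n."""
    return value_fn() ** n
```

Both return the same value; the lifted version replaces n multiplications of an identical subtree with a single exponentiation.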
MCMC estimation of conditional probabilities in probabilistic programming languages
 In Symbolic and Quantitative Approaches
, 2013
Abstract

Cited by 2 (0 self)
Probabilistic logic programming languages are powerful formalisms that can model complex problems where it is necessary to represent both structure and uncertainty. Using exact inference methods to compute conditional probabilities in these languages is often intractable, so approximate inference techniques are necessary. This paper proposes a Markov Chain Monte Carlo algorithm for estimating conditional probabilities based on sampling from an AND/OR tree for ProbLog, a general-purpose probabilistic logic programming language. We propose a parameterizable proposal distribution that generates the next sample in the Markov chain by probabilistically traversing the AND/OR tree from its root, which holds the evidence, to the leaves. An empirical evaluation on several different applications illustrates the advantages of our algorithm.
On Combining Graph-based variance reduction schemes
 in: 13th International Conference on Artificial Intelligence and Statistics
, 2010
Abstract

Cited by 1 (1 self)
In this paper, we consider two variance reduction schemes that exploit the structure of the primal graph of the graphical model: Rao-Blackwellised w-cutset sampling and AND/OR sampling. We show that the two schemes are orthogonal and can be combined to further reduce the variance. Our combination yields a new family of estimators which trade time and space with variance. We demonstrate experimentally that the new estimators are superior, often yielding an order of magnitude improvement over previous schemes on several benchmarks.
Distributional importance sampling for approximate weighted model counting
 Workshop on Counting Problems in CSP and
, 2008
Abstract

Cited by 1 (0 self)
We present a sampling method to approximate the weighted model count of Boolean satisfiability problems. Our method is based on distributional importance sampling, where a subset of the variables is randomly set according to a backtrack-free distribution, and the remaining subformula is counted exactly. By using distributional samples (also known as Rao-Blackwellised samples), we can improve the accuracy of the approximation by reducing the variance of the samples. As well, distributional sampling allows us to exploit the power of dynamic component analysis performed by state-of-the-art exact counters. We discuss several techniques for providing a measure of confidence in the resulting estimates, including an analysis based on the Central Limit Theorem. Experiments on unweighted and weighted benchmarks demonstrate the promising performance of this approach.
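The distributional (Rao-Blackwellised) sampling scheme can be illustrated at toy scale: sample a truth assignment to a chosen subset of variables, count the satisfying extensions of the residual formula exactly, and weight by the inverse proposal probability. The sketch below uses a uniform proposal and brute-force residual counting for clarity; the paper's method uses a backtrack-free distribution and a state-of-the-art exact counter, and all names here are ours.

```python
import itertools
import random

def count_models(clauses, n_vars, fixed):
    """Exact count of satisfying extensions of a partial assignment.

    clauses: list of clauses, each a list of DIMACS-style literals
    (positive k means variable k-1 is True).  fixed: {var_index: bool}."""
    free = [v for v in range(n_vars) if v not in fixed]
    total = 0
    for bits in itertools.product([False, True], repeat=len(free)):
        assign = {**fixed, **dict(zip(free, bits))}
        if all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            total += 1
    return total

def rb_estimate(clauses, n_vars, sampled_vars, n_samples, seed=0):
    """Rao-Blackwellised model-count estimate: set sampled_vars uniformly
    at random, count the residual formula exactly, weight by 2**k (the
    inverse of the uniform proposal probability), and average."""
    rng = random.Random(seed)
    k = len(sampled_vars)
    acc = 0.0
    for _ in range(n_samples):
        fixed = {v: rng.random() < 0.5 for v in sampled_vars}
        acc += count_models(clauses, n_vars, fixed) * (2 ** k)
    return acc / n_samples
```

With a uniform proposal the estimator is unbiased; counting the residual formula exactly, instead of sampling all variables, is what removes the variance contributed by the unsampled variables.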
Structured Message Passing
Abstract

Cited by 1 (0 self)
In this paper, we present structured message passing (SMP), a unifying framework for approximate inference algorithms that take advantage of structured representations such as algebraic decision diagrams and sparse hash tables. These representations can yield significant time and space savings over the conventional tabular representation when the message has several identical values (context-specific independence) or zeros (determinism) or both in its range. Therefore, in order to fully exploit the power of structured representations, we propose to artificially introduce context-specific independence and determinism in the messages. This yields a new class of powerful approximate inference algorithms which includes popular algorithms such as cluster-graph Belief propagation (BP), expectation propagation and particle BP as special cases. We show that our new algorithms introduce several interesting bias-variance tradeoffs. We evaluate these tradeoffs empirically and demonstrate that our new algorithms are more accurate and scalable than state-of-the-art techniques.
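The sparse-hash-table representation mentioned in this abstract is easy to picture: a message stores only its nonzero entries, so a product of two messages touches only assignments present in both, and determinism (zeros) prunes work automatically. A minimal sketch under that reading (scopes assumed identical; names are ours):

```python
def sparse_message_product(m1, m2):
    """Pointwise product of two messages over the same scope, each stored
    as a sparse hash table {assignment_tuple: value}.  Zeros are implicit,
    so only keys nonzero in BOTH messages are visited or stored."""
    return {k: v * m2[k] for k, v in m1.items() if k in m2}
```

A tabular representation would multiply every cell, including the zeros; here the cost is proportional to the smaller support.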
On Parameter Tying by Quantization
Abstract
The maximum likelihood estimator (MLE) is generally asymptotically consistent but is susceptible to overfitting. To combat this problem, regularization methods which reduce the variance at the cost of (slightly) increasing the bias are often employed in practice. In this paper, we present an alternative variance reduction (regularization) technique that quantizes the MLE estimates as a post-processing step, yielding a smoother model having several tied parameters. We provide and prove error bounds for our new technique and demonstrate experimentally that it often yields models having higher test-set log-likelihood than the ones learned using the MLE. We also propose a new importance sampling algorithm for fast approximate inference in models having several tied parameters. Our experiments show that our new inference algorithm is superior to existing approaches such as Gibbs sampling and MC-SAT on models having tied parameters, learned using our quantization-based approach.
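The quantization step this abstract describes can be sketched concretely: bucket the learned parameter values into a small number of bins and tie every parameter in a bin to the bin's mean, so many parameters share one value. This is a simplified uniform-binning sketch of the idea; the paper's actual quantization scheme may differ, and the names are ours.

```python
def quantize_parameters(params, num_bins):
    """Tie parameters by quantization: place each value in a uniform bin
    over [min, max] and replace it with the mean of its bin.

    params: list of floats (e.g. learned MLE estimates).
    Returns (tied_params, bin_index_per_param)."""
    lo, hi = min(params), max(params)
    span = (hi - lo) or 1.0          # avoid division by zero when all equal
    idx = [min(int((p - lo) / span * num_bins), num_bins - 1) for p in params]
    groups = {}
    for p, b in zip(params, idx):
        groups.setdefault(b, []).append(p)
    means = {b: sum(vs) / len(vs) for b, vs in groups.items()}
    return [means[b] for b in idx], idx
```

After tying, the model has at most `num_bins` distinct parameter values, which is also what the companion importance sampling algorithm exploits.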
Importance Sampling based Estimation over AND/OR Search Spaces for Graphical Models
, 2009
Abstract
The paper introduces a family of approximate schemes that extend the process of computing sample mean in importance sampling from the conventional OR space to the AND/OR search space for graphical models. All the sample means are defined on the same set of samples and trade time with variance. At one end is the AND/OR sample tree mean which has the same time complexity as the conventional OR sample tree mean but has lower variance. At the other end is the AND/OR sample graph mean which requires more time to compute but has the lowest variance. The paper provides theoretical analysis as well as empirical evaluation demonstrating that the AND/OR sample tree and graph means are far closer to the true mean than the OR sample tree mean.
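The time/variance trade-off this abstract describes shows up already in the simplest case of two independent subproblems: the conventional OR sample mean averages products of paired samples, while an AND/OR-style mean averages each component separately and then multiplies, implicitly averaging over every cross-combination of the same sample set. A toy sketch under that reading (names are ours; the paper's estimators operate over general AND/OR search spaces):

```python
def or_sample_mean(pairs):
    """Conventional OR-space mean: average the per-sample products."""
    return sum(a * b for a, b in pairs) / len(pairs)

def and_or_sample_mean(pairs):
    """AND/OR-style mean for two independent components: average each
    component over the same samples, then multiply -- effectively
    averaging over len(pairs)**2 combinations at no extra sampling cost."""
    n = len(pairs)
    return (sum(a for a, _ in pairs) / n) * (sum(b for _, b in pairs) / n)
```

Both estimators are unbiased for the product of the component means when the components are independent; the AND/OR version has lower variance because it reuses each sampled component value against every sample of the other component.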