Results 1–10 of 17
libDAI: A free/open source C++ library for discrete approximate inference methods
, 2008
Abstract

Cited by 78 (1 self)
This paper describes the software package libDAI, a free & open source C++ library that provides implementations of various exact and approximate inference methods for graphical models with discrete-valued variables. libDAI supports directed graphical models (Bayesian networks) as well as undirected ones (Markov random fields and factor graphs). It offers various approximations of the partition sum, marginal probability distributions and maximum probability states. Parameter learning is also supported. A feature comparison with other open source software packages for approximate inference is given. libDAI is licensed under the GPL v2+ license and is available at
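The abstract names the three quantities libDAI estimates: the partition sum, single-variable marginals, and maximum-probability (MAP) states. As a minimal Python sketch of what those quantities are (libDAI itself is C++; the toy model and names below are illustrative and not libDAI's API), they can be defined by brute-force enumeration on a small factor graph:

```python
import itertools

# Toy factor graph over three binary variables; each factor is a table
# indexed by the joint state of the variables it touches.
factors = [
    ((0, 1), {(0, 0): 1.2, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 2.0}),
    ((1, 2), {(0, 0): 0.7, (0, 1): 1.5, (1, 0): 1.5, (1, 1): 0.3}),
    ((0,),   {(0,): 1.0, (1,): 0.8}),
]
n_vars = 3

def joint_weight(state):
    """Unnormalized probability: the product of all factor entries."""
    w = 1.0
    for vars_, table in factors:
        w *= table[tuple(state[v] for v in vars_)]
    return w

states = list(itertools.product((0, 1), repeat=n_vars))

# Partition sum Z, single-variable marginals, and the MAP state --
# the quantities that approximate inference methods estimate.
Z = sum(joint_weight(s) for s in states)
marginals = [
    [sum(joint_weight(s) for s in states if s[v] == x) / Z for x in (0, 1)]
    for v in range(n_vars)
]
map_state = max(states, key=joint_weight)
```

Enumeration costs 2^n states and is only feasible for tiny models, which is why the approximate methods the library implements are needed in practice.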
Loop corrected belief propagation
 In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS 2007)
, 2007
Abstract

Cited by 17 (5 self)
We propose a method for improving Belief Propagation (BP) that takes into account the influence of loops in the graphical model. The method is a variation on and generalization of the method recently introduced by Montanari and Rizzo [1]. It consists of two steps: (i) standard BP is used to calculate cavity distributions for each variable (i.e. probability distributions on the Markov blanket of a variable for a modified graphical model, in which the factors involving that variable have been removed); (ii) all cavity distributions are combined by a message-passing algorithm to obtain consistent single node marginals. The method is exact if the graphical model contains a single loop. The complexity of the method is exponential in the size of the Markov blankets. The results are very accurate in general: the error is often several orders of magnitude smaller than that of standard BP, as illustrated by numerical experiments.
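Step (i) rests on the cavity distribution: remove every factor that involves variable i, then marginalize the remaining model onto i's Markov blanket. A minimal Python sketch of that definition (toy loopy model, names illustrative; exact enumeration stands in for the BP approximation the paper actually uses):

```python
import itertools

# Toy factor graph with a loop over three binary variables.
factors = [
    ((0, 1), {(0, 0): 1.2, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 2.0}),
    ((1, 2), {(0, 0): 0.7, (0, 1): 1.5, (1, 0): 1.5, (1, 1): 0.3}),
    ((2, 0), {(0, 0): 1.1, (0, 1): 0.6, (1, 0): 0.6, (1, 1): 1.4}),
]
n_vars = 3

def cavity_distribution(i):
    """Distribution on i's Markov blanket after removing all factors touching i."""
    kept = [f for f in factors if i not in f[0]]
    blanket = sorted({v for vars_, _ in factors if i in vars_ for v in vars_} - {i})
    dist = {}
    for state in itertools.product((0, 1), repeat=n_vars):
        if state[i] != 0:
            continue  # no kept factor touches i, so fixing its value changes nothing
        w = 1.0
        for vars_, table in kept:
            w *= table[tuple(state[v] for v in vars_)]
        key = tuple(state[v] for v in blanket)
        dist[key] = dist.get(key, 0.0) + w
    Z = sum(dist.values())
    return {k: v / Z for k, v in dist.items()}
```

The exponential dependence on Markov-blanket size mentioned in the abstract is visible here: the cavity distribution is a table over all joint states of the blanket.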
Loop corrections for approximate inference on factor graphs
 Journal of Machine Learning Research
Abstract

Cited by 12 (3 self)
We propose a method to improve approximate inference methods by correcting for the influence of loops in the graphical model. The method is a generalization and alternative implementation of a recent idea from Montanari and Rizzo (2005). It is applicable to arbitrary factor graphs, provided that the size of the Markov blankets is not too large. It consists of two steps: (i) an approximate inference method, for example, belief propagation, is used to approximate cavity distributions for each variable (i.e., probability distributions on the Markov blanket of a variable for a modified graphical model in which the factors involving that variable have been removed); (ii) all cavity distributions are improved by a message-passing algorithm that cancels out approximation errors by imposing certain consistency constraints. This loop correction (LC) method usually gives significantly better results than the original, uncorrected, approximate inference algorithm that is used to estimate the effect of loops. Indeed, we often observe that the loop-corrected error is approximately the square of the error of the uncorrected approximate inference method. In this article, we compare different variants of the loop correction method with other approximate inference methods on a variety of graphical models, including “real world” networks, and conclude that the LC method generally obtains the most accurate results.
Choosing a variable to clamp: Approximate inference using conditioned belief propagation
 In Proceedings of AISTATS
, 2009
Abstract

Cited by 7 (1 self)
In this paper we propose an algorithm for approximate inference on graphical models based on belief propagation (BP). Our algorithm is an approximate version of Cutset Conditioning, in which a subset of variables is instantiated to make the rest of the graph singly connected. We relax the constraint of single-connectedness, and select variables one at a time for conditioning, running belief propagation after each selection. We consider the problem of determining the best variable to clamp at each level of recursion, and propose a fast heuristic which applies back-propagation to the BP updates. We demonstrate that the heuristic performs better than selecting variables at random, and give experimental results which show that it performs competitively with existing approximate inference algorithms.
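The conditioning step recombines clamped runs as P(x_j) = Σ_c P(X_k = c) · P(x_j | X_k = c), with each conditional estimated on the model in which X_k is fixed to c. A minimal Python sketch of that recombination (toy loopy model, names illustrative; exact enumeration stands in for the BP run on each clamped graph):

```python
import itertools

# Toy loopy factor graph over three binary variables.
factors = [
    ((0, 1), {(0, 0): 1.2, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 2.0}),
    ((1, 2), {(0, 0): 0.7, (0, 1): 1.5, (1, 0): 1.5, (1, 1): 0.3}),
    ((2, 0), {(0, 0): 1.1, (0, 1): 0.6, (1, 0): 0.6, (1, 1): 1.4}),
]
n_vars = 3

def weight(state):
    """Unnormalized probability: the product of all factor entries."""
    w = 1.0
    for vars_, table in factors:
        w *= table[tuple(state[v] for v in vars_)]
    return w

def marginal_by_conditioning(j, k):
    """P(x_j) = sum_c P(X_k = c) P(x_j | X_k = c), one clamped model per c.

    Working with unnormalized masses, the mixture weights P(X_k = c)
    are implicit in the final normalization.
    """
    totals = [0.0, 0.0]
    for c in (0, 1):
        clamped = [s for s in itertools.product((0, 1), repeat=n_vars) if s[k] == c]
        for x in (0, 1):
            totals[x] += sum(weight(s) for s in clamped if s[j] == x)
    Z = sum(totals)
    return [t / Z for t in totals]
```

With exact conditional inference this recovers the exact marginal; the algorithm's gain comes from clamping simplifying the graph enough that BP's remaining approximation error shrinks.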
Inference in the Promedas Medical Expert System
Abstract

Cited by 7 (1 self)
In the current paper, the Promedas model for internal medicine, developed by our team, is introduced. The model is based on up-to-date medical knowledge and consists of approximately 2000 diagnoses, 1000 findings and 8600 connections between diagnoses and findings, covering a large part of internal medicine. We show that Belief Propagation (BP) can be successfully applied as an approximate inference algorithm in the Promedas network. In some cases, however, we find errors that are too large for this application. We apply a recently developed method that improves the BP results by means of a loop expansion scheme. This method, termed Loop Corrected (LC) BP, is able to improve the marginal probabilities significantly, leaving a remaining error which is acceptable for the purpose of medical diagnosis.
Model Reductions for Inference: Generality of Pairwise, Binary, and Planar Factor Graphs
, 2013
Abstract

Cited by 5 (0 self)
We offer a solution to the problem of efficiently translating algorithms between different types of discrete statistical model. We investigate the expressive power of three classes of model—those with binary variables, with pairwise factors, and with planar topology—as well as their four intersections. We formalize a notion of “simple reduction” for the problem of inferring marginal probabilities and consider whether it is possible to “simply reduce” marginal inference from general discrete factor graphs to factor graphs in each of these seven subclasses. We characterize the reducibility of each class, showing in particular that the class of binary pairwise factor graphs is able to simply reduce only positive models. We also exhibit a continuous “spectral reduction” based on polynomial interpolation, which overcomes this limitation. Experiments assess the performance of standard approximate inference algorithms on the outputs of our reductions.
STATS 375 Project Proposal
, 2011
Abstract
Belief propagation is a message-passing algorithm for performing inference in graphical models that has been implemented in a variety of domains, including artificial intelligence, information theory and statistical physics. BP provides exact results only on a relatively restrictive subclass of problems: when the graphical model is a tree (or, more generally, when each connected component of the graph is a tree). In all
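The exactness claim for trees can be checked directly on the smallest interesting case. A minimal Python sketch (toy pairwise table and names are illustrative) comparing the sum-product belief at the middle of a three-variable chain x0 – x1 – x2 against brute-force enumeration:

```python
import itertools

# One pairwise factor table, used on both edges of the chain x0 - x1 - x2.
pair = {(0, 0): 1.2, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 2.0}

def brute_marginal(v):
    """Exact marginal of variable v by enumerating all joint states."""
    tot = [0.0, 0.0]
    for s in itertools.product((0, 1), repeat=3):
        tot[s[v]] += pair[(s[0], s[1])] * pair[(s[1], s[2])]
    Z = sum(tot)
    return [t / Z for t in tot]

def bp_marginal_middle():
    """Sum-product belief at x1: the product of the messages from both leaves."""
    msg_left = [sum(pair[(x0, x1)] for x0 in (0, 1)) for x1 in (0, 1)]
    msg_right = [sum(pair[(x1, x2)] for x2 in (0, 1)) for x1 in (0, 1)]
    belief = [msg_left[x] * msg_right[x] for x in (0, 1)]
    Z = sum(belief)
    return [b / Z for b in belief]
```

On a chain the distributive law makes the two computations identical term by term; adding an edge between x0 and x2 would create a loop and break this equality, which is the regime the loop-correction papers above address.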
Journal of Statistical Mechanics: Theory and Experiment (Letter)