Results 1 - 10 of 650,663
Loopy belief propagation for approximate inference: An empirical study
In: Proceedings of Uncertainty in AI, 1999
Cited by 674 (15 self)
"... Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo Codes", codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme ..."
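The abstract above describes loopy belief propagation: running Pearl's sum-product message passing unchanged on a graph that contains cycles. A minimal sketch of the idea, using an invented three-variable binary cycle (the potentials, variable names, and iteration count here are illustrative assumptions, not taken from the paper):

```python
# Loopy sum-product belief propagation on a hypothetical 3-variable
# binary cycle 0-1-2-0. Potentials are made up for illustration.

# Pairwise potentials psi[(i, j)][xi][xj]; attractive, symmetric.
psi = {
    (0, 1): [[1.0, 0.5], [0.5, 1.0]],
    (1, 2): [[1.0, 0.5], [0.5, 1.0]],
    (2, 0): [[1.0, 0.5], [0.5, 1.0]],
}
neighbors = {0: [1, 2], 1: [0, 2], 2: [1, 0]}

def pot(i, j, xi, xj):
    """Look up a pairwise potential regardless of edge orientation."""
    if (i, j) in psi:
        return psi[(i, j)][xi][xj]
    return psi[(j, i)][xj][xi]

# Messages m[(i, j)] from variable i to variable j, initialized uniform.
m = {(i, j): [1.0, 1.0] for i in neighbors for j in neighbors[i]}

for _ in range(50):  # iterate and hope for convergence (not guaranteed on loops)
    new = {}
    for (i, j) in m:
        msg = []
        for xj in (0, 1):
            total = 0.0
            for xi in (0, 1):
                # Sum-product update: potential times all incoming
                # messages to i except the one coming back from j.
                prod = pot(i, j, xi, xj)
                for k in neighbors[i]:
                    if k != j:
                        prod *= m[(k, i)][xi]
                total += prod
            msg.append(total)
        z = sum(msg)
        new[(i, j)] = [v / z for v in msg]
    m = new

def belief(i):
    """Approximate marginal of variable i: product of incoming messages."""
    b = [1.0, 1.0]
    for k in neighbors[i]:
        b = [b[x] * m[(k, i)][x] for x in (0, 1)]
    z = sum(b)
    return [v / z for v in b]

print([belief(i) for i in range(3)])  # uniform marginals, by symmetry
```

With these symmetric potentials the fixed point gives uniform marginals (here exact, by symmetry); on general loopy graphs the updates may oscillate or converge to biased beliefs, which is precisely the behavior the empirical study examines.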
Approximate Inference
2008
"... When we left off with the Joint Tree Algorithm and the Max-Sum Algorithm last class, we had crafted "messages" to traverse a tree-structured graphical model in order to calculate marginal and joint distributions. We are interested in finding p(z|x) when p(x) is given (Figure 1) ... inference about p(z|x), this problem is often either impossible to solve or the required algorithm is intractable. The next few lectures will focus on deterministic approximations to a pdf and then we will move on to stochastic approximations. The general hierarchy of approximation techniques is given here ..."
Approximate Inference
"... Introduction: Until now we have studied models that were all analytically tractable. It might have occurred to you that this basically implies using a Gaussian density for continuous variables, or using discrete random variables, usually distributed according to a multinomial density. The reason is that in order to calculate the E-step in the EM algorithm we need to integrate or sum over the hidden states. For some models summing over discrete states is computationally feasible. For other models the number of discrete hidden states is simply too large. In the case of continuous variables, integration over Gaussians is one of the few that we know how to do analytically. We have also seen examples where we combined normal random variables and discrete ones (i.e. MoG, MoE, HMM). In addition, a Gaussian also allows a simple M-step, since we are really solving a weighted least squares problem. To go beyond these tractable models we would need alternative iterative ..."
Structured learning with approximate inference
In: Advances in Neural Information Processing Systems
Cited by 79 (2 self)
"... In many structured prediction problems, the highest-scoring labeling is hard to compute exactly, leading to the use of approximate inference methods. However, when inference is used in a learning algorithm, a good approximation of the score may not be sufficient. We show in particular that learning ..."
Approximate inference and protein folding
In: Proceedings of NIPS, 2002
Cited by 72 (9 self)
"... Side-chain prediction is an important subtask in the protein-folding problem. We show that finding a minimal energy side-chain configuration is equivalent to performing inference in an undirected graphical model. The graphical model is relatively sparse yet has many cycles. We used this equivalence to assess the performance of approximate inference algorithms in a real-world setting. Specifically we compared belief propagation (BP), generalized BP (GBP) and naive mean field (MF). In cases where exact inference was possible, max-product BP always found the global minimum of the energy ..."
Approximate inference and constrained optimization
In: 19th UAI, 2003
Cited by 62 (9 self)
"... Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms correspond to extrema of the Bethe and Kikuchi free energy (Yedidia et al., 2001). However, belief propagation does not always converge ..."
The DLR Hierarchy of Approximate Inference
Cited by 8 (1 self)
"... We propose a hierarchy for approximate inference based on the Dobrushin, Lanford, Ruelle (DLR) equations. This hierarchy includes existing algorithms, such as belief propagation, and also motivates novel algorithms such as factorized neighbors (FN) algorithms and variants of mean field (MF) algorithms ..."
Approximate inference in probabilistic models
"... We present a framework for approximate inference in probabilistic data models which is based on free energies. The free energy is constructed from two approximating distributions which encode different aspects of the intractable model. Consistency between distributions is required on a chosen set ..."
Algorithms for approximated inference ...
"... A credal network associates convex sets of probability distributions with graph-based models. Inference with credal networks aims at determining intervals on probability measures. Here we describe how a branch-and-bound based approach can be applied to accomplish approximated inference in polytree ..."
Expectation consistent approximate inference
In: Journal of Machine Learning Research, 2005
Cited by 33 (5 self)
"... We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability distributions ..."