Results 1–10 of 721,826
Loopy Belief Propagation for Approximate Inference: An Empirical Study
In Proceedings of Uncertainty in AI, 1999
"... Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performa ..."
Cited by 680 (18 self)
limit performance of "Turbo Codes", codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate
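As a toy illustration of the loopy belief propagation scheme this abstract refers to (a sketch under simplifying assumptions, not code from the paper), parallel sum-product message passing on a small pairwise binary MRF with a cycle might look like:

```python
import numpy as np

def loopy_bp(n_vars, pots, iters=100):
    """Parallel sum-product loopy BP on a pairwise binary MRF.
    pots maps a directed edge (i, j) to a 2x2 table psi(x_i, x_j)."""
    msgs = {e: np.ones(2) for e in pots}  # message i -> j, a function of x_j
    nbrs = {v: [u for (u, w) in pots if w == v] for v in range(n_vars)}
    for _ in range(iters):
        new = {}
        for (i, j) in pots:
            # product of messages into i from all neighbors except j
            prod = np.ones(2)
            for k in nbrs[i]:
                if k != j:
                    prod *= msgs[(k, i)]
            m = pots[(i, j)].T @ prod  # sum out x_i
            new[(i, j)] = m / m.sum()  # normalize for numerical stability
        msgs = new
    beliefs = []
    for v in range(n_vars):
        b = np.ones(2)
        for k in nbrs[v]:
            b *= msgs[(k, v)]
        beliefs.append(b / b.sum())
    return beliefs

# Hypothetical 3-node cycle whose potentials favor neighbor agreement.
edges = [(0, 1), (1, 2), (2, 0)]
pots = {}
for (i, j) in edges:
    psi = np.array([[2.0, 1.0], [1.0, 2.0]])
    pots[(i, j)] = psi
    pots[(j, i)] = psi.T
beliefs = loopy_bp(3, pots)
```

On a graph with loops these updates are not guaranteed to converge or to be exact, which is precisely the question this paper studies empirically.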
Approximate Inference
2008
"... When we left off with the Joint Tree Algorithm and the Max-Sum Algorithm last class, we had crafted “messages” to traverse a tree-structured graphical model in order to calculate marginal and joint distributions. We are interested in finding p(z|x) when p(x) is given (Figure 1). ..."
inference about p(z|x), this problem is often either impossible to solve or the required algorithm is intractable. The next few lectures will focus on deterministic approximations to a pdf and then we will move on to stochastic approximations. The general hierarchy of approximation techniques is given here
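The tree-structured message passing this lecture snippet describes can be sketched on a simple chain, where forward and backward sum-product messages yield exact marginals (an illustrative example with made-up potentials, not the lecture's own code):

```python
import numpy as np

# Hypothetical 4-node binary chain z0 - z1 - z2 - z3 with unary potentials
# phi[t](z_t) and pairwise potentials psi[t](z_t, z_{t+1}).
np.random.seed(0)
n = 4
phi = [np.random.rand(2) + 0.1 for _ in range(n)]
psi = [np.random.rand(2, 2) + 0.1 for _ in range(n - 1)]

# forward messages a[t]: everything summed out to the left of z_t
a = [np.ones(2) for _ in range(n)]
for t in range(1, n):
    a[t] = psi[t - 1].T @ (phi[t - 1] * a[t - 1])

# backward messages b[t]: everything summed out to the right of z_t
b = [np.ones(2) for _ in range(n)]
for t in range(n - 2, -1, -1):
    b[t] = psi[t] @ (phi[t + 1] * b[t + 1])

# marginal at each node is the normalized product of local potential
# and the two incoming messages
marg = [phi[t] * a[t] * b[t] for t in range(n)]
marg = [m / m.sum() for m in marg]
```

On trees this is exact in two sweeps; the intractability the snippet mentions arises when the graph has loops or the state space is too large to sum over.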
Approximate Inference
"... Introduction Until now we have studied models that were all analytically tractable. It might have occurred to you that this basically implies using a Gaussian density for continuous variables, or using discrete random variables, usually distributed according to a multinomial density. The reason is ..."
Introduction Until now we have studied models that were all analytically tractable. It might have occurred to you that this basically implies using a Gaussian density for continuous variables, or using discrete random variables, usually distributed according to a multinomial density. The reason is that in order to calculate the E-step in the EM algorithm we need to integrate or sum over the hidden states. For some models summing over discrete states is computationally feasible. For other models the number of discrete hidden states is simply too large. In the case of continuous variables, integration over Gaussians is one of the few that we know how to do analytically. We have also seen examples where we combined normal random variables and discrete ones (e.g. MoG, MoE, HMM). In addition a Gaussian also allows a simple M-step, since we are really solving a weighted least squares problem. To go beyond these tractable models we would need alternative iterati
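The tractable discrete sum over hidden states mentioned above is exactly what the E-step of a mixture of Gaussians (MoG) computes: with K components, the sum has only K terms per data point. A minimal sketch with invented parameters (illustrative, not from the lecture notes):

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # density of N(mu, var) evaluated elementwise on x
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def e_step(x, pis, mus, vars_):
    """Responsibilities r[n, k] = p(z_n = k | x_n): normalize
    pi_k * N(x_n | mu_k, var_k) over the K components."""
    lik = np.stack(
        [p * gaussian_pdf(x, m, v) for p, m, v in zip(pis, mus, vars_)],
        axis=1,
    )
    return lik / lik.sum(axis=1, keepdims=True)

# toy 1-D data near two well-separated component means
x = np.array([-2.0, -1.9, 2.1, 2.0])
r = e_step(x, pis=[0.5, 0.5], mus=[-2.0, 2.0], vars_=[1.0, 1.0])
```

When the hidden state is continuous and non-Gaussian, or the number of discrete configurations is exponential, this normalizing sum is what becomes intractable.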
Structured learning with approximate inference
 Advances in Neural Information Processing Systems
"... In many structured prediction problems, the highest-scoring labeling is hard to compute exactly, leading to the use of approximate inference methods. However, when inference is used in a learning algorithm, a good approximation of the score may not be sufficient. We show in particular that learning ..."
Cited by 81 (2 self)
Approximate inference and protein folding
In Proceedings of NIPS, 2002
"... Side-chain prediction is an important subtask in the protein-folding problem. We show that finding a minimal energy side-chain configuration is equivalent to performing inference in an undirected graphical model. The graphical model is relatively sparse yet has many cycles. We used this equivalence ..."
Cited by 73 (8 self)
this equivalence to assess the performance of approximate inference algorithms in a real-world setting. Specifically we compared belief propagation (BP), generalized BP (GBP) and naive mean field (MF). In cases where exact inference was possible, max-product BP always found the global minimum of the energy
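The naive mean field (MF) baseline compared in this paper has a simple coordinate-ascent form; a toy sketch on a small binary MRF (an illustrative assumption-laden example, not the authors' implementation or their protein model):

```python
import numpy as np

def mean_field(n_vars, log_psi, nbrs, iters=50):
    """Naive mean field for a pairwise binary MRF: each factor q_i is set
    proportional to exp(E_q[log psi] over its neighbors' current factors).
    log_psi[(i, j)] is the 2x2 log-potential table over (x_i, x_j)."""
    q = [np.full(2, 0.5) for _ in range(n_vars)]
    for _ in range(iters):
        for i in range(n_vars):
            s = np.zeros(2)
            for j in nbrs[i]:
                s += log_psi[(i, j)] @ q[j]  # expectation under q_j
            q[i] = np.exp(s - s.max())       # softmax, shifted for stability
            q[i] /= q[i].sum()
    return q

# toy agreement-favoring 3-cycle
edges = [(0, 1), (1, 2), (2, 0)]
log_psi = {}
nbrs = {v: [] for v in range(3)}
for (i, j) in edges:
    t = np.log(np.array([[2.0, 1.0], [1.0, 2.0]]))
    log_psi[(i, j)] = t
    log_psi[(j, i)] = t.T
    nbrs[i].append(j)
    nbrs[j].append(i)
q = mean_field(3, log_psi, nbrs)
```

MF forces a fully factorized approximation, which is why it tends to underperform BP and GBP on sparse-but-loopy graphs like the one the abstract describes.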
Approximate inference and constrained optimization
In 19th UAI, 2003
"... Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms correspond to extrema of the Bethe and Kikuchi free energy (Yedidia et al., 2001). However, belief propagation does not always con ..."
Cited by 62 (8 self)
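The Bethe free energy whose extrema the abstract refers to has a standard factor-graph form (following Yedidia et al., 2001; notation here is one common convention, not necessarily this paper's):

```latex
F_{\text{Bethe}}(b)
  = \sum_a \sum_{x_a} b_a(x_a)\,\ln\frac{b_a(x_a)}{f_a(x_a)}
  \;-\; \sum_i (d_i - 1) \sum_{x_i} b_i(x_i)\,\ln b_i(x_i),
```

where $b_a$ are factor beliefs, $b_i$ are variable beliefs, $f_a$ are the factors, and $d_i$ is the number of factors containing variable $i$. Fixed points of loopy BP are stationary points of $F_{\text{Bethe}}$ subject to normalization and marginalization constraints, which is what motivates the constrained-optimization view in this paper.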
The DLR Hierarchy of Approximate Inference
"... We propose a hierarchy for approximate inference based on the Dobrushin, Lanford, Ruelle (DLR) equations. This hierarchy includes existing algorithms, such as belief propagation, and also motivates novel algorithms such as factorized neighbors (FN) algorithms and variants of mean field (MF) al ..."
Cited by 8 (1 self)
Approximate inference in probabilistic models
"... Abstract. We present a framework for approximate inference in probabilistic data models which is based on free energies. The free energy is constructed from two approximating distributions which encode different aspects of the intractable model. Consistency between distributions is required on a cho ..."
Approximate inference for the loss-calibrated Bayesian
"... We consider the problem of approximate inference in the context of Bayesian decision theory. Traditional approaches focus on approximating general properties of the posterior, ignoring the decision task – and associated losses – for which the posterior could be used. We argue that this can be subopt ..."
Cited by 3 (0 self)
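The loss-sensitivity this abstract argues for shows up already in a toy decision problem: with the posterior held fixed, changing the loss changes the Bayes-optimal action, so an approximation tuned only to the posterior can still induce the wrong decision (an illustrative sketch with invented numbers, not from the paper):

```python
import numpy as np

def bayes_action(posterior, loss):
    # posterior[z] = p(z | x); loss[z, a] is the cost of action a in state z.
    # The Bayes-optimal action minimizes posterior expected loss.
    return int(np.argmin(posterior @ loss))

posterior = np.array([0.85, 0.15])       # p(healthy), p(diseased)
loss_01 = np.array([[0, 1], [1, 0]])     # symmetric 0/1 loss
loss_asym = np.array([[0, 1], [10, 0]])  # a missed disease costs 10x

a1 = bayes_action(posterior, loss_01)    # expected losses [0.15, 0.85] -> 0
a2 = bayes_action(posterior, loss_asym)  # expected losses [1.50, 0.85] -> 1
```

Under the symmetric loss the optimal action is "do nothing"; under the asymmetric loss it flips to "treat", even though the posterior never changed.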
Approximate Inference for Medical Diagnosis
1999
"... Computer-based diagnostic decision support systems (DSS) will play an increasingly important role in health care. Due to the inherent probabilistic nature of medical diagnosis, a DSS should preferably be based on a probabilistic model. In particular Bayesian networks provide a powerful and conceptua ..."
Cited by 5 (0 self)
and conceptually transparent formalism for probabilistic modeling. A drawback is that Bayesian networks become intractable for exact computation if a large medical domain would be modeled in detail. This has obstructed the development of a useful system for internal medicine. Advances in approximation techniques