CiteSeerX

Results 1 - 10 of 6,308

Loopy belief propagation for approximate inference: An empirical study

by Kevin P. Murphy, Yair Weiss, Michael I. Jordan - Proceedings of Uncertainty in AI, 1999
Cited by 676 (15 self)
"... Recently, researchers have demonstrated that "loopy belief propagation" - the use of Pearl's polytree algorithm in a Bayesian network with loops - can perform well in the context of error-correcting codes. The most dramatic instance of this is the near-Shannon-limit performance of "Turbo Codes" - codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme ..."
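The "loopy" idea in this entry, running sum-product message passing even though the graph has cycles, can be sketched on a toy model. The 3-node cycle, potentials, and iteration count below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Minimal sketch of loopy belief propagation (sum-product) on a 3-node
# cycle of binary variables with pairwise potentials. All numbers here
# are made up for illustration.
edges = [(0, 1), (1, 2), (2, 0)]
psi = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in edges}  # attractive couplings
phi = [np.array([2.0, 1.0]),                                  # node 0 biased to state 0
       np.array([1.0, 1.0]),
       np.array([1.0, 1.0])]

# messages m[(i, j)]: message from node i to node j, initialized uniform
m = {}
for i, j in edges:
    m[(i, j)] = np.ones(2) / 2
    m[(j, i)] = np.ones(2) / 2

def neighbors(i):
    return [j for a, b in edges for i2, j in ((a, b), (b, a)) if i2 == i]

for _ in range(50):  # iterate and hope for convergence; cycles void the guarantees
    new = {}
    for (i, j) in m:
        pot = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T  # pot[x_i, x_j]
        # product of unary and all incoming messages except the one from j
        h = phi[i].copy()
        for k in neighbors(i):
            if k != j:
                h = h * m[(k, i)]
        msg = pot.T @ h          # sum out x_i
        new[(i, j)] = msg / msg.sum()
    m = new

# approximate marginals (beliefs)
beliefs = []
for i in range(3):
    b = phi[i].copy()
    for k in neighbors(i):
        b = b * m[(k, i)]
    beliefs.append(b / b.sum())
```

With the attractive couplings assumed above, the bias at node 0 pulls every node's belief toward state 0, which is the qualitative behavior loopy BP is expected to recover.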

Approximate Inference

by Volkan Cevher, Ryan Guerra, Beth Bower, Terrance Savitsky, 2008
"... When we left off with the Joint Tree Algorithm and the Max-Sum Algorithm last class, we had crafted "messages" to traverse a tree-structured graphical model in order to calculate marginal and joint distributions. We are interested in finding p(z|x) when p(x) is given (Figure 1). For exact inference about p(z|x), the problem is often either impossible to solve or the required algorithm is intractable. The next few lectures will focus on deterministic approximations to a pdf and then we will move on to stochastic approximations. The general hierarchy of approximation techniques is given here ..."

Approximate Inference

by Max Welling (Gatsby)
"... Until now we have studied models that were all analytically tractable. It might have occurred to you that this basically implies using a Gaussian density for continuous variables, or using discrete random variables, usually distributed according to a multinomial density. The reason is that in order to calculate the E-step in the EM algorithm we need to integrate or sum over the hidden states. For some models summing over discrete states is computationally feasible; for other models the number of discrete hidden states is simply too large. In the case of continuous variables, integration over Gaussians is one of the few we know how to do analytically. We have also seen examples where we combined normal random variables and discrete ones (i.e. MoG, MoE, HMM). In addition, a Gaussian also allows a simple M-step, since we are really solving a weighted least squares problem. To go beyond these tractable models we would need alternative iterati ..."
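The point about summing over hidden states in the E-step and solving a weighted problem in the M-step can be illustrated with a minimal EM sketch for a two-component 1-D mixture of Gaussians. The data, initialization, and iteration count are assumptions for illustration, not taken from the notes:

```python
import numpy as np

# Illustrative sketch: EM for a two-component 1-D mixture of Gaussians.
# E-step sums over the discrete hidden component; M-step reduces to
# weighted averages (a weighted least squares flavor).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

pi = np.array([0.5, 0.5])    # mixing weights
mu = np.array([-1.0, 1.0])   # means (deliberately rough init)
var = np.array([1.0, 1.0])   # variances

for _ in range(20):
    # E-step: responsibility of each component for each point
    dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)

    # M-step: closed-form weighted updates
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```

Both steps are closed-form precisely because the component densities are Gaussian; replacing them with an intractable density breaks this, which is the motivation for the approximate methods the notes go on to cover.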

Structured learning with approximate inference

by Alex Kulesza, Fernando Pereira - Advances in Neural Information Processing Systems
Cited by 79 (2 self)
"... In many structured prediction problems, the highest-scoring labeling is hard to compute exactly, leading to the use of approximate inference methods. However, when inference is used in a learning algorithm, a good approximation of the score may not be sufficient. We show in particular that learning ..."

Approximate inference and protein folding

by Chen Yanover, Yair Weiss - Proceedings of NIPS 2002
Cited by 72 (9 self)
"... Side-chain prediction is an important subtask in the protein-folding problem. We show that finding a minimal energy side-chain configuration is equivalent to performing inference in an undirected graphical model. The graphical model is relatively sparse yet has many cycles. We used this equivalence to assess the performance of approximate inference algorithms in a real-world setting. Specifically we compared belief propagation (BP), generalized BP (GBP) and naive mean field (MF). In cases where exact inference was possible, max-product BP always found the global minimum of the energy ..."
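As a rough illustration of the naive mean field (MF) baseline compared in this paper, here is a sketch of coordinate-ascent mean-field updates for a small pairwise binary MRF. The couplings, fields, and update schedule are hypothetical, not the paper's protein model:

```python
import numpy as np

# Naive mean field for a binary pairwise MRF with x_i in {-1, +1} and
# energy E(x) = -sum_{(i,j)} J_ij x_i x_j - sum_i h_i x_i.
# Coordinate-ascent fixed-point update: m_i <- tanh(h_i + sum_j J_ij m_j).
J = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])   # symmetric couplings (a 3-node chain)
h = np.array([0.3, 0.0, -0.1])    # local fields
m = np.zeros(3)                   # mean-field magnetizations E[x_i]

for _ in range(100):              # sweep until the fixed point is reached
    for i in range(len(m)):
        m[i] = np.tanh(h[i] + J[i] @ m)
```

MF factorizes the joint over individual variables, which is exactly the independence assumption that BP and GBP relax; the paper's comparison asks how much that costs on a real graph.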

Approximate inference and constrained optimization

by Tom Heskes, Kees Albers - In 19th UAI, 2003
Cited by 62 (9 self)
"... Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms correspond to extrema of the Bethe and Kikuchi free energy (Yedidia et al., 2001). However, belief propagation does not always converge ..."
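For reference, the Bethe free energy mentioned here can be written, for a pairwise model with pairwise beliefs b_ij, singleton beliefs b_i, potentials ψ_ij and φ_i, and node degrees d_i (following the form in Yedidia et al., 2001), as:

```latex
F_{\mathrm{Bethe}} = U_{\mathrm{Bethe}} - H_{\mathrm{Bethe}}, \quad\text{where}
\begin{aligned}
U_{\mathrm{Bethe}} &= -\sum_{(i,j)} \sum_{x_i, x_j} b_{ij}(x_i, x_j) \ln \psi_{ij}(x_i, x_j)
                     - \sum_i \sum_{x_i} b_i(x_i) \ln \phi_i(x_i), \\
H_{\mathrm{Bethe}} &= -\sum_{(i,j)} \sum_{x_i, x_j} b_{ij}(x_i, x_j) \ln b_{ij}(x_i, x_j)
                     + \sum_i (d_i - 1) \sum_{x_i} b_i(x_i) \ln b_i(x_i).
\end{aligned}
```

BP fixed points are stationary points of this functional under the local consistency constraints, which is why non-convergence of BP motivates the constrained-optimization view taken in this paper.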

The DLR Hierarchy of Approximate Inference

by Michal Rosen-Zvi
"... We propose a hierarchy for approximate inference based on the Dobrushin, Lanford, Ruelle (DLR) equations. This hierarchy includes existing algorithms, such as belief propagation, and also motivates novel algorithms such as factorized neighbors (FN) algorithms and variants of mean field (MF) al ..."
Cited by 8 (1 self)

Algorithms for approximated inference ...

by Jose Carlos F. da Rocha, Cassio P. de Campos, Fabio G. Cozman
"... A credal network associates convex sets of probability distributions with graph-based models. Inference with credal networks aims at determining intervals on probability measures. Here we describe how a branch-and-bound based approach can be applied to accomplish approximated inference in polytree ..."

Approximate Inference in Probabilistic Models

by Manfred Opper, Ole Winther
"... Abstract. We present a framework for approximate inference in probabilistic data models which is based on free energies. The free energy is constructed from two approximating distributions which encode different aspects of the intractable model. Consistency between distributions is required on a ch ..."

Expectation consistent approximate inference

by Manfred Opper, Ole Winther - Journal of Machine Learning Research, 2005
"... We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability distri ..."
Cited by 33 (5 self)

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University