Results 1 – 8 of 8
Image Parsing: Unifying Segmentation, Detection, and Recognition
, 2005
Abstract

Cited by 233 (22 self)
In this paper we present a Bayesian framework for parsing images into their constituent visual patterns. The parsing algorithm optimizes the posterior probability and outputs a scene representation in a "parsing graph", in a spirit similar to parsing sentences in speech and natural language. The algorithm constructs the parsing graph and reconfigures it dynamically using a set of reversible Markov chain jumps. This computational framework integrates two popular inference approaches – generative (top-down) methods and discriminative (bottom-up) methods. The former formulates the posterior probability in terms of generative models for images defined by likelihood functions and priors. The latter computes discriminative probabilities based on a sequence (cascade) of bottom-up tests/filters.
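The combination of top-down generative scoring with bottom-up discriminative proposals can be sketched with a toy Metropolis-Hastings chain (my illustration, not the paper's parser; the state here is a single label rather than a parsing graph, and all distributions are hypothetical):

```python
import random

# Data-driven MCMC in miniature: a generative posterior scored top-down,
# with proposals drawn from a bottom-up discriminative distribution.
posterior = [0.6, 0.3, 0.1]   # unnormalized generative scores p(x)
proposal = [0.2, 0.5, 0.3]    # bottom-up proposal q(x), independent of the current state

def mh_step(x, rng):
    xp = rng.choices(range(3), weights=proposal)[0]
    # The Metropolis-Hastings acceptance test makes the chain reversible,
    # so its stationary distribution is the posterior regardless of q.
    accept = min(1.0, (posterior[xp] * proposal[x]) / (posterior[x] * proposal[xp]))
    return xp if rng.random() < accept else x

rng = random.Random(0)
counts = [0, 0, 0]
x = 0
for _ in range(20000):
    x = mh_step(x, rng)
    counts[x] += 1
freqs = [c / 20000 for c in counts]
print(freqs)  # should be close to [0.6, 0.3, 0.1]
```

Because the acceptance ratio corrects for the proposal, the bottom-up distribution affects only how fast the chain mixes, not what it converges to.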
Piecewise Training for Structured Prediction
 MACHINE LEARNING
Abstract

Cited by 22 (1 self)
A drawback of structured prediction methods is that parameter estimation requires repeated inference, which is intractable for general structures. In this paper, we present an approximate training algorithm called piecewise training that divides the factors into tractable subgraphs, which we call pieces, that are trained independently. Piecewise training can be interpreted as approximating the exact likelihood using belief propagation, and different ways of making this interpretation yield different insights into the method. We also present an extension to piecewise training, called piecewise pseudolikelihood (PWPL), designed for when variables have large cardinality. On several real-world NLP data sets, piecewise training performs better than Besag's pseudolikelihood and sometimes comparably to exact maximum likelihood. In addition, PWPL performs similarly to piecewise training and better than standard pseudolikelihood, but is five to ten times more computationally efficient than batch maximum likelihood training.
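The contrast between the exact and piecewise objectives can be sketched on a toy three-variable chain with two "agreement" factors (my toy model, not the paper's CRFs): the exact log-likelihood needs the global partition function, while the piecewise objective normalizes each factor locally.

```python
import itertools
import math

# Toy chain MRF over three binary variables with two pairwise factors,
# each parameterized by a single "agreement" weight.
def factor(w, a, b):
    return math.exp(w * (a == b))

def exact_loglik(w1, w2, x):
    # Exact likelihood requires the global partition function Z,
    # a sum over all joint configurations.
    score = factor(w1, x[0], x[1]) * factor(w2, x[1], x[2])
    Z = sum(factor(w1, a, b) * factor(w2, b, c)
            for a, b, c in itertools.product([0, 1], repeat=3))
    return math.log(score / Z)

def piecewise_loglik(w1, w2, x):
    # Piecewise training: each factor is normalized locally over its own
    # variables only, so no global inference is needed.
    z1 = sum(factor(w1, a, b) for a, b in itertools.product([0, 1], repeat=2))
    z2 = sum(factor(w2, b, c) for b, c in itertools.product([0, 1], repeat=2))
    return (math.log(factor(w1, x[0], x[1]) / z1)
            + math.log(factor(w2, x[1], x[2]) / z2))

x = (1, 1, 0)
print(exact_loglik(1.0, 1.0, x))
print(piecewise_loglik(1.0, 1.0, x))
```

Since the product of local normalizers upper-bounds the true partition function, the piecewise objective lower-bounds the exact log-likelihood on this example.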
Approximate Mean Field for Dirichlet-Based Models
Abstract

Cited by 6 (0 self)
Variational inference is an important class of approximate inference techniques that has been applied to many graphical models, including topic models. We propose to improve the efficiency of mean field inference for Dirichlet-based models by introducing an approximate framework that converts weighted geometric means in the updates into weighted arithmetic means. This paper also discusses a close resemblance between our approach and other methods, such as the factorized neighbors algorithm and belief propagation. Empirically, we find that our approach is accurate and efficient compared to standard mean field.
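The core substitution can be checked numerically: for nonnegative values, the AM-GM inequality guarantees that the weighted arithmetic mean upper-bounds the weighted geometric mean (a generic illustration of the two quantities, not the paper's derivation):

```python
import math

def weighted_geometric_mean(xs, ws):
    # exp(sum_i w_i * log x_i): the form that appears in mean-field updates
    # for Dirichlet-based models (expectations of log-probabilities).
    return math.exp(sum(w * math.log(x) for x, w in zip(xs, ws)))

def weighted_arithmetic_mean(xs, ws):
    # sum_i w_i * x_i: cheaper, since it avoids a log/exp per term.
    return sum(w * x for x, w in zip(xs, ws))

xs = [0.2, 0.5, 0.3]
ws = [0.3, 0.4, 0.3]   # weights sum to 1
gm = weighted_geometric_mean(xs, ws)
am = weighted_arithmetic_mean(xs, ws)
print(gm, am)   # by AM-GM, gm <= am
```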
1 BELIEF PROPAGATION, MEAN-FIELD, AND BETHE APPROXIMATIONS
Abstract
This chapter describes methods for estimating the marginals and maximum a posteriori (MAP) estimates of probability distributions defined over graphs by approximate methods including Mean Field Theory (MFT), variational methods, and belief propagation. These methods typically formulate this problem in terms of minimizing a free energy function of pseudomarginals. They differ by the design of the free energy and the choice of algorithm to minimize it. These algorithms can often be interpreted in terms of message passing. In many cases, the free energy has a dual formulation and the algorithms are defined over the dual variables (e.g., the messages in belief propagation). The quality of performance depends on the types of free energies used – specifically how well they approximate the log partition function of the probability distribution – and whether there are suitable algorithms for finding their minima. We start in section (II) by introducing two types of Markov Field models that are often used in computer vision. We proceed to define MFT/variational methods in section (III), whose free energies are lower bounds of the log partition function, and describe how inference can be done by expectation-maximization, steepest descent, or discrete iterative algorithms. The following section (IV) describes message passing algorithms, such as belief propagation and its generalizations, which can be related to free energy functions (and dual variables). Finally in section (V) we describe how these methods relate to Markov Chain Monte Carlo.
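Message passing in its simplest setting: sum-product belief propagation on a three-node binary chain, where BP is exact and its marginals coincide with brute-force enumeration (a generic sketch in my own notation, not the chapter's code; the potentials are made-up numbers):

```python
import itertools

# Sum-product belief propagation on a 3-node binary chain x0 - x1 - x2.
# On a tree, BP is exact, so its marginals must match brute-force enumeration.
unary = [[1.0, 2.0], [1.0, 1.0], [3.0, 1.0]]   # psi_i(x_i)
pair = [[2.0, 1.0], [1.0, 2.0]]                # psi(x_i, x_{i+1}), favors agreement

def bp_marginal(i):
    # Forward messages m_{k -> k+1}, absorbed left to right.
    fwd = [1.0, 1.0]
    for k in range(i):
        fwd = [sum(fwd[a] * unary[k][a] * pair[a][b] for a in range(2))
               for b in range(2)]
    # Backward messages m_{k -> k-1}, absorbed right to left.
    bwd = [1.0, 1.0]
    for k in range(2, i, -1):
        bwd = [sum(bwd[b] * unary[k][b] * pair[a][b] for b in range(2))
               for a in range(2)]
    # Belief at node i: product of incoming messages and the local potential.
    belief = [fwd[x] * unary[i][x] * bwd[x] for x in range(2)]
    z = sum(belief)
    return [b / z for b in belief]

def brute_marginal(i):
    p = [0.0, 0.0]
    for x in itertools.product(range(2), repeat=3):
        w = (unary[0][x[0]] * unary[1][x[1]] * unary[2][x[2]]
             * pair[x[0]][x[1]] * pair[x[1]][x[2]])
        p[x[i]] += w
    z = sum(p)
    return [v / z for v in p]

for i in range(3):
    print(bp_marginal(i), brute_marginal(i))
```

On loopy graphs the same message updates are no longer exact; they correspond to stationary points of the Bethe free energy rather than the true log partition function.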
Section Draft: Belief Propagation, Mean-field, and Bethe
Abstract
This chapter describes methods for estimating the marginals and maximum a posteriori (MAP) estimates of probability distributions defined over graphs by approximate methods including Mean Field Theory (MFT), variational methods, and belief propagation. These methods typically formulate this problem in terms of minimizing a free energy function of pseudomarginals. They differ by the design of the free energy and the choice of algorithm to minimize it. These algorithms can often be interpreted in terms of message passing. In many cases, the free energy has a dual formulation and the algorithms are defined over the dual variables (e.g., the messages in belief propagation). The quality of performance depends on the types of free energies used – specifically how well they approximate the log partition function of the probability distribution – and whether there are suitable algorithms for finding their minima. We start in section (II) by introducing two types of Markov Field models that are often used in computer vision. We proceed …
Lecture 12: Binocular Stereo and Belief Propagation
, 2012
Abstract
Binocular Stereo is the process of estimating three-dimensional shape (stereo) from two eyes (binocular) or two cameras. Usually it is just called stereo. It requires knowing the camera parameters (e.g., focal length, direction of gaze), as discussed in an earlier lecture, and solving the correspondence problem – matching pixels between the left and right images. After correspondence has been solved, depth can be estimated by …
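The depth-recovery step the abstract trails off on is triangulation: for rectified cameras with focal length f (in pixels) and baseline B, a disparity of d pixels gives depth Z = fB/d. A minimal sketch with hypothetical camera numbers (standard stereo geometry, not the lecture's own code):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Rectified pin-hole stereo: Z = f * B / d.
    # Larger disparity means the point is closer to the cameras.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A nearby point produces larger disparity than a distant one.
print(depth_from_disparity(700.0, 0.12, 35.0))  # 2.4 m
print(depth_from_disparity(700.0, 0.12, 7.0))   # 12.0 m
```

Note that depth resolution degrades quadratically with distance, since a fixed one-pixel disparity error corresponds to an ever larger depth interval as d shrinks.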