Results 1 – 8 of 8
Markov Random Field Modeling, Inference & Learning in Computer Vision & Image Understanding: A Survey
, 2013
Stochastic belief propagation: A low-complexity alternative to the sum-product algorithm
 Computing Research Repository (CoRR)
, 2011
Abstract

Cited by 10 (5 self)
The belief propagation (BP) or sum-product algorithm is a widely used message-passing method for computing marginal distributions in graphical models. At the core of the BP message updates, when applied to a graphical model involving discrete variables with pairwise interactions, lies a matrix-vector product with complexity that is quadratic in the state dimension d, and each node must transmit a (d − 1)-dimensional vector of real numbers (messages) to its neighbors. Since various applications involve very large state dimensions, such computation and communication costs can be prohibitive. In this paper, we propose a low-complexity variant of BP, referred to as stochastic belief propagation (SBP). As suggested by the name, it is an adaptively randomized version of the BP message updates in which each node passes randomly chosen information to each of its neighbors. The SBP message updates reduce the computational complexity (per iteration) from quadratic to linear in d, without assuming any particular structure of the potentials, and also reduce the communication complexity significantly, requiring only log₂ d bits of transmission per edge. Moreover, we establish a number of theoretical guarantees for the performance of SBP, showing that it converges almost surely to the BP fixed point for any tree-structured graph, and for any graph with cycles satisfying a contractivity condition. In addition, for these graphical models, we provide non-asymptotic upper bounds on the convergence rate, showing that the ℓ∞ norm of the error vector decays no slower than O(1/√t) with the number of iterations t on trees, and that the normalized mean-squared error decays as O(1/t) for general graphs. This analysis, also supported by experimental results, shows that SBP can provably yield reductions in computational and communication complexities for various classes of graphical models.
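The quadratic-to-linear reduction described in this abstract can be illustrated with a toy sketch. The estimator below is only the simplest unbiased single-sample analogue of the idea, not the paper's actual SBP update rule; all names are illustrative.

```python
import random

def bp_update(psi, h):
    """Exact BP message update m(x_v) = sum_{x_u} psi[x_u][x_v] * h[x_u]:
    a matrix-vector product, O(d^2) in the state dimension d."""
    d = len(h)
    return [sum(psi[i][j] * h[i] for i in range(d)) for j in range(d)]

def sbp_like_update(psi, h, rng):
    """Single stochastic estimate of the same update, O(d) per draw:
    sample one state i with probability h[i] / sum(h) and pass only the
    row psi[i], rescaled by sum(h) so the estimate is unbiased."""
    total = sum(h)
    r = rng.random() * total
    i, acc = len(h) - 1, 0.0
    for k, hk in enumerate(h):
        acc += hk
        if r <= acc:
            i = k
            break
    return [total * p for p in psi[i]]
```

Averaging many draws of `sbp_like_update` recovers `bp_update` in expectation; the paper folds such estimates into a Robbins–Monro step-size scheme and proves almost-sure convergence to the BP fixed point.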
Fast b-matching via sufficient selection belief propagation
 In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
, 2011
Abstract

Cited by 10 (3 self)
This article describes scalability enhancements to a previously established belief propagation algorithm that solves bipartite maximum weight b-matching. The previous algorithm required O(|V| + |E|) space and O(|V||E|) time, whereas we apply improvements to reduce the space to O(|V|) and provide running-time speedups that reduce empirical running time to approximately O(|V|^2.5). The space improvement is most significant in cases where edge weights are determined by a function of node descriptors, such as a distance or kernel function. In practice, we demonstrate maximum weight b-matchings to be solvable on graphs with hundreds of millions of edges in only a few hours of compute time on a modern personal computer without parallelization, while previously known algorithms would not have scaled as well due to the combination of memory and time requirements.
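The space saving mentioned above can be made concrete with a minimal sketch (the kernel choice and names are illustrative, not from the paper): instead of materializing an |E|-sized table of edge weights, one stores only the O(|V|) node descriptors and evaluates any weight on demand.

```python
def edge_weight(xi, xj):
    # Edge weight as a function of two node descriptors; here negative
    # squared Euclidean distance, an illustrative kernel choice.
    return -sum((a - b) ** 2 for a, b in zip(xi, xj))

# O(|V|) storage: keep descriptors only; any of the O(|V|^2) weights
# can be recomputed lazily whenever the matching algorithm asks for it.
descriptors = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
w01 = edge_weight(descriptors[0], descriptors[1])
```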
Exploiting Data-Independence for Fast Belief-Propagation
Abstract

Cited by 7 (0 self)
Maximum a posteriori (MAP) inference in graphical models requires that we maximize the sum of two terms: a data-dependent term, encoding the conditional likelihood of a certain labeling given an observation, and a data-independent term, encoding some prior on labelings. Often, data-dependent factors contain fewer latent variables than data-independent factors; for instance, many grid- and tree-structured models contain only first-order conditionals despite having pairwise priors. In this paper, we note that MAP inference in such models can be made substantially faster by appropriately preprocessing their data-independent terms. Our main result is to show that message passing in any such pairwise model has an expected-case exponent of only 1.5 on the number of states per node, leading to significant improvements over existing quadratic-time solutions.
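The kind of preprocessing the abstract alludes to can be sketched as follows. This is a hedged illustration rather than the paper's algorithm: sorting each column of the data-independent prior once (per model, not per observation) lets a max-product message computation stop early when no remaining source state can beat the current best.

```python
def precompute_orders(prior):
    """Data-independent step, done once per model: for each target state j,
    sort source states by prior[i][j] in decreasing order."""
    d = len(prior)
    return [sorted(range(d), key=lambda i: prior[i][j], reverse=True)
            for j in range(d)]

def fast_max_message(prior, unary, orders):
    """Max-product message m[j] = max_i (prior[i][j] + unary[i]) with early
    termination; worst case is still O(d^2) but typical cost is lower."""
    d = len(unary)
    umax = max(unary)
    msg = []
    for j in range(d):
        best = float("-inf")
        for i in orders[j]:
            if prior[i][j] + umax <= best:
                break  # later i have smaller prior[i][j]: cannot improve
            best = max(best, prior[i][j] + unary[i])
        msg.append(best)
    return msg
```

The break is sound because the sorted order guarantees every remaining candidate scores at most `prior[i][j] + umax`.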
A Fast and Exact Energy Minimization Algorithm for Cycle MRFs
Abstract

Cited by 2 (1 self)
The presence of cycles gives rise to difficulty in performing inference for MRFs. Handling cycles efficiently would greatly enhance our ability to tackle general MRFs. In particular, for dual decomposition of energy minimization (MAP inference), using cycle subproblems leads to a much tighter relaxation than using trees, but solving the cycle subproblems turns out to be the bottleneck. In this paper, we present a fast and exact algorithm for energy minimization in cycle MRFs, which can be used as a subroutine in tackling general MRFs. Our method builds on junction-tree message passing, with a large portion of the message entries pruned for efficiency. The pruning conditions fully exploit the structure of a cycle. Experimental results show that our algorithm is more than an order of magnitude faster than other state-of-the-art fast inference methods, and it performs consistently well in several different real problems. Code for the presented algorithm is available at
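The pruned junction-tree scheme itself is more involved, but the classical exact baseline for cycles that such methods improve on is easy to sketch (names are illustrative): condition on one node, run chain dynamic programming for each of its d states, and keep the best, for O(n d³) total instead of the exponential cost of brute force.

```python
def cycle_map_energy(pair, d, n):
    """Minimum energy of a cycle MRF with n nodes and d states, where
    pair[k][a][b] is the energy on edge (k, (k+1) % n); unary terms can be
    absorbed into the pairwise tables. Conditions on node 0: for each of
    its d states, a chain DP over x_1..x_{n-1} costs O(n d^2)."""
    best = float("inf")
    for v in range(d):
        # cost[a]: min energy of x_1..x_k with x_k = a, given x_0 = v
        cost = [pair[0][v][a] for a in range(d)]
        for k in range(1, n - 1):
            cost = [min(cost[a] + pair[k][a][b] for a in range(d))
                    for b in range(d)]
        best = min(best, min(cost[a] + pair[n - 1][a][v] for a in range(d)))
    return best
```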
MAP Inference in Chains using Column Generation
Abstract

Cited by 2 (0 self)
Linear chains and trees are basic building blocks in many applications of graphical models, and they admit simple exact maximum a posteriori (MAP) inference algorithms based on message passing. However, in many cases this computation is prohibitively expensive, due to quadratic dependence on variables' domain sizes. The standard algorithms are inefficient because they compute scores for hypotheses for which there is strong negative local evidence. For this reason there has been significant previous interest in beam search and its variants; however, these methods provide only approximate results. This paper presents new exact inference algorithms based on the combination of column generation and precomputed bounds on terms of the model's scoring function. While we do not improve worst-case performance, our method substantially speeds real-world, typical-case inference in chains and trees. Experiments show our method to be twice as fast as exact Viterbi for Wall Street Journal part-of-speech tagging and over thirteen times faster for a joint part-of-speech and named-entity-recognition task. Our algorithm also extends to new techniques for approximate inference, to faster 0/1 loss oracles, and to new opportunities for connections between inference and learning. We encourage further exploration of high-level reasoning about the optimization problem implicit in dynamic programs.
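For reference, the exact baseline being accelerated is standard max-sum Viterbi; the inner maximization over predecessors is the step that is quadratic in the domain size d. The sketch below is the generic algorithm, not the paper's column-generation method.

```python
def viterbi(unary, trans):
    """Exact MAP labeling of a chain: unary[t][s] scores state s at
    position t, trans[s2][s] scores the transition s2 -> s. O(n d^2):
    the max over predecessors costs O(d) for each of the n*d cells."""
    n, d = len(unary), len(unary[0])
    score = list(unary[0])
    back = []
    for t in range(1, n):
        prev, score, bp = score, [], []
        for s in range(d):
            val, arg = max((prev[s2] + trans[s2][s], s2) for s2 in range(d))
            score.append(val + unary[t][s])
            bp.append(arg)
        back.append(bp)
    s = max(range(d), key=lambda x: score[x])
    path = [s]
    for bp in reversed(back):
        s = bp[s]
        path.append(s)
    return path[::-1]
```

Roughly speaking, the column-generation approach avoids touching most (s2, s) pairs in the inner max by maintaining bounds on the scoring-function terms, while still certifying the exact optimum.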
Joint 3D Object and Layout Inference from a single RGB-D Image
Abstract
Inferring 3D objects and the layout of indoor scenes from a single RGB-D image captured with a Kinect camera is a challenging task. Towards this goal, we propose a high-order graphical model and jointly reason about the layout, objects and superpixels in the image. In contrast to existing holistic approaches, our model leverages detailed 3D geometry using inverse graphics and explicitly enforces occlusion and visibility constraints for respecting scene properties and projective geometry. We cast the task as MAP inference in a factor graph and solve it efficiently using message passing. We evaluate our method with respect to several baselines on the challenging NYUv2 indoor dataset using 21 object categories. Our experiments demonstrate that the proposed method is able to infer scenes with a large degree of clutter and occlusions.