CiteSeerX
Results 1 - 10 of 257

Learning Real-Time MRF Inference for Image Denoising

by Adrian Barbu - In IEEE Conference on Computer Vision and Pattern Recognition, 2009
"... Many computer vision problems can be formulated in a Bayesian framework with Markov Random Field (MRF) or Conditional Random Field (CRF) priors. Usually, the model assumes that a full Maximum A Posteriori (MAP) estimation will be performed for inference, which can be really slow in practice. In this ..."
Cited by 9 (1 self)
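
For context, the MAP estimation the snippet refers to can be written in standard notation (not taken from the paper itself) as

\hat{x}_{\mathrm{MAP}} = \arg\max_x p(x \mid y) = \arg\max_x p(y \mid x)\, p(x), \qquad p(x) \propto \exp\Big(-\sum_{c} \psi_c(x_c)\Big),

where y is the observation (e.g. a noisy image), x the sought labeling, and \psi_c the MRF/CRF clique potentials; the arg max ranges over every possible labeling, which is why full MAP inference is slow in practice.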

Approximate MRF Inference Using Bounded Treewidth Subgraphs

by Alexander Fix, Joyce Chen, Endre Boros, Ramin Zabih
"... Graph cut algorithms [9], commonly used in computer vision, solve a first-order MRF over binary variables. The state of the art for this NP-hard problem is QPBO [1, 2], which finds the values for a subset of the variables in the global minimum. While QPBO is very effective overall there are still ..."
Cited by 3 (0 self)
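
To make the first sentence concrete, below is a minimal sketch, not taken from the paper, of the standard s-t min-cut construction for a binary first-order MRF with nonnegative Potts pairwise terms, i.e. the submodular case that plain graph cuts solve exactly (QPBO extends this to non-submodular energies). The potentials are random placeholders and networkx's minimum_cut does the work.

# Illustrative only: random potentials, Potts pairwise terms, labels in {0, 1}.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
n = 8                                          # variables x_0 .. x_7
unary = rng.uniform(0, 1, size=(n, 2))         # unary[i, L] = cost of x_i = L
edges = [(i, i + 1) for i in range(n - 1)]
w = {e: rng.uniform(0, 1) for e in edges}      # Potts weight, paid when labels differ

# Convention: nodes on the source side take label 0, sink side label 1.
G = nx.DiGraph()
for i in range(n):
    G.add_edge("s", i, capacity=unary[i, 1])   # cut iff x_i = 1 -> pays unary(i, 1)
    G.add_edge(i, "t", capacity=unary[i, 0])   # cut iff x_i = 0 -> pays unary(i, 0)
for (i, j), wij in w.items():
    G.add_edge(i, j, capacity=wij)             # cut iff x_i = 0 and x_j = 1
    G.add_edge(j, i, capacity=wij)             # cut iff x_j = 0 and x_i = 1

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
labels = [0 if i in source_side else 1 for i in range(n)]
print("MAP labels:", labels, " energy =", cut_value)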

Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference

by Peng Wang, Chunhua Shen, Anton Van Den Hengel, Philip Torr
"... We propose a new Branch-and-Cut (B&C) method for solving general MAP-MRF inference problems. The core of our method is a very efficient bounding procedure, which combines scalable semidefinite programming (SDP) and a cutting-plane method for seeking violated constraints. We analyze the performan ..."
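
As a hedged illustration of the SDP bounding step only, with no branching and no cutting planes, and not the authors' formulation: the classic semidefinite relaxation of a pairwise binary MRF written in {-1, +1} variables can be posed in a few lines of cvxpy. Unary terms are omitted for brevity (they can be absorbed with one extra variable), and the couplings W are random placeholders.

import itertools
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 6
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                       # symmetric couplings; objective is x^T W x
np.fill_diagonal(W, 0)

# Exact maximum over x in {-1, +1}^n, brute-forced for reference.
exact = max(np.array(x) @ W @ np.array(x)
            for x in itertools.product([-1, 1], repeat=n))

# SDP relaxation: X stands in for x x^T; the rank-1 constraint is relaxed to
# "X is positive semidefinite with unit diagonal".
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(W @ X)), [X >> 0, cp.diag(X) == 1])
prob.solve(solver=cp.SCS)
print("SDP upper bound:", prob.value)
print("exact maximum  :", exact)

Since every labeling x gives a feasible X = x x^T, the SDP optimum always upper-bounds the exact maximum, which is the property a branch-and-cut method needs from its bounding procedure.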

Efficient and Exact MAP-MRF Inference using Branch and Bound

by Min Sun, Murali Telaprolu, Honglak Lee, Silvio Savarese
"... We propose two novel Branch-and-Bound (BB) methods to efficiently solve exact MAP-MRF inference on problems with a large number of states (per variable) H. By organizing the data in a suitable structure, the time complexity of our best method for evaluating the bound at each branch is reduced from O ..."
Cited by 9 (3 self)
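
The sketch below, which is not the paper's algorithm and does not use its O(H) bound evaluation, shows the generic branch-and-bound mechanics the snippet builds on: branch on one variable at a time and prune any partial assignment whose lower bound already matches or exceeds the best complete solution found so far. All potentials are random, nonnegative placeholders; nonnegativity is what makes the simple bound (which drops pairwise terms between unassigned variables) valid.

import itertools
import numpy as np

rng = np.random.default_rng(0)
n, H = 6, 4                                   # 6 variables, 4 states each
unary = rng.uniform(0, 1, size=(n, H))
edges = [(i, i + 1) for i in range(n - 1)] + [(0, n - 1)]   # a loopy graph
pairwise = {e: rng.uniform(0, 1, size=(H, H)) for e in edges}

def energy(x):
    return (sum(unary[i, x[i]] for i in range(n))
            + sum(pairwise[(i, j)][x[i], x[j]] for (i, j) in edges))

def lower_bound(partial):
    """Lower bound on the best completion of `partial` (dict: variable -> state)."""
    lb = sum(unary[i, s] for i, s in partial.items())
    lb += sum(pairwise[(i, j)][partial[i], partial[j]]
              for (i, j) in edges if i in partial and j in partial)
    for i in range(n):
        if i in partial:
            continue
        # Best state for i, counting only its unary term and edges to assigned variables.
        cost = unary[i].copy()
        for (a, b) in edges:
            if a == i and b in partial:
                cost += pairwise[(a, b)][:, partial[b]]
            elif b == i and a in partial:
                cost += pairwise[(a, b)][partial[a], :]
        lb += cost.min()
    return lb

best_x, best_val = None, np.inf

def branch(partial):
    global best_x, best_val
    if lower_bound(partial) >= best_val:      # prune: cannot beat the incumbent
        return
    if len(partial) == n:
        best_x = tuple(partial[i] for i in range(n))
        best_val = energy(best_x)
        return
    i = len(partial)                          # branch on the next variable in order
    for s in np.argsort(unary[i]):            # try cheap unary states first
        partial[i] = int(s)
        branch(partial)
        del partial[i]

branch({})
brute = min(energy(x) for x in itertools.product(range(H), repeat=n))
print("B&B MAP:", best_x, "energy =", best_val, " (brute force:", brute, ")")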

Beyond Trees: MRF Inference via Outer-Planar Decomposition

by Dhruv Batra, A. C. Gallagher, Devi Parikh, Tsuhan Chen, 2010
"... Maximum a posteriori (MAP) inference in Markov Random Fields (MRFs) is an NP-hard problem, and thus research has focussed on either finding efficiently solvable subclasses (e.g. trees), or approximate algorithms (e.g. Loopy Belief Propagation (BP) and Tree-reweighted (TRW) methods). This paper prese ..."
Cited by 17 (1 self)
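
For the "efficiently solvable subclasses (e.g. trees)" part of the snippet, here is a minimal min-sum (max-product in log space) dynamic program on a chain, which computes the exact MAP in O(n H^2) time; the potentials are random placeholders, not from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, H = 5, 3                                   # 5 variables, 3 states each
unary = rng.uniform(size=(n, H))
pair = rng.uniform(size=(n - 1, H, H))        # pair[i] couples x_i and x_{i+1}

# Forward pass: m[i, s] = minimum energy of x_0 .. x_i with x_i = s.
m = np.zeros((n, H))
back = np.zeros((n, H), dtype=int)
m[0] = unary[0]
for i in range(1, n):
    cost = m[i - 1][:, None] + pair[i - 1]    # cost[s_prev, s]
    back[i] = cost.argmin(axis=0)
    m[i] = cost.min(axis=0) + unary[i]

# Backward pass: recover the arg min assignment.
x = np.zeros(n, dtype=int)
x[-1] = int(m[-1].argmin())
for i in range(n - 1, 0, -1):
    x[i - 1] = back[i, x[i]]
print("MAP assignment:", x, " energy =", m[-1].min())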

Discrete MRF Inference of Marginal Densities for Non-uniformly Discretized Variable Space

by Masaki Saito, Takayuki Okatani, Koichiro Deguchi
"... This paper is concerned with the inference of marginal densities based on MRF models. The optimization algo-rithms for continuous variables are only applicable to a lim-ited number of problems, whereas those for discrete vari-ables are versatile. Thus, it is quite common to convert the continuous va ..."
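
As a tiny, purely illustrative example of what a non-uniformly discretized variable space can look like (this is not the paper's scheme): quantile-based bin edges put more states where a skewed continuous variable actually carries probability mass, unlike uniform bins.

import numpy as np

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)      # a skewed density

uniform_edges = np.linspace(samples.min(), samples.max(), 17)  # 16 uniform bins
quantile_edges = np.quantile(samples, np.linspace(0, 1, 17))   # 16 non-uniform bins

uniform_counts, _ = np.histogram(samples, bins=uniform_edges)
quantile_counts, _ = np.histogram(samples, bins=quantile_edges)
print("uniform bins  :", uniform_counts)       # most mass lands in a few bins
print("quantile bins :", quantile_counts)      # roughly equal mass per state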

MRF Inference by k-Fan Decomposition and Tight Lagrangian Relaxation

by Stefan Schmidt, et al.
"... We present a novel dual decomposition approach to MAP inference with highly connected discrete graphical models. Decompositions into cyclic k-fan structured subproblems are shown to significantly tighten the Lagrangian relaxation relative to the standard local polytope relaxation, while enabling ef ..."
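
Below is a hedged sketch of the dual (Lagrangian) decomposition mechanics the snippet describes, on a made-up triangle MRF split into a chain subproblem (edges 0-1, 1-2) and a single-edge subproblem (edge 0-2). The paper's k-fan subproblems are richer and its relaxation tighter, but the bookkeeping is the same: each subproblem is solved independently (brute-forced here for clarity; a real implementation would use dynamic programming), their sum lower-bounds the true MAP energy for any multipliers, and projected subgradient steps push the copies of the shared variables toward agreement.

import itertools
import numpy as np

rng = np.random.default_rng(0)
H = 3
unary = rng.uniform(size=(3, H))
theta01, theta12, theta02 = (rng.uniform(size=(H, H)) for _ in range(3))

def full_energy(x):
    return (unary[0, x[0]] + unary[1, x[1]] + unary[2, x[2]]
            + theta01[x[0], x[1]] + theta12[x[1], x[2]] + theta02[x[0], x[2]])

exact = min(full_energy(x) for x in itertools.product(range(H), repeat=3))

lam0 = np.zeros(H)        # multipliers on the two copies of x_0
lam2 = np.zeros(H)        # multipliers on the two copies of x_2
best_bound = -np.inf
for t in range(200):
    # Subproblem A: chain 0-1-2 (keeps all unaries) plus +lambda terms.
    xA = min(itertools.product(range(H), repeat=3),
             key=lambda x: (unary[0, x[0]] + unary[1, x[1]] + unary[2, x[2]]
                            + theta01[x[0], x[1]] + theta12[x[1], x[2]]
                            + lam0[x[0]] + lam2[x[2]]))
    # Subproblem B: single edge 0-2 with the opposite-signed multipliers.
    xB = min(itertools.product(range(H), repeat=2),
             key=lambda x: theta02[x[0], x[1]] - lam0[x[0]] - lam2[x[1]])

    valA = (unary[0, xA[0]] + unary[1, xA[1]] + unary[2, xA[2]]
            + theta01[xA[0], xA[1]] + theta12[xA[1], xA[2]]
            + lam0[xA[0]] + lam2[xA[2]])
    valB = theta02[xB[0], xB[1]] - lam0[xB[0]] - lam2[xB[1]]
    best_bound = max(best_bound, valA + valB)       # valid lower bound for any lambda

    # Subgradient ascent on the dual: penalize the states the copies currently pick.
    step = 1.0 / (1 + t)
    lam0 += step * ((np.arange(H) == xA[0]).astype(float)
                    - (np.arange(H) == xB[0]).astype(float))
    lam2 += step * ((np.arange(H) == xA[2]).astype(float)
                    - (np.arange(H) == xB[1]).astype(float))

print("best dual lower bound:", round(best_bound, 4), " exact MAP energy:", round(exact, 4))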

Supplementary Material: Efficient and Exact MAP-MRF Inference using Branch and Bound

by Min Sun, Murali Telaprolu, Honglak Lee, Silvio Savarese
"... By default, the edge-consistent LPR is solved using Message Passing (MP) algorithm to initialize β until convergence 1 or for at most 1000 iterations, whichever comes first. If the gap between the upper and lower bounds is not smaller than 10 −4 (stopping criteria) already, we further apply our BB m ..."
Abstract - Add to MetaCart
"... By default, the edge-consistent LPR is solved using the Message Passing (MP) algorithm to initialize β, until convergence or for at most 1000 iterations, whichever comes first. If the gap between the upper and lower bounds is not already smaller than 10^-4 (the stopping criterion), we further apply our BB method or the MPLP-CP method [4]. Both methods stop when the same stopping criterion (gap < 10^-4) is reached. For the MPLP-CP method [4], by default, we alternate between adding 20 clusters at a time and running MPLP for 100 more iterations. In the human pose estimation experiment, since the problems can be solved most of the time without cluster pursuit, we allow the MP algorithm to try harder to solve the edge-consistent LPR. We follow the suggestions ..."
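
Read procedurally, the snippet describes a two-phase schedule. The skeleton below is only an illustrative paraphrase of that schedule, not the authors' code: the numbers (gap tolerance 1e-4, at most 1000 MP iterations, 20 clusters per round, 100 extra MPLP iterations) come from the text, while mp_update, add_clusters, and bounds are hypothetical stand-ins for the corresponding solver steps.

def solve_schedule(problem, mp_update, add_clusters, bounds,
                   tol=1e-4, mp_iters=1000,
                   clusters_per_round=20, mplp_iters=100, max_rounds=50):
    """Illustrative control loop: message passing first, cluster pursuit second."""
    # Phase 1: message passing on the edge-consistent LP relaxation.
    for _ in range(mp_iters):
        mp_update(problem)
        upper, lower = bounds(problem)
        if upper - lower < tol:                 # stopping criterion from the snippet
            return upper, lower
    # Phase 2 (MPLP-CP style): add 20 clusters at a time, then 100 more iterations.
    for _ in range(max_rounds):
        add_clusters(problem, clusters_per_round)
        for _ in range(mplp_iters):
            mp_update(problem)
        upper, lower = bounds(problem)
        if upper - lower < tol:
            return upper, lower
    return upper, lower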

Technical Report: Efficient and Exact MAP-MRF Inference using Branch and Bound

by Min Sun, Murali Telaprolu, Honglak Lee, Silvio Savarese
"... By default, the edge-consistent LPR is solved using Message Passing (MP) algorithm to initialize β until convergence 1 or for at most 1000 iterations, whichever comes first. If the gap between the upper and lower bounds is not smaller than 10 −4 (stopping criteria) already, we further apply our BB m ..."

Tighter Relaxations for MAP-MRF Inference: A Local Primal-Dual Gap based Separation Algorithm

by Dhruv Batra, Sebastian Nowozin, Pushmeet Kohli
"... We propose an efficient and adaptive method for MAP-MRF inference that provides increasingly tighter upper and lower bounds on the optimal objective. Similar to Sontag et al. (2008b), our method starts by solving the first-order LOCAL(G) linear programming relaxation. This is followed by an adaptive ..."
Cited by 18 (1 self)
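
To see why one would want something tighter than the first-order LOCAL(G) relaxation the snippet starts from, here is a small hand-built example, not the paper's code: on a frustrated binary triangle the LOCAL(G) linear program returns a fractional solution with objective 0, while every integral labeling has energy at least 1. scipy's linprog solves the LP.

import itertools
import numpy as np
from scipy.optimize import linprog

nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
H = 2
theta_u = {i: np.zeros(H) for i in nodes}                         # no unary terms
theta_p = {e: np.array([[1.0, 0.0], [0.0, 1.0]]) for e in edges}  # penalize agreement

# Variable layout: node pseudomarginals mu_i(s), then edge pseudomarginals mu_ij(s, t).
idx, k = {}, 0
for i in nodes:
    for s in range(H):
        idx[("n", i, s)] = k; k += 1
for e in edges:
    for s in range(H):
        for t in range(H):
            idx[("e", e, s, t)] = k; k += 1

c = np.zeros(k)
for i in nodes:
    for s in range(H):
        c[idx[("n", i, s)]] = theta_u[i][s]
for e in edges:
    for s in range(H):
        for t in range(H):
            c[idx[("e", e, s, t)]] = theta_p[e][s, t]

A_eq, b_eq = [], []
for i in nodes:                                  # normalization: sum_s mu_i(s) = 1
    row = np.zeros(k)
    for s in range(H):
        row[idx[("n", i, s)]] = 1.0
    A_eq.append(row); b_eq.append(1.0)
for (i, j) in edges:                             # edge-to-node marginalization
    for s in range(H):                           # sum_t mu_ij(s, t) = mu_i(s)
        row = np.zeros(k)
        for t in range(H):
            row[idx[("e", (i, j), s, t)]] = 1.0
        row[idx[("n", i, s)]] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    for t in range(H):                           # sum_s mu_ij(s, t) = mu_j(t)
        row = np.zeros(k)
        for s in range(H):
            row[idx[("e", (i, j), s, t)]] = 1.0
        row[idx[("n", j, t)]] = -1.0
        A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None), method="highs")
true_map = min(sum(theta_p[(i, j)][x[i], x[j]] for (i, j) in edges)
               for x in itertools.product(range(H), repeat=3))
print("LOCAL(G) lower bound:", round(res.fun, 4))   # 0.0: fractional, loose
print("true MAP energy     :", true_map)            # 1.0: one edge must agree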