Results 1 - 10 of 708
Tightening LP Relaxations for MAP using Message Passing, 2008
"... Linear Programming (LP) relaxations have become powerful tools for finding the most probable (MAP) configuration in graphical models. These relaxations can be solved efficiently using message-passing algorithms such as belief propagation and, when the relaxation is tight, provably find the MAP confi ..."
Cited by 112 (18 self)
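Several results on this page concern the standard pairwise (local) LP relaxation of MAP inference that such papers tighten. As a minimal sketch of that relaxation only (not any specific paper's method), the following solves it with SciPy on an invented two-variable binary model; on a tree like this the relaxation is tight, so the LP optimum equals the MAP value:

```python
from scipy.optimize import linprog

# Hypothetical 2-variable binary MRF: node potentials theta_i and an
# edge potential that rewards agreement (values invented for illustration).
theta1 = [0.0, 1.0]
theta2 = [0.0, 1.0]
theta12 = [2.0, 0.0, 0.0, 2.0]  # theta_12(x1, x2), row-major

# Variable order:
# mu1(0), mu1(1), mu2(0), mu2(1), mu12(0,0), mu12(0,1), mu12(1,0), mu12(1,1)
c = [-t for t in theta1 + theta2 + theta12]  # linprog minimizes, so negate

A_eq = [
    [1, 1, 0, 0, 0, 0, 0, 0],   # mu1 sums to 1
    [0, 0, 1, 1, 0, 0, 0, 0],   # mu2 sums to 1
    [-1, 0, 0, 0, 1, 1, 0, 0],  # sum_x2 mu12(0, x2) = mu1(0)
    [0, -1, 0, 0, 0, 0, 1, 1],  # sum_x2 mu12(1, x2) = mu1(1)
    [0, 0, -1, 0, 1, 0, 1, 0],  # sum_x1 mu12(x1, 0) = mu2(0)
    [0, 0, 0, -1, 0, 1, 0, 1],  # sum_x1 mu12(x1, 1) = mu2(1)
]
b_eq = [1, 1, 0, 0, 0, 0]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
map_upper_bound = -res.fun  # always an upper bound on MAP; tight on this tree
print(map_upper_bound)
```

Here the optimum is integral (x1 = x2 = 1, value 4); the papers above study cases, such as models with cycles, where these local constraints alone leave a fractional optimum.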
Hypergraphic LP relaxations for Steiner trees. In Proc. 14th IPCO, 2010
"... Abstract We investigate hypergraphic LP relaxations for the Steiner tree problem, primarily the partition LP relaxation introduced by Könemann et al. [Math. Programming, 2009]. Specifically, we are interested in proving upper bounds on the integrality gap of this LP, and studying its relation to ot ..."
Cited by 7 (3 self)
Clusters and Coarse Partitions in LP Relaxations
"... We propose a new class of consistency constraints for Linear Programming (LP) relaxations for finding the most probable (MAP) configuration in graphical models. Usual cluster-based LP relaxations enforce joint consistency of the beliefs of a cluster of variables, with computational cost increasing e ..."
Cited by 10 (2 self)
An Alternating Direction Method for Dual MAP LP Relaxation
"... Maximum a-posteriori (MAP) estimation is an important task in many applications of probabilistic graphical models. Although finding an exact solution is generally intractable, approximations based on linear programming (LP) relaxation often provide good approximate solutions. In this paper we prese ..."
Cited by 32 (2 self)
LP relaxations and pruning for characteristic imsets, 2012
"... The geometric approach to learning a Bayesian network (BN) structure is based on the idea to represent every BN structure by a certain vector. Suitable such a zero-one vector representative is the characteristic imset, introduced in [20]. This concept allows one to re-formulate the task of finding t ..."
"... the global maximum of a score over BN structures as an integer linear programming (ILP) problem. In this research report, extensions of characteristic imsets are considered which additionally encode chain graphs without flags equivalent to acyclic directed graphs. The main contribution is the LP relaxation ..."
Approximate Inference in Graphical Models using LP Relaxations, 2010
"... Graphical models such as Markov random fields have been successfully applied to a wide variety of fields, from computer vision and natural language processing, to computational biology. Exact probabilistic inference is generally intractable in complex models having many dependencies between the vari ..."
Cited by 27 (1 self)
"... the variables. We present new approaches to approximate inference based on linear programming (LP) relaxations. Our algorithms optimize over the cycle relaxation of the marginal polytope, which we show to be closely related to the first lifting of the Sherali-Adams hierarchy, and is significantly tighter than ..."
On LP Relaxations for the Pattern Minimization Problem
"... Abstract. We discuss two formulations of the Pattern Minimization Problem: (1) introduced by Vanderbeck, and (2) obtained adding setup variables to the cutting stock formulation by Gilmore-Gomory. Let zLP i (u) be the bound given by the linear relaxation of (i) under a given vector u = (uk) of param ..."
Dual Decomposition and LP Relaxation
"... We consider a linear programming relaxation of the MAP-inference problem. Its dual can be treated as an unconstrained, concave but non-smooth one. We utilize smoothing and coordinate descent algorithm (smoothed TRW-S) to deal with the smoothed problem. We propose a diminishing smoothing scheme to ad ..."
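As a toy illustration of the dual-decomposition idea this entry builds on (the potentials, duplication scheme, and diminishing steps below are invented for the example; this is not the paper's smoothed TRW-S), a single variable's objective is split into two terms whose copies are reconciled by subgradient steps on a multiplier:

```python
# Toy dual decomposition for max_x f(x) + g(x), x in {0, 1}. The variable is
# duplicated across the two terms, and a multiplier lam enforces agreement.
# The dual L(lam) upper-bounds the MAP value and is non-smooth but concave
# in -lam, so we minimize it by subgradient descent with diminishing steps.
f = [0.0, 1.0]  # invented potentials
g = [2.0, 0.0]
map_value = max(f[x] + g[x] for x in (0, 1))  # exact MAP, for comparison

lam = [0.0, 0.0]
dual = None
for k in range(50):
    x1 = max((0, 1), key=lambda x: f[x] + lam[x])  # subproblem 1
    x2 = max((0, 1), key=lambda x: g[x] - lam[x])  # subproblem 2
    dual = f[x1] + lam[x1] + g[x2] - lam[x2]       # current upper bound
    if x1 == x2:           # copies agree: the bound is attained by a primal point
        break
    step = 1.0 / (k + 1)   # diminishing step size
    lam[x1] -= step        # subgradient of L at lam is e(x1) - e(x2)
    lam[x2] += step

print(dual, map_value)
```

On this tiny instance the copies agree after one multiplier update and the dual value matches the MAP value exactly; in general the subproblems may keep disagreeing, which is the regime the smoothing schemes in these papers address.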
Fixing Max-Product: Convergent Message Passing Algorithms for MAP LP-Relaxations
"... We present a novel message passing algorithm for approximating the MAP problem in graphical models. The algorithm is similar in structure to max-product but unlike max-product it always converges, and can be proven to find the exact MAP solution in various settings. The algorithm is derived via bloc ..."
Cited by 160 (14 self)
"... block coordinate descent in a dual of the LP relaxation of MAP, but does not require any tunable parameters such as step size or tree weights. We also describe a generalization of the method to cluster based potentials. The new method is tested on synthetic and real-world problems, and compares ..."
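For context on the max-product family this entry fixes, here is a minimal max-sum (log-space max-product) sketch on a chain, where plain message passing already provably recovers the exact MAP; the toy potentials are invented, and this is not the paper's convergent algorithm:

```python
from itertools import product

# Hypothetical chain MRF with 3 binary variables.
theta = [[0.0, 0.5], [0.2, 0.0], [0.0, 0.3]]  # theta[i][x_i]
theta_e = [[[1.0, 0.0], [0.0, 1.0]],          # theta_e[i][x_i][x_{i+1}]
           [[0.5, 0.0], [0.0, 0.5]]]

def map_value_chain(theta, theta_e):
    """Exact MAP value on a chain via a forward max-sum message pass."""
    msg = [0.0, 0.0]  # message into node 0 is trivially zero
    for i in range(len(theta_e)):
        msg = [max(msg[xi] + theta[i][xi] + theta_e[i][xi][xj] for xi in (0, 1))
               for xj in (0, 1)]
    last = len(theta) - 1
    return max(msg[x] + theta[last][x] for x in (0, 1))

def map_value_brute(theta, theta_e):
    """Brute-force MAP value over all assignments, for checking."""
    best = float("-inf")
    for xs in product((0, 1), repeat=len(theta)):
        v = sum(theta[i][xs[i]] for i in range(len(theta)))
        v += sum(theta_e[i][xs[i]][xs[i + 1]] for i in range(len(theta_e)))
        best = max(best, v)
    return best

print(map_value_chain(theta, theta_e))
```

On trees the two values coincide; the cited work is about making max-product-style updates converge and certify optimality on loopy graphs, where the naive scheme can fail.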