Results 1–6 of 6
Efficient MRF energy minimization via adaptive diminishing smoothing. In UAI, 2012
Abstract (Cited by 21, 6 self)
We consider the linear programming relaxation of an energy minimization problem for Markov Random Fields. The dual objective of this problem can be treated as a concave, unconstrained, but nonsmooth function. The idea of smoothing the objective prior to optimization was recently proposed in a series of papers. Some of them suggested decreasing the amount of smoothing (the so-called temperature) while getting closer to the optimum, but no theoretical substantiation was provided. We propose an adaptive diminishing-smoothing algorithm based on the duality gap between the relaxed primal and dual objectives, and demonstrate the efficiency of our approach with a smoothed version of the Sequential Tree-Reweighted Message Passing (TRW-S) algorithm. The strategy is applicable to other algorithms as well, avoids ad hoc tuning of the smoothing during iterations, and provably guarantees convergence to the optimum.
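The smoothing idea the abstract refers to can be illustrated with the standard log-sum-exp surrogate for a nonsmooth maximum: at temperature t it is smooth, and as t shrinks it approaches max. This is a minimal generic sketch, not the paper's TRW-S variant; the function name and example values are illustrative assumptions.

```python
import numpy as np

def smoothed_max(x, temperature):
    """Log-sum-exp smoothing of max(x).

    At temperature t > 0 this is a smooth, convex surrogate that
    overestimates max(x); as t -> 0 it approaches max(x). Diminishing
    t during optimization tightens the approximation, which is the
    kind of schedule the abstract's algorithm controls adaptively.
    """
    t = temperature
    m = np.max(x)  # subtract the max for numerical stability
    return m + t * np.log(np.sum(np.exp((x - m) / t)))

x = np.array([1.0, 3.0, 2.5])
for t in [1.0, 0.1, 0.01]:
    # The gap to max(x) = 3.0 shrinks as the temperature decreases.
    print(t, smoothed_max(x, t))
```

The stability shift by `m` matters in practice: without it, `exp(x / t)` overflows for small temperatures.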
Getting feasible variable estimates from infeasible ones: MRF local polytope study, 2012
Abstract (Cited by 3, 2 self)
This paper proposes a method for constructing approximate feasible primal solutions from infeasible ones for large-scale optimization problems possessing certain separability properties. Whereas infeasible primal estimates can typically be produced from (sub)gradients of the dual function, it is often not easy to project them onto the primal feasible set, since the projection itself has a complexity comparable to that of the initial problem. We propose an alternative efficient method for obtaining feasibility and show that the properties influencing its convergence to the optimum are similar to those of the Euclidean projection. We apply our method to the local polytope relaxation of inference problems for Markov Random Fields and discuss its advantages over existing methods. © 2013 IEEE.
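For context, the simplest way to force feasibility from inconsistent dual-derived estimates is per-node argmax rounding to an integral labeling. This is a hypothetical baseline for illustration only, not the paper's method (which constructs relaxed feasible solutions with projection-like guarantees); all names and numbers below are assumptions.

```python
import numpy as np

def round_to_feasible(unary_estimates):
    """Naive feasibility baseline: per-node argmax rounding.

    unary_estimates: one 1-D array per MRF node, holding possibly
    inconsistent pseudo-marginals obtained from dual (sub)gradients.
    Returns an integral labeling, which is trivially feasible for the
    local polytope (each node's indicator vector sums to one).
    """
    return [int(np.argmax(mu)) for mu in unary_estimates]

# Fractional, not-necessarily-consistent estimates: 3 nodes, 2 labels.
mus = [np.array([0.7, 0.3]), np.array([0.2, 0.8]), np.array([0.55, 0.45])]
print(round_to_feasible(mus))  # -> [0, 1, 0]
```

Such rounding discards all fractional information, which is exactly the weakness a more careful feasibility construction avoids.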
Infeasible Ones: MRF Local Polytope Study
"... This is a draft version of the author chapter. The MIT Press ..."
Lifted Tree-Reweighted Variational Inference. Hung Hai Bui, Natural Language Understanding Lab, Nuance Communications
Abstract
We analyze variational inference for highly symmetric graphical models such as those arising from first-order probabilistic models. We first show that for these graphical models, the tree-reweighted variational objective lends itself to a compact lifted formulation which can be solved much more efficiently than the standard TRW formulation for the ground graphical model. Compared to earlier work on lifted belief propagation, our formulation leads to a convex optimization problem for lifted marginal inference and provides an upper bound on the partition function. We provide two approaches for improving the lifted TRW upper bound. The first is a method for efficiently computing maximum spanning trees in highly symmetric graphs, which can be used to optimize the TRW edge appearance probabilities. The second is a method for tightening the relaxation of the marginal polytope using lifted cycle inequalities and novel exchangeable cluster consistency constraints.
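The maximum-spanning-tree subroutine mentioned in the abstract can be sketched with standard Kruskal's algorithm on a small ground graph. This is a generic sketch, not the paper's lifted (symmetry-exploiting) computation; the function name and toy edge weights are assumptions.

```python
def maximum_spanning_tree(n, edges):
    """Kruskal's algorithm for a maximum-weight spanning tree.

    n: number of vertices 0..n-1; edges: list of (weight, u, v).
    Returns the selected edges. In TRW, spanning trees like these
    are used when optimizing edge appearance probabilities.
    """
    parent = list(range(n))  # union-find forest

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    tree = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edge first
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: no cycle
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

edges = [(1.0, 0, 1), (2.0, 1, 2), (0.5, 0, 2)]
print(maximum_spanning_tree(3, edges))  # -> [(2.0, 1, 2), (1.0, 0, 1)]
```

On a highly symmetric graph many edges share the same weight, which is what makes the lifted version of this computation possible.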
A Fast Variational Approach for Learning Markov Random Field Language Models
Abstract
Language modelling is a fundamental building block of natural language processing. In practice, however, the size of the vocabulary limits the distributions applicable to this task: one has to either resort to local optimization methods, such as those used in neural language models, or work with heavily constrained distributions. In this work, we take a step towards overcoming these difficulties. We present a method for global-likelihood optimization of a Markov random field language model exploiting long-range contexts, in time independent of the corpus size. We take a variational approach to optimizing the likelihood and exploit underlying symmetries to greatly simplify learning. We demonstrate the efficiency of this method both for language modelling and for part-of-speech tagging.