Results 1–8 of 8
Generalized roof duality and bisubmodular functions
, 2010
"... Consider a convex relaxation ˆ f of a pseudoboolean function f. We say that the relaxation is totally halfintegral if ˆ f(x) is a polyhedral function with halfintegral extreme points x, and this property is preserved after adding an arbitrary combination of constraints of the form xi = xj, xi = 1 ..."
Abstract

Cited by 12 (1 self)
Consider a convex relaxation f̂ of a pseudo-boolean function f. We say that the relaxation is totally half-integral if f̂(x) is a polyhedral function with half-integral extreme points x, and this property is preserved after adding an arbitrary combination of constraints of the form x_i = x_j, x_i = 1 − x_j, and x_i = γ where γ ∈ {0, 1, 1/2} is a constant. A well-known example is the roof duality relaxation for quadratic pseudo-boolean functions f. We argue that total half-integrality is a natural requirement for generalizations of roof duality to arbitrary pseudo-boolean functions. Our contributions are as follows. First, we provide a complete characterization of totally half-integral relaxations f̂ by establishing a one-to-one correspondence with bisubmodular functions. Second, we give a new characterization of bisubmodular functions. Finally, we show some relationships between general totally half-integral relaxations and relaxations based on roof duality.
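The bisubmodular inequality at the heart of this correspondence can be checked mechanically on small domains. A minimal brute-force sketch in Python: the meet/join below are the standard componentwise operations on {−1, 0, +1}^n, and the function names and example functions are illustrative, not from the paper.

```python
from itertools import product

def meet(x, y):
    # componentwise: keep a sign only where both vectors agree on it
    return tuple(a if a == b else 0 for a, b in zip(x, y))

def join(x, y):
    # componentwise: take the nonzero sign, unless the two vectors conflict
    return tuple(0 if {a, b} >= {1, -1} else (a if a != 0 else b)
                 for a, b in zip(x, y))

def is_bisubmodular(f, n):
    """Brute-force check of f(x) + f(y) >= f(meet(x,y)) + f(join(x,y))
    over all pairs x, y in {-1, 0, +1}^n."""
    pts = list(product((-1, 0, 1), repeat=n))
    return all(f(x) + f(y) >= f(meet(x, y)) + f(join(x, y))
               for x in pts for y in pts)
```

For example, the "support size" function x ↦ Σ|x_i| passes the check, while its negation fails it on any pair of conflicting points.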
Reflection methods for user-friendly submodular optimization
"... Recently, it has become evident that submodularity naturally captures widely occurring concepts in machine learning, signal processing and computer vision. Consequently, there is need for efficient optimization procedures for submodular functions, especially for minimization problems. While gener ..."
Abstract

Cited by 10 (4 self)
Recently, it has become evident that submodularity naturally captures widely occurring concepts in machine learning, signal processing and computer vision. Consequently, there is a need for efficient optimization procedures for submodular functions, especially for minimization problems. While general submodular minimization is challenging, we propose a new method that exploits existing decomposability of submodular functions. In contrast to previous approaches, our method is neither approximate, nor impractical, nor does it need any cumbersome parameter tuning. Moreover, it is easy to implement and parallelize. A key component of our method is a formulation of the discrete submodular minimization problem as a continuous best approximation problem that is solved through a sequence of reflections, and its solution can be easily thresholded to obtain an optimal discrete solution. This method solves both the continuous and discrete formulations of the problem, and therefore has applications in learning, inference, and reconstruction. In our experiments, we illustrate the benefits of our method on two image segmentation tasks.
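The "continuous best approximation" viewpoint rests on extending a set function to the cube [0,1]^n. A short sketch of the standard Lovász extension, which agrees with F on indicator vectors; this is the usual continuous bridge, not the paper's reflection algorithm itself (that additionally needs projections onto base polytopes):

```python
def lovasz_extension(F, x):
    """Evaluate the Lovász (Choquet) extension of a set function F
    (with F(empty set) == 0) at a point x in [0, 1]^n."""
    n = len(x)
    order = sorted(range(n), key=lambda i: -x[i])  # coordinates, descending
    val, S = 0.0, set()
    prev = F(frozenset())
    for i in order:
        S.add(i)
        cur = F(frozenset(S))
        val += x[i] * (cur - prev)  # marginal gain weighted by x_i
        prev = cur
    return val
```

On the graph-cut function of a small path graph, the extension reproduces the cut value at each 0/1 labeling and interpolates linearly in between, which is why thresholding a continuous minimizer recovers a discrete one.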
A Primal-Dual Algorithm for Higher-Order Multi-label Markov Random Fields
"... Graph cuts method such as αexpansion [4] and fusion moves [22] have been successful at solving many optimization problems in computer vision. Higherorder Markov Random Fields (MRF’s), which are important for numerous applications, have proven to be very difficult, especially for multilabel MRF’s ..."
Abstract

Cited by 3 (0 self)
Graph cuts methods such as α-expansion [4] and fusion moves [22] have been successful at solving many optimization problems in computer vision. Higher-order Markov Random Fields (MRFs), which are important for numerous applications, have proven to be very difficult, especially multi-label MRFs (i.e., more than 2 labels). In this paper we propose a new primal-dual energy minimization method for arbitrary higher-order multi-label MRFs. Primal-dual methods provide guaranteed approximation bounds, and can exploit information in the dual variables to improve their efficiency. Our algorithm generalizes the PD3 [19] technique for first-order MRFs, and relies on a variant of max flow that can exactly optimize certain higher-order binary MRFs [14]. We provide approximation bounds similar to PD3 [19], and the method is fast in practice. It can optimize non-submodular MRFs, and additionally can incorporate problem-specific knowledge in the form of fusion proposals. We compare experimentally against the existing approaches that can efficiently handle these difficult energy functions [6, 10, 11]. For higher-order denoising and stereo MRFs, we produce lower energy while running significantly faster.

1. Higher-order MRFs

There is widespread interest in higher-order MRFs for problems like denoising [23] and stereo [30], yet the resulting energy functions have proven to be very difficult to minimize. The optimization problem for a higher-order MRF is defined over a hypergraph with vertices V and cliques C plus a label set L. We minimize the cost of a labeling, given by an energy f : L^V → ℝ defined by f(x) = …
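The energy being minimized has the usual unary-plus-clique form: a cost per variable assignment plus a potential per hyperedge evaluated on the labels it touches. A hypothetical evaluator sketch (the data layout and names are illustrative, not the paper's):

```python
def mrf_energy(x, unary, cliques):
    """Energy of labeling x: sum of per-variable unary costs plus
    higher-order clique potentials, each a function of the labels
    restricted to its clique."""
    e = sum(unary[i][x[i]] for i in range(len(x)))
    for clique, phi in cliques:
        e += phi(tuple(x[i] for i in clique))
    return e
```

With two binary variables, Potts-style clique costs make disagreeing labelings strictly more expensive than agreeing ones, which is the structure α-expansion and primal-dual methods exploit.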
Structured learning of sum-of-submodular higher-order energy functions
"... Submodular functions can be exactly minimized in polynomial time, and the special case that graph cuts solve with max flow [18] has had significant impact in computer vision [5, 20, 27]. In this paper we address the important class of sumofsubmodular (SoS) functions [2, 17], which can be efficient ..."
Abstract

Cited by 3 (1 self)
Submodular functions can be exactly minimized in polynomial time, and the special case that graph cuts solve with max flow [18] has had significant impact in computer vision [5, 20, 27]. In this paper we address the important class of sum-of-submodular (SoS) functions [2, 17], which can be efficiently minimized via a variant of max flow called submodular flow [6]. SoS functions can naturally express higher order priors involving, e.g., local image patches; however, it is difficult to fully exploit their expressive power because they have so many parameters. Rather than trying to formulate existing higher order priors as an SoS function, we take a discriminative learning approach, effectively searching the space of SoS functions for a higher order prior that performs well on our training set. We adopt a structural SVM approach [14, 33] and formulate the training problem in terms of quadratic programming; as a result we can efficiently search the space of SoS priors via an extended cutting-plane algorithm. We also show how the state-of-the-art max flow method for vision problems [10] can be modified to efficiently solve the submodular flow problem. Experimental comparisons are made against the OpenCV implementation of the GrabCut interactive segmentation technique [27], which uses hand-tuned parameters instead of machine learning. On a standard dataset [11] our method learns higher order priors with hundreds of parameter values, and produces significantly better segmentations. While our focus is on binary labeling problems, we show that our techniques can be naturally generalized to handle more than two labels.
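An SoS function is simply a sum of per-clique submodular terms. A toy sketch that evaluates such a function from lookup tables and minimizes it by exhaustive search; the paper's point is that submodular flow replaces this exponential search with a polynomial-time algorithm, and all names here are illustrative:

```python
from itertools import product

def sos_value(x, terms):
    """Value of a sum-of-submodular function: each term is a pair
    (clique, table), with table mapping the clique's binary
    sub-labeling to a cost."""
    return sum(table[tuple(x[i] for i in clique)] for clique, table in terms)

def sos_minimize_bruteforce(n, terms):
    """Exhaustive minimizer over {0,1}^n, for tiny instances only."""
    return min(product((0, 1), repeat=n), key=lambda x: sos_value(x, terms))
```

The per-clique tables are exactly the "many parameters" the abstract refers to; the structured SVM step learns their entries from labeled data.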
Machine learning and convex optimization with Submodular Functions
Workshop on Combinatorial Optimization, Cargèse, 2013
Parsimonious Labeling
"... We propose a new family of discrete energy minimization problems, which we call parsimonious labeling. Our energy function consists of unary potentials and highorder clique potentials. While the unary potentials are arbitrary, the clique potentials are proportional to the diversity of the set of u ..."
Abstract
We propose a new family of discrete energy minimization problems, which we call parsimonious labeling. Our energy function consists of unary potentials and high-order clique potentials. While the unary potentials are arbitrary, the clique potentials are proportional to the diversity of the set of unique labels assigned to the clique. Intuitively, our energy function encourages the labeling to be parsimonious, that is, use as few labels as possible. This in turn allows us to capture useful cues for important computer vision applications such as stereo correspondence and image denoising. Furthermore, we propose an efficient graph-cuts-based algorithm for the parsimonious labeling problem that provides strong theoretical guarantees on the quality of the solution. Our algorithm consists of three steps. First, we approximate a given diversity using a mixture of a novel hierarchical P^n Potts model. Second, we use a divide-and-conquer approach for each mixture component, where each subproblem is solved using an efficient α-expansion algorithm. This provides us with a small number of putative labelings, one for each mixture component. Third, we choose the best putative labeling in terms of the energy value. Using both synthetic and standard real datasets, we show that our algorithm significantly outperforms other graph-cuts-based approaches.
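As a concrete instance of the energy described, take the diversity of a clique to be the number of distinct labels it uses; the paper allows general diversities, so this is one simple special case, and the names below are illustrative:

```python
def parsimonious_energy(x, unary, cliques, lam):
    """Unary costs plus, for each clique, lam times its diversity,
    here taken to be the count of distinct labels in the clique."""
    e = sum(unary[i][x[i]] for i in range(len(x)))
    for clique in cliques:
        e += lam * len({x[i] for i in clique})  # parsimony pressure
    return e
```

With conflicting unary preferences, the clique term adds a fixed cost per extra label used, so the minimizer trades unary fit against label parsimony.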
Learning Weighted Lower Linear Envelope Potentials in Binary Markov Random Fields
"... Abstract—Markov random fields containing higherorder terms are becoming increasingly popular due to their ability to capture complicated relationships as soft constraints involving many output random variables. In computer vision an important class of constraints encode a preference for label consi ..."
Abstract
Abstract—Markov random fields containing higher-order terms are becoming increasingly popular due to their ability to capture complicated relationships as soft constraints involving many output random variables. In computer vision an important class of constraints encode a preference for label consistency over large sets of pixels and can be modeled using higher-order terms known as lower linear envelope potentials. In this paper we develop an algorithm for learning the parameters of binary Markov random fields with weighted lower linear envelope potentials. We first show how to perform exact energy minimization on these models in time polynomial in the number of variables and number of linear envelope functions. Then, with tractable inference in hand, we show how the parameters of the lower linear envelope potentials can be estimated from labeled training data within a max-margin learning framework. We explore three variants of the lower linear envelope parameterization and demonstrate results on both synthetic and real-world problems. Index Terms—higher-order MRFs, lower linear envelope potentials, max-margin learning
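A lower linear envelope potential is a minimum over a set of linear functions of a clique statistic. A small sketch, assuming the statistic is the fraction of clique variables set to 1; the coefficients and names are illustrative, not the paper's parameterization:

```python
def lower_linear_envelope(x, clique, lines):
    """Potential = min over linear pieces a*w + b, where w is the
    fraction of variables in the clique labeled 1. With pieces like
    w and 1 - w, the envelope is concave: cheapest when the clique
    is unanimous, most expensive when it is evenly split."""
    w = sum(x[i] for i in clique) / len(clique)
    return min(a * w + b for a, b in lines)
```

This is the soft label-consistency prior the abstract describes: near-unanimous cliques pay little, mixed cliques pay up to the envelope's peak, and the slopes and intercepts are exactly the parameters the max-margin framework learns.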