Results 1 - 3 of 3
A Primal-Dual Algorithm for Higher-Order Multilabel Markov Random Fields
Abstract

Cited by 3 (0 self)
Graph cuts methods such as α-expansion [4] and fusion moves [22] have been successful at solving many optimization problems in computer vision. Higher-order Markov Random Fields (MRFs), which are important for numerous applications, have proven to be very difficult, especially multi-label MRFs (i.e. more than 2 labels). In this paper we propose a new primal-dual energy minimization method for arbitrary higher-order multi-label MRFs. Primal-dual methods provide guaranteed approximation bounds, and can exploit information in the dual variables to improve their efficiency. Our algorithm generalizes the PD3 [19] technique for first-order MRFs, and relies on a variant of max-flow that can exactly optimize certain higher-order binary MRFs [14]. We provide approximation bounds similar to PD3 [19], and the method is fast in practice. It can optimize non-submodular MRFs, and can additionally incorporate problem-specific knowledge in the form of fusion proposals. We compare experimentally against the existing approaches that can efficiently handle these difficult energy functions [6, 10, 11]. For higher-order denoising and stereo MRFs, we produce lower energy while running significantly faster.

1. Higher-order MRFs

There is widespread interest in higher-order MRFs for problems like denoising [23] and stereo [30], yet the resulting energy functions have proven to be very difficult to minimize. The optimization problem for a higher-order MRF is defined over a hypergraph with vertices V and cliques C, plus a label set L. We minimize the cost of the labeling f : L^V → ℝ defined by f(x) = Σ_{C∈C} f_C(x_C)
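The energy in the abstract is a sum of clique potentials evaluated over a hypergraph. The following is a minimal sketch of evaluating such an energy; the function and variable names are illustrative assumptions, not the paper's implementation:

```python
# Sketch: evaluating a higher-order MRF energy, i.e. a sum over cliques C
# of a clique potential f_C applied to the labels of the clique's vertices.
# All names (mrf_energy, cliques, potentials) are illustrative.

def mrf_energy(labeling, cliques, potentials):
    """labeling: dict vertex -> label.
    cliques: list of vertex tuples (hyperedges).
    potentials: one function per clique, mapping a tuple of labels to a cost."""
    return sum(phi(tuple(labeling[v] for v in clique))
               for clique, phi in zip(cliques, potentials))

# A triple clique whose cost counts label disagreements; this is genuinely
# higher-order because the cost depends on all three labels jointly.
cliques = [(0, 1, 2)]
potentials = [lambda labels: len(set(labels)) - 1]
print(mrf_energy({0: "a", 1: "a", 2: "b"}, cliques, potentials))  # 1
```

Minimizing this sum over all |L|^|V| labelings is the hard part; the abstract's contribution is a primal-dual method for that minimization, not the evaluation shown here.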
Parsimonious Labeling
Abstract
We propose a new family of discrete energy minimization problems, which we call parsimonious labeling. Our energy function consists of unary potentials and high-order clique potentials. While the unary potentials are arbitrary, the clique potentials are proportional to the diversity of the set of unique labels assigned to the clique. Intuitively, our energy function encourages the labeling to be parsimonious, that is, to use as few labels as possible. This in turn allows us to capture useful cues for important computer vision applications such as stereo correspondence and image denoising. Furthermore, we propose an efficient graph-cuts based algorithm for the parsimonious labeling problem that provides strong theoretical guarantees on the quality of the solution. Our algorithm consists of three steps. First, we approximate a given diversity using a mixture of a novel hierarchical P^n Potts model. Second, we use a divide-and-conquer approach for each mixture component, where each subproblem is solved using an efficient α-expansion algorithm. This provides us with a small number of putative labelings, one for each mixture component. Third, we choose the best putative labeling in terms of the energy value. Using both synthetic and standard real datasets, we show that our algorithm significantly outperforms other graph-cuts based approaches. 1.
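To make the energy described above concrete, here is a minimal sketch assuming the simplest possible diversity, the count of distinct labels in a clique; the names and the weight parameter w are illustrative assumptions, not the paper's notation:

```python
# Sketch of a parsimonious labeling energy: arbitrary unary costs plus a
# clique cost proportional to the number of unique labels in the clique
# (the simplest diversity). All names here are illustrative.

def parsimonious_energy(labeling, unary, cliques, w=1.0):
    """labeling: dict vertex -> label; unary: dict vertex -> {label: cost};
    cliques: list of vertex tuples; w: assumed diversity weight."""
    cost = sum(unary[v][labeling[v]] for v in labeling)
    cost += sum(w * len({labeling[v] for v in c}) for c in cliques)
    return cost

unary = {0: {"a": 0.0, "b": 1.0}, 1: {"a": 0.5, "b": 0.0}}
uniform = {0: "a", 1: "a"}   # one label in the clique: parsimonious
mixed = {0: "a", 1: "b"}     # two labels: larger clique cost
print(parsimonious_energy(uniform, unary, [(0, 1)], w=2.0))  # 2.5
print(parsimonious_energy(mixed, unary, [(0, 1)], w=2.0))    # 4.0
```

Even though the mixed labeling has lower unary cost, the diversity term makes the uniform labeling cheaper overall, illustrating how the energy rewards using few labels.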
Submodular Point Processes with Applications to Machine Learning
Abstract
Abstract We introduce a class of discrete point processes that we call the Submodular Point Processes (SPPs). These processes are characterized via a submodular (or supermodular) function, and naturally model notions of information, coverage and diversity, as well as cooperation. Unlike log-submodular and log-supermodular distributions (log-SPPs) such as determinantal point processes (DPPs), SPPs are themselves submodular (or supermodular). In this paper, we analyze the computational complexity of probabilistic inference in SPPs. We show that computing the partition function for SPPs (and log-SPPs) requires exponential complexity in the worst case, and we also provide algorithms which approximate SPPs up to polynomial factors. Moreover, for several subclasses of interesting submodular functions that occur in applications, we show how we can provide efficient closed-form expressions for the partition functions, and thereby marginals and conditional distributions. We also show how SPPs are closed under mixtures, thus enabling maximum-likelihood based strategies for learning mixtures of submodular functions. Finally, we argue how SPPs complement existing log-SPP distributions, and are a natural model for several applications.
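To make the exponential-cost partition function concrete, here is a minimal brute-force sketch; the coverage function and all names are assumptions for illustration, not the paper's constructions:

```python
from itertools import chain, combinations

def coverage(S, sets):
    """A classic submodular function: the number of elements covered by
    the union of the chosen sets (diminishing returns as S grows)."""
    return len(set().union(*(sets[i] for i in S)))

def partition_function(n, f):
    """Z = sum of f(S) over all 2^n subsets of the ground set {0, ..., n-1}.
    Brute force is exponential in n, matching the worst case the abstract
    describes; the paper's point is that special structure can avoid this."""
    ground = range(n)
    all_subsets = chain.from_iterable(
        combinations(ground, k) for k in range(n + 1))
    return sum(f(S) for S in all_subsets)

sets = {0: {1, 2}, 1: {2, 3}}
# Subsets of {0, 1}: {} -> 0, {0} -> 2, {1} -> 2, {0, 1} -> 3
print(partition_function(2, lambda S: coverage(S, sets)))  # 7
```

This differs from a DPP-style log-SPP, where the (sub/super)modular function sits inside an exponent; here f(S) itself is the unnormalized mass, which is what makes SPPs submodular as distributions.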