Results 1 - 3 of 3
A Primal-Dual Algorithm for Higher-Order Multilabel Markov Random Fields
"... Graph cuts method such as α-expansion [4] and fu-sion moves [22] have been successful at solving many optimization problems in computer vision. Higher-order Markov Random Fields (MRF’s), which are important for numerous applications, have proven to be very difficult, es-pecially for multilabel MRF’s ..."
Abstract
Cited by 3 (0 self)
Graph cut methods such as α-expansion [4] and fusion moves [22] have been successful at solving many optimization problems in computer vision. Higher-order Markov Random Fields (MRFs), which are important for numerous applications, have proven to be very difficult, especially multilabel MRFs (i.e., more than 2 labels). In this paper we propose a new primal-dual energy minimization method for arbitrary higher-order multilabel MRFs. Primal-dual methods provide guaranteed approximation bounds, and can exploit information in the dual variables to improve their efficiency. Our algorithm generalizes the PD3 [19] technique for first-order MRFs, and relies on a variant of max-flow that can exactly optimize certain higher-order binary MRFs [14]. We provide approximation bounds similar to PD3 [19], and the method is fast in practice. It can optimize non-submodular MRFs, and additionally can incorporate problem-specific knowledge in the form of fusion proposals. We compare experimentally against the existing approaches that can efficiently handle these difficult energy functions [6, 10, 11]. For higher-order denoising and stereo MRFs, we produce lower energy while running significantly faster.
1. Higher-order MRFs
There is widespread interest in higher-order MRFs for problems like denoising [23] and stereo [30], yet the resulting energy functions have proven to be very difficult to minimize. The optimization problem for a higher-order MRF is defined over a hypergraph with vertices V and cliques C, plus a label set L. We minimize the cost of a labeling, given by the energy f : L^|V| → ℝ defined by f(x) = Σ_{C∈C} f_C(x_C), where x_C is the restriction of the labeling x to the clique C.
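As a minimal illustration of the energy just defined (not of the paper's primal-dual algorithm), the following Python sketch evaluates f(x) = Σ_C f_C(x_C) for a given labeling; the vertex names, potentials, and labeling are hypothetical toy values.

```python
# Minimal sketch: evaluating the higher-order MRF energy f(x) = sum_C f_C(x_C).
# The vertex names, potentials, and labeling below are hypothetical examples.

def mrf_energy(labeling, cliques):
    """labeling: dict vertex -> label; cliques: list of (vertices, potential) pairs."""
    total = 0.0
    for vertices, potential in cliques:
        # x_C: the labeling restricted to this clique, fed to its potential f_C.
        total += potential(tuple(labeling[v] for v in vertices))
    return total

# Toy instance: a unary clique on 'a' and a pairwise Potts clique on ('a', 'b').
unary = lambda xs: {0: 0.5, 1: 0.0}[xs[0]]
potts = lambda xs: 0.0 if len(set(xs)) == 1 else 1.0
cliques = [(("a",), unary), (("a", "b"), potts)]

print(mrf_energy({"a": 0, "b": 1}, cliques))  # 0.5 + 1.0 = 1.5
```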
Parsimonious Labeling
"... We propose a new family of discrete energy minimiza-tion problems, which we call parsimonious labeling. Our energy function consists of unary potentials and high-order clique potentials. While the unary potentials are arbitrary, the clique potentials are proportional to the diversity of the set of u ..."
Abstract
We propose a new family of discrete energy minimization problems, which we call parsimonious labeling. Our energy function consists of unary potentials and high-order clique potentials. While the unary potentials are arbitrary, the clique potentials are proportional to the diversity of the set of unique labels assigned to the clique. Intuitively, our energy function encourages the labeling to be parsimonious, that is, to use as few labels as possible. This in turn allows us to capture useful cues for important computer vision applications such as stereo correspondence and image denoising. Furthermore, we propose an efficient graph-cuts based algorithm for the parsimonious labeling problem that provides strong theoretical guarantees on the quality of the solution. Our algorithm consists of three steps. First, we approximate a given diversity using a mixture of a novel hierarchical P^n Potts model. Second, we use a divide-and-conquer approach for each mixture component, where each subproblem is solved using an efficient α-expansion algorithm. This provides us with a small number of putative labelings, one for each mixture component. Third, we choose the best putative labeling in terms of the energy value. Using both synthetic and standard real datasets, we show that our algorithm significantly outperforms other graph-cuts based approaches.
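To make the energy concrete, here is a toy Python sketch of a parsimonious labeling energy under one simple choice of diversity, a label-count diversity w·(number of unique labels in the clique − 1); the paper allows more general diversities, and all names and values below are hypothetical.

```python
# Toy sketch of a parsimonious labeling energy. The diversity used here is a
# simple label-count diversity w * (#unique labels in clique - 1); the paper
# allows more general diversities. All names and values are hypothetical.

def parsimonious_energy(labeling, unaries, cliques, w=1.0):
    """labeling: dict vertex -> label; unaries: dict (vertex, label) -> cost;
    cliques: list of vertex tuples."""
    energy = sum(unaries[(v, label)] for v, label in labeling.items())
    for clique in cliques:
        unique_labels = {labeling[v] for v in clique}
        energy += w * (len(unique_labels) - 1)  # zero if the clique is uniform
    return energy

unaries = {("a", 0): 0.2, ("a", 1): 0.1,
           ("b", 0): 0.0, ("b", 1): 0.3,
           ("c", 0): 0.1, ("c", 1): 0.2}
cliques = [("a", "b", "c")]

print(parsimonious_energy({"a": 0, "b": 0, "c": 0}, unaries, cliques))  # 0.3
print(parsimonious_energy({"a": 1, "b": 0, "c": 0}, unaries, cliques))  # 1.2
```

Note how the second labeling pays the diversity penalty for using two labels in the clique, so the uniform labeling wins even though its unary costs are slightly higher.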
Submodular Point Processes with Applications to Machine Learning
"... Abstract We introduce a class of discrete point processes that we call the Submodular Point Processes (SPPs). These processes are characterized via a submodular (or supermodular) function, and naturally model notions of information, coverage and diversity, as well as cooperation. Unlike Log-submodu ..."
Abstract
We introduce a class of discrete point processes that we call Submodular Point Processes (SPPs). These processes are characterized via a submodular (or supermodular) function, and naturally model notions of information, coverage, and diversity, as well as cooperation. Unlike log-submodular and log-supermodular distributions (Log-SPPs) such as determinantal point processes (DPPs), SPPs are themselves submodular (or supermodular). In this paper, we analyze the computational complexity of probabilistic inference in SPPs. We show that computing the partition function for SPPs (and Log-SPPs) requires exponential complexity in the worst case, and we also provide algorithms that approximate SPPs up to polynomial factors. Moreover, for several subclasses of interesting submodular functions that occur in applications, we show how we can provide efficient closed-form expressions for the partition functions, and thereby marginals and conditional distributions. We also show that SPPs are closed under mixtures, thus enabling maximum-likelihood based strategies for learning mixtures of submodular functions. Finally, we argue that SPPs complement existing Log-SPP distributions and are a natural model for several applications.
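As a concrete illustration, the following Python sketch defines an SPP as a distribution P(S) = f(S)/Z over subsets of a ground set, for a submodular coverage function f, with the partition function Z computed by brute-force enumeration; the exponential enumeration mirrors the worst case the abstract mentions, and the function names and coverage data are hypothetical.

```python
# Toy sketch of a Submodular Point Process: P(S) = f(S) / Z for a submodular
# f over a ground set, with Z summing f over all 2^n subsets. Brute-force
# enumeration mirrors the exponential worst case the abstract mentions.
# All names and the coverage function below are hypothetical.

from itertools import chain, combinations

def powerset(ground):
    items = list(ground)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def partition_function(ground, f):
    return sum(f(frozenset(s)) for s in powerset(ground))

def spp_probability(subset, ground, f):
    return f(frozenset(subset)) / partition_function(ground, f)

# Coverage functions are submodular: f(S) = size of the union covered by S.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {3}}
f = lambda S: float(len(set().union(*(coverage[v] for v in S)))) if S else 0.0

print(spp_probability({"a", "b"}, {"a", "b", "c"}, f))  # 3/16 = 0.1875
```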