Results 1 - 10 of 13
Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints
2013. Cited by 14 (8 self).
Abstract:
We investigate two new optimization problems — minimizing a submodular function subject to a submodular lower bound constraint (submodular cover) and maximizing a submodular function subject to a submodular upper bound constraint (submodular knapsack). We are motivated by a number of real-world applications in machine learning including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost). These problems are often posed as minimizing the difference between submodular functions [9, 25] which is in the worst case inapproximable. We show, however, that by phrasing these problems as constrained optimization, which is more natural for many applications, we achieve a number of bounded approximation guarantees. We also show that both these problems are closely related and an approximation algorithm solving one can be used to obtain an approximation guarantee for the other. We provide hardness results for both problems thus showing that our approximation factors are tight up to log-factors. Finally, we empirically demonstrate the performance and good scalability properties of our algorithms.
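For intuition, a benefit-per-cost greedy gives a simple baseline for the cover variant ("minimize cost f(X) subject to coverage g(X) >= c"). The sketch below is a hypothetical toy, not the paper's algorithms (which build more careful surrogates and carry the guarantees described above); the instance, names, and tie-breaking are mine.

```python
# Toy greedy for "minimize cost f(X) subject to coverage g(X) >= c".
# A minimal sketch of the classic benefit-per-cost greedy heuristic;
# the instance and all names here are hypothetical.

def greedy_submodular_cover(ground, f, g, c):
    """Grow X by the element with the best marginal coverage per
    marginal cost until the lower-bound constraint g(X) >= c holds."""
    X = set()
    while g(X) < c:
        best, best_ratio = None, float("-inf")
        for e in sorted(ground - X):
            gain = g(X | {e}) - g(X)        # marginal coverage
            cost = f(X | {e}) - f(X)        # marginal cost
            if gain <= 0:
                continue
            ratio = gain / max(cost, 1e-12)
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:                    # constraint cannot be met
            break
        X.add(best)
    return X

# g: coverage of a small set system (submodular); f: modular cost.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
costs = {1: 1.0, 2: 1.0, 3: 2.0}
g = lambda X: len(set().union(*(sets[i] for i in X))) if X else 0
f = lambda X: sum(costs[i] for i in X)
print(greedy_submodular_cover(set(sets), f, g, c=5))   # -> {1, 3}
```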
Fast Semidifferential-based Submodular Function Optimization
2013. Cited by 14 (3 self).
Abstract:
We present a practical and powerful new framework for both unconstrained and constrained submodular function optimization based on discrete semidifferentials (sub- and super-differentials). The resulting algorithms, which repeatedly compute and then efficiently optimize submodular semigradients, offer new and generalize many old methods for submodular optimization. Our approach, moreover, takes steps towards providing a unifying paradigm applicable to both submodular minimization and maximization, problems that historically have been treated quite distinctly. The practicality of our algorithms is important since interest in submodularity, owing to its natural and wide applicability, has recently been in ascendance within machine learning. We analyze theoretical properties of our algorithms for minimization and maximization, and show that many state-of-the-art maximization algorithms are special cases. Lastly, we complement our theoretical analyses with supporting empirical experiments.
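As a concrete illustration of one half of the semigradient idea, the sketch below runs a supergradient-style majorize-minimize loop for unconstrained minimization: replace f by a modular upper bound tight at the current set, minimize that bound exactly (elementwise, since it is modular), and repeat. The particular bound is a standard tight modular upper bound; the toy f and all names are mine, not the paper's code, and the method only guarantees a local optimum that depends on the starting set.

```python
# Supergradient-style majorize-minimize for unconstrained submodular
# minimization; a minimal sketch, not the paper's code.
import math

def marginal(f, j, S):
    return f(S | {j}) - f(S)

def mmin(f, ground, X):
    while True:
        # One tight modular upper bound at X:
        #   m(Y) = f(X) - sum_{j in X\Y} f(j | X\{j}) + sum_{j in Y\X} f(j | {})
        # Minimizing a modular bound is an elementwise decision.
        Y = frozenset(
            {j for j in X if marginal(f, j, X - {j}) <= 0} |
            {j for j in ground - X if marginal(f, j, frozenset()) < 0})
        if Y == X:           # fixed point: the bound offers no improvement
            return X
        X = Y

# Toy submodular f: concave-of-cardinality coverage minus modular reward.
w = {0: 0.3, 1: 1.2, 2: 0.5, 3: 2.0}
f = lambda S: 2.0 * math.sqrt(len(S)) - sum(w[j] for j in S)
ground = frozenset(w)
X = mmin(f, ground, X=ground)    # a local optimum; depends on the start
print(sorted(X), f(X))           # -> [1, 3] (here also the global min)
```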
Reflection methods for user-friendly submodular optimization
"... Recently, it has become evident that submodularity naturally captures widely oc-curring concepts in machine learning, signal processing and computer vision. Con-sequently, there is need for efficient optimization procedures for submodular func-tions, especially for minimization problems. While gener ..."
Abstract
-
Cited by 10 (4 self)
- Add to MetaCart
(Show Context)
Recently, it has become evident that submodularity naturally captures widely occurring concepts in machine learning, signal processing and computer vision. Consequently, there is a need for efficient optimization procedures for submodular functions, especially for minimization problems. While general submodular minimization is challenging, we propose a new method that exploits existing decomposability of submodular functions. In contrast to previous approaches, our method is neither approximate, nor impractical, nor does it need any cumbersome parameter tuning. Moreover, it is easy to implement and parallelize. A key component of our method is a formulation of the discrete submodular minimization problem as a continuous best approximation problem that is solved through a sequence of reflections, and its solution can be easily thresholded to obtain an optimal discrete solution. This method solves both the continuous and discrete formulations of the problem, and therefore has applications in learning, inference, and reconstruction. In our experiments, we illustrate the benefits of our method on two image segmentation tasks.
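The reflection primitive at the heart of such methods can be shown generically: averaged alternating reflections (Douglas-Rachford style) between two convex sets, given only projection oracles. The paper applies this machinery to (products of) base polytopes of submodular functions, whose projections need specialized routines; the toy sets below are mine, chosen so the projections are one-liners.

```python
# Averaged alternating reflections for finding a point in the
# intersection of two convex sets, given projection oracles.
# A generic sketch of the reflection primitive, not the paper's code.
import numpy as np

def reflect(proj, x):
    return 2.0 * proj(x) - x                  # R = 2P - I

def aar(proj_a, proj_b, x0, iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = 0.5 * (x + reflect(proj_b, reflect(proj_a, x)))
    return proj_a(x)                          # the "shadow" lies in set A

# Toy sets: A = unit Euclidean ball, B = halfspace {z : z[0]+z[1] >= 1}.
proj_ball = lambda z: z / max(1.0, np.linalg.norm(z))
def proj_half(z):
    v = np.array([1.0, 1.0])
    return z + (max(1.0 - v @ z, 0.0) / (v @ v)) * v

print(aar(proj_ball, proj_half, np.zeros(2)))   # -> [0.5 0.5]
```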
Learning Fourier sparse set functions
In International Conference on Artificial Intelligence and Statistics (AISTATS), 2012. Cited by 7 (0 self).
Abstract:
Can we learn a sparse graph from observing the value of a few random cuts? This and more general problems can be reduced to the challenge of learning set functions known to have sparse Fourier support contained in some collection P. We prove that if we choose O(k log^4 |P|) sets uniformly at random, then with high probability, observing any k-sparse function on those sets is sufficient to recover that function exactly. We further show that other properties, such as symmetry or submodularity, imply structure in the Fourier spectrum, which can be exploited to further reduce sample complexity. One interesting special case is that it suffices to observe O(|E| log^4 |V|) values of a cut function to recover a graph. We demonstrate the effectiveness of our results on two real-world reconstruction problems: graph sketching and obtaining fast approximate surrogates to expensive submodular objective functions.
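The structural fact exploited here is easy to verify by brute force on a tiny graph: the parity-basis (Fourier) support of a cut function sits only on the empty set and on edge pairs, i.e., O(|E|) nonzeros out of 2^|V|. A toy check (my own code, not the paper's):

```python
# Brute-force check that a cut function's Fourier support lies only on
# the empty set and on edges of the graph. Toy graph is mine.
from itertools import combinations

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3)]
cut = lambda S: sum((u in S) != (v in S) for u, v in E)

subsets = [frozenset(c) for r in range(len(V) + 1)
           for c in combinations(V, r)]

def fourier_coeff(B):
    # f_hat(B) = 2^{-n} * sum_S f(S) * (-1)^{|S & B|}
    return sum(cut(S) * (-1) ** len(S & B) for S in subsets) / len(subsets)

for B in subsets:
    c = fourier_coeff(B)
    if abs(c) > 1e-9:
        print(sorted(B), c)
# Prints only B = [] (coefficient |E|/2 = 1.5) and the three edges
# (coefficient -1/2 each): O(|E|) nonzeros out of 2^|V|.
```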
Submodular-Bregman and the Lovász-Bregman Divergences with Applications: Extended Version
"... We introduce a class of discrete divergences on sets (equivalently binary vectors) that we call the submodular-Bregman divergences. We consider two kinds of submodular Bregman divergence, defined either from tight modular upper or tight modular lower bounds of a submodular function. We show that the ..."
Abstract
-
Cited by 6 (4 self)
- Add to MetaCart
(Show Context)
We introduce a class of discrete divergences on sets (equivalently binary vectors) that we call the submodular-Bregman divergences. We consider two kinds of submodular Bregman divergence, defined either from tight modular upper or tight modular lower bounds of a submodular function. We show that the properties of these divergences are analogous to the (standard continuous) Bregman divergence. We demonstrate how the submodular Bregman divergences generalize many useful divergences, including the weighted Hamming distance, squared weighted Hamming, weighted precision, recall, conditional mutual information, and a generalized KL-divergence on sets. We also show that the generalized Bregman divergence on the Lovász extension of a submodular function, which we call the Lovász-Bregman divergence, is a continuous extension of a submodular Bregman divergence. We point out a number of applications of the submodular Bregman and the Lovász Bregman divergences, and in particular show that a proximal algorithm defined through the submodular Bregman divergence provides a framework for many mirror-descent style algorithms related to submodular function optimization. We also show that a generalization of the k-means algorithm using the Lovász Bregman divergence is natural in clustering scenarios where ordering is important. A unique property of this algorithm is that computing the mean ordering is extremely efficient, unlike other order-based distance measures. Finally, we provide a clustering framework for the submodular Bregman, and we derive fast algorithms for clustering sets of binary vectors (equivalently sets of sets).
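A minimal sketch of one member of the family: the divergence induced by a tight modular lower bound h_Y (an Edmonds-greedy / Lovász-extension subgradient at Y), giving d(X, Y) = f(X) - h_Y(X), which is nonnegative by submodularity and zero at X = Y. The toy f and all names are mine, not the paper's code.

```python
# One "lower-bound" submodular Bregman divergence, built from a modular
# lower bound on f that is tight at Y (elements of Y listed first in
# the greedy order). A minimal sketch under the assumptions above.
import math

def modular_lower_bound(f, ground, Y):
    order = sorted(Y) + sorted(ground - Y)    # Y first => tight at Y
    h, S, prev = {}, set(), f(set())
    for j in order:
        S.add(j)
        h[j] = f(S) - prev                    # greedy marginal gains
        prev = f(S)
    return h

def submodular_bregman(f, ground, X, Y):
    h = modular_lower_bound(f, ground, Y)
    hX = f(set()) + sum(h[j] for j in X)
    return f(X) - hX                          # >= 0 by submodularity

f = lambda S: math.sqrt(sum(1 + j for j in S))    # submodular toy
ground = {0, 1, 2, 3}
print(submodular_bregman(f, ground, {0, 3}, {1, 2}))   # ~1.31, > 0
print(submodular_bregman(f, ground, {1, 2}, {1, 2}))   # ~0: tight at Y
```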
Provable submodular minimization using Wolfe’s algorithm
In NIPS, 2014. Cited by 4 (0 self).
Abstract:
Owing to several applications in large scale learning and vision problems, fast submodular function minimization (SFM) has become a critical problem. Theoretically, unconstrained SFM can be performed in polynomial time [10, 11]. However, these algorithms are typically not practical. In 1976, Wolfe [21] proposed an algorithm to find the minimum Euclidean norm point in a polytope, and in 1980, Fujishige [3] showed how Wolfe’s algorithm can be used for SFM. For general submodular functions, this Fujishige-Wolfe minimum norm algorithm seems to have the best empirical performance. Despite its good practical performance, very little is known about Wolfe’s minimum norm algorithm theoretically. To our knowledge, the only result is an exponential time analysis due to Wolfe [21] himself. In this paper we give a maiden convergence analysis of Wolfe’s algorithm. We prove that in t iterations, Wolfe’s algorithm returns an O(1/t)-approximate solution to the min-norm point on any polytope. We also prove a robust version of Fujishige’s theorem which shows that an O(1/n^2)-approximate solution to the min-norm point on the base polytope implies exact submodular minimization. As a corollary, we get the first pseudo-polynomial time guarantee for the Fujishige-Wolfe minimum norm algorithm for unconstrained submodular function minimization.
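For intuition, the min-norm-plus-thresholding route can be sketched end to end on a toy instance. Note the substitution: plain Frank-Wolfe with the Edmonds greedy linear oracle stands in for Wolfe's algorithm (which adds affine-minimization "corrective" steps over an active set), so this illustrates the pipeline, not the algorithm analyzed in the paper; the toy f and all names are mine. Fujishige's theorem licenses the last line: the negative coordinates of the (exact) min-norm point form a minimizer of f.

```python
# Min-norm point on the base polytope B(f), then Fujishige threshold.
# Frank-Wolfe is used here in place of Wolfe's algorithm for brevity;
# a rough sketch, assuming ground = {0, ..., n-1} and f(empty) = 0.
import math
import numpy as np

def greedy_vertex(f, ground, x):
    """Edmonds' greedy: the vertex of B(f) minimizing <x, s>
    (order elements by increasing x)."""
    order = sorted(ground, key=lambda j: x[j])
    s, S, prev = np.zeros(len(ground)), set(), 0.0
    for j in order:
        S.add(j)
        s[j] = f(S) - prev
        prev = f(S)
    return s

def min_norm_point(f, ground, iters=10000):
    x = greedy_vertex(f, ground, np.zeros(len(ground)))
    for _ in range(iters):
        s = greedy_vertex(f, ground, x)       # linear oracle
        d = x - s
        gamma = min(1.0, (x @ d) / max(d @ d, 1e-12))  # exact line search
        x -= gamma * d
    return x

w = {0: 0.3, 1: 1.2, 2: 0.5, 3: 2.0}
f = lambda S: 2.0 * math.sqrt(len(S)) - sum(w[j] for j in S)
ground = list(w)
x = min_norm_point(f, ground)
print({j for j in ground if x[j] < 0})   # Fujishige threshold -> {1, 3}
```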
On approximate non-submodular minimization via tree-structured supermodularity
In 18th International Conference on Artificial Intelligence and Statistics (AISTATS), 2015. Cited by 1 (1 self).
Abstract:
We address the problem of minimizing non-submodular functions where the supermodularity is restricted to tree-structured pairwise terms. We are motivated by several real-world applications, which require submodularity along with structured supermodularity, and this forms a rich class of expressive models, where the non-submodularity is restricted to a tree. While this problem is NP-hard (as we show), we develop several practical algorithms to find approximate and near-optimal solutions for this problem, some of which provide lower bounds and others of which provide upper bounds, thereby allowing us to compute a tightness gap for any problem instance. We compare our algorithms on synthetic data, and also demonstrate the advantage of the formulation on the real-world application of image segmentation, where we incorporate structured supermodularity into higher-order submodular energy minimization.
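One generic way to attack such energies, sketched below on a hypothetical toy, is a supermodular-submodular (majorize-minimize) loop: each tree term w[u in X][v in X] is replaced by a modular upper bound (w[u in X] or w[v in X], whichever is tight at the current set), and the resulting submodular surrogate is minimized, here by brute force in place of a real SFM solver. This illustrates bounding strategies of this flavor, not necessarily one of the paper's algorithms.

```python
# Majorize-minimize for a submodular function plus supermodular
# tree-structured pairwise rewards; toy instance and names are mine.
from itertools import combinations
import math

V = [0, 1, 2, 3]
tree = [(0, 1, 1.5), (1, 2, 0.8), (1, 3, 0.6)]      # (u, v, weight > 0)
r = {0: 0.8, 1: 2.2, 2: 0.7, 3: 1.8}
g = lambda X: 2.0 * math.sqrt(len(X)) - sum(r[j] for j in X)  # submodular
f = lambda X: g(X) + sum(w for u, v, w in tree if u in X and v in X)

subsets = [frozenset(c) for k in range(len(V) + 1)
           for c in combinations(V, k)]

def mm(X):
    while True:
        bonus = {j: 0.0 for j in V}
        for u, v, w in tree:          # modular upper bound, tight at X
            bonus[u if (u in X) <= (v in X) else v] += w
        Y = min(subsets, key=lambda S: g(S) + sum(bonus[j] for j in S))
        if f(Y) >= f(X):
            return X                  # f decreases monotonically; stop
        X = Y

starts = [frozenset()] + [frozenset({j}) for j in V]   # cheap restarts
best = min((mm(X0) for X0 in starts), key=f)
print(sorted(best), f(best))          # -> [1, 3], the global min here
```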
Submodular Minimization in the Context of Modern LP and MILP Methods and Solvers
"... Abstract. We consider the application of mixed-integer linear programming (MILP) solvers to the minimization of submodular functions. We evaluate common large-scale linear-programming (LP) techniques (e.g., column generation, row generation, dual stabilization) for solving a LP reformulation of the ..."
Abstract
- Add to MetaCart
(Show Context)
We consider the application of mixed-integer linear programming (MILP) solvers to the minimization of submodular functions. We evaluate common large-scale linear-programming (LP) techniques (e.g., column generation, row generation, dual stabilization) for solving an LP reformulation of the submodular minimization (SM) problem. We present heuristics based on the LP framework and a MILP solver, and we evaluate the performance of our methods on a test bed of min-cut and matroid-intersection problems formulated as SM problems.
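Row generation in particular has a compact illustration on the LP view of SFM: the Lovász extension is the maximum of linear functions indexed by greedy vertices of the base polytope, so the rows t >= <s, x> can be generated lazily by a separation oracle. The sketch below is my own toy using scipy's linprog; the paper's formulations and heuristics are more involved.

```python
# Lazy row generation for min_{x in [0,1]^n} f_L(x), where the Lovász
# extension f_L is the max of <s, x> over greedy vertices s of B(f).
# A rough sketch under the assumptions above; toy f and names are mine.
import math
import numpy as np
from scipy.optimize import linprog

r = {0: 0.3, 1: 1.2, 2: 0.5, 3: 2.0}
f = lambda S: 2.0 * math.sqrt(len(S)) - sum(r[j] for j in S)
n = len(r)

def greedy_vertex(x):
    """Vertex of B(f) maximizing <s, x>: sort coordinates descending."""
    order = sorted(range(n), key=lambda j: -x[j])
    s, S, prev = np.zeros(n), set(), f(set())
    for j in order:
        S.add(j)
        s[j] = f(S) - prev
        prev = f(S)
    return s

rows = [greedy_vertex(np.zeros(n))]
best_set, tol = frozenset(), 1e-9
for _ in range(50):
    # Variables (x_1..x_n, t): minimize t s.t. <s,x> - t <= 0, 0<=x<=1.
    A = np.array([np.append(s, -1.0) for s in rows])
    res = linprog(c=np.append(np.zeros(n), 1.0), A_ub=A,
                  b_ub=np.zeros(len(rows)),
                  bounds=[(0, 1)] * n + [(None, None)])
    x, t = res.x[:n], res.x[n]
    s = greedy_vertex(x)                      # separation oracle
    for theta in sorted(set(x)):              # round: try all level sets
        S = frozenset(j for j in range(n) if x[j] >= theta)
        if f(S) < f(best_set):
            best_set = S
    if s @ x <= t + tol:                      # no violated row: optimal
        break
    rows.append(s)
print(sorted(best_set), f(best_set))          # -> [1, 3]
```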
Machine learning and convex optimization with Submodular Functions
Workshop on Combinatorial Optimization, Cargèse, 2013.
An Algorithmic Theory of Dependent Regularizers Part 1: Submodular Structure
2013.
Abstract:
We present an exploration of the rich theoretical connections between several classes of regularized models, network flows, and recent results in submodular function theory. This work unifies key aspects of these problems under a common theory, leading to novel methods for working with several important models of interest in statistics, machine learning and computer vision. In Part 1, we review the concepts of network flows and submodular function optimization theory foundational to our results. We then examine the connections between network flows and the minimum-norm algorithm from submodular optimization, extending and improving several current results. This leads to a concise representation of the structure of a large class of pairwise regularized models important in machine learning, statistics and computer vision. In Part 2, we describe the full regularization path of a class of penalized regression problems with dependent variables that includes the graph-guided LASSO and total variation constrained models. This description also motivates a practical algorithm. This allows us to efficiently find the regularization path of the discretized version of TV-penalized models. Ultimately, our new algorithms scale up to high-dimensional problems with millions of variables.
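As a small illustration of the regularization-path object discussed in Part 2, the sketch below sweeps the penalty weight of a TV-denoising model and watches segments fuse. It uses skimage's off-the-shelf Chambolle solver as a stand-in and re-solves once per weight, which is exactly the naive approach that path algorithms of this kind are meant to beat; the signal and parameters are mine.

```python
# Naive, discretized "regularization path" of a TV-penalized model:
# re-solve the denoising problem for a grid of penalty weights and
# count the approximately-constant segments in each solution.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
signal = np.repeat([0.0, 1.0, 0.3, 0.8], 50)     # piecewise constant
noisy = signal + 0.1 * rng.standard_normal(signal.size)

for weight in [0.01, 0.1, 0.5, 2.0]:
    x = denoise_tv_chambolle(noisy, weight=weight)
    segments = 1 + int(np.sum(np.abs(np.diff(x)) > 1e-3))
    print(f"weight={weight:4}: ~{segments} segments")
# As the weight grows, neighboring segments fuse; tracing all fusion
# events as the weight varies is exactly the regularization path.
```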