Results 1 - 6 of 6
Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints
, 2013
Abstract

Cited by 14 (8 self)
We investigate two new optimization problems — minimizing a submodular function subject to a submodular lower bound constraint (submodular cover) and maximizing a submodular function subject to a submodular upper bound constraint (submodular knapsack). We are motivated by a number of real-world applications in machine learning, including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost). These problems are often posed as minimizing the difference between submodular functions [9, 25], which is in the worst case inapproximable. We show, however, that by phrasing these problems as constrained optimization, which is more natural for many applications, we achieve a number of bounded approximation guarantees. We also show that both these problems are closely related, and an approximation algorithm solving one can be used to obtain an approximation guarantee for the other. We provide hardness results for both problems, thus showing that our approximation factors are tight up to log factors. Finally, we empirically demonstrate the performance and good scalability properties of our algorithms.
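As a loose illustration of the submodular-knapsack flavour described in this abstract (not the paper's algorithm), the sketch below greedily maximises a coverage function subject to a budget, picking the element with the best marginal-gain-per-cost ratio. For simplicity the cost here is modular (additive), whereas the paper allows the budget constraint itself to be submodular; all names (`coverage`, `greedy_knapsack`) are hypothetical.

```python
def coverage(sets, S):
    """Submodular coverage function: size of the union of the chosen sets."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def greedy_knapsack(sets, cost, budget):
    """Greedy ratio heuristic: repeatedly add the element with the best
    marginal-coverage-per-cost ratio that still fits in the budget."""
    S, spent = [], 0.0
    remaining = set(sets)
    while True:
        best, best_ratio = None, 0.0
        base = coverage(sets, S)
        for e in sorted(remaining):          # sorted for deterministic ties
            if spent + cost[e] > budget:
                continue
            ratio = (coverage(sets, S + [e]) - base) / cost[e]
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:
            return S
        S.append(best)
        spent += cost[best]
        remaining.remove(best)

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
cost = {"a": 2.0, "b": 1.0, "c": 1.0}
print(greedy_knapsack(sets, cost, budget=3.0))  # → ['b', 'a']
```

Greedy ratio heuristics of this kind carry guarantees only for modular budgets; the paper's point is precisely what can still be guaranteed when the budget is submodular.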
Fast Semidifferential-based Submodular Function Optimization
, 2013
Abstract

Cited by 14 (3 self)
We present a practical and powerful new framework for both unconstrained and constrained submodular function optimization based on discrete semidifferentials (sub- and super-differentials). The resulting algorithms, which repeatedly compute and then efficiently optimize submodular semigradients, offer new methods and generalize many old ones for submodular optimization. Our approach, moreover, takes steps towards providing a unifying paradigm applicable to both submodular minimization and maximization, problems that historically have been treated quite distinctly. The practicality of our algorithms is important since interest in submodularity, owing to its natural and wide applicability, has recently been in ascendance within machine learning. We analyze theoretical properties of our algorithms for minimization and maximization, and show that many state-of-the-art maximization algorithms are special cases. Lastly, we complement our theoretical analyses with supporting empirical experiments.
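To give a concrete sense of what a submodular semigradient looks like (a sketch under our own naming, not code from the paper): a supergradient of a submodular f at a set X induces a modular function that upper-bounds f everywhere and is tight at X, and majorize-minimize style algorithms repeatedly minimise such bounds. Below is one of the standard modular upper bounds, checked against a small graph-cut function.

```python
from itertools import chain, combinations

def cut(edges, S):
    """Undirected graph cut: edges crossing (S, complement). Submodular."""
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

def modular_upper_bound(f, X):
    """A standard modular (supergradient-based) upper bound of submodular f,
    tight at X:  m_X(S) = f(X) - sum_{j in X minus S} f(j | X minus {j})
                               + sum_{j in S minus X} f(j | empty)."""
    X = set(X)
    def m(S):
        S = set(S)
        val = f(X)
        for j in X - S:
            val -= f(X) - f(X - {j})     # marginal f(j | X \ {j})
        for j in S - X:
            val += f({j}) - f(set())     # marginal f(j | empty set)
        return val
    return m

edges = [("a", "b"), ("b", "c"), ("a", "c")]
V = {"a", "b", "c"}
f = lambda S: cut(edges, S)
m = modular_upper_bound(f, X={"a"})

# m majorizes f on every subset of V and matches it at X
subsets = chain.from_iterable(combinations(sorted(V), r) for r in range(4))
assert all(m(set(S)) >= f(set(S)) for S in subsets)
assert m({"a"}) == f({"a"})
```

Minimizing a modular function is trivial (keep only negative-weight elements), which is what makes repeatedly optimizing such bounds fast.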
Curvature and Optimal Algorithms for Learning and Minimizing Submodular Functions
 IN NIPS
, 2013
Abstract

Cited by 9 (6 self)
We investigate three related and important problems connected to machine learning: approximating a submodular function everywhere, learning a submodular function (in a PAC-like setting [28]), and constrained minimization of submodular functions. We show that the complexity of all three problems depends on the “curvature” of the submodular function, and provide lower and upper bounds that refine and improve previous results [2, 6, 8, 27]. Our proof techniques are fairly generic. We either use a black-box transformation of the function (for approximation and learning), or a transformation of algorithms to use an appropriate surrogate function (for minimization). Curiously, curvature has been known to influence approximations for submodular maximization [3, 29], but its effect on minimization, approximation and learning has hitherto been open. We complete this picture, and also support our theoretical claims with empirical results.
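The curvature referred to here has a simple closed form for a monotone submodular f: κ_f = 1 − min_j f(j | V∖{j}) / f({j}), with κ_f = 0 for modular functions and κ_f approaching 1 as elements become redundant. A small illustrative sketch (hypothetical names, with a coverage function standing in for f):

```python
def coverage(sets, S):
    """Monotone submodular coverage function: size of the union."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def curvature(sets):
    """Total curvature: kappa = 1 - min_j f(j | V \\ {j}) / f({j})."""
    V = list(sets)
    fV = coverage(sets, V)
    kappa = 0.0
    for j in V:
        rest = [v for v in V if v != j]
        marginal = fV - coverage(sets, rest)   # f(j | V \ {j})
        singleton = coverage(sets, [j])        # f({j})
        kappa = max(kappa, 1 - marginal / singleton)
    return kappa

print(curvature({"a": {1, 2}, "b": {2, 3}}))  # overlapping sets → 0.5
print(curvature({"a": {1}, "b": {2}}))        # disjoint (modular) → 0.0
```

The overlap between the two sets is exactly what shrinks the late marginal gains relative to the singleton values, pushing κ above zero.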
The Lovász-Bregman Divergence and connections to rank aggregation, clustering and web ranking
Abstract

Cited by 6 (1 self)
We extend the recently introduced theory of Lovász-Bregman (LB) divergences [19] in several ways. We show that they represent a distortion between a “score” and an “ordering”, thus providing a new view of rank aggregation and order-based clustering with interesting connections to web ranking. We show that the LB divergences have a number of properties akin to many permutation-based metrics, and in fact have as special cases forms very similar to the Kendall-τ metric. We also show how the LB divergences subsume a number of commonly used ranking measures in information retrieval, like NDCG [22] and AUC [35]. Unlike the traditional permutation-based metrics, however, the LB divergence naturally captures a notion of “confidence” in the orderings, thus providing a new representation for applications that aggregate scores rather than just orderings. We show that a number of recently used web ranking models are forms of Lovász-Bregman rank aggregation, and also observe that a natural form of the Mallows model using the LB divergence has been used as a conditional ranking model for the “Learning to Rank” problem.
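For intuition (an illustrative sketch, not the paper's code): the LB divergence between a score vector x and an ordering induced by y can be written d_f(x, y) = f̂(x) − ⟨h_{σ_y}, x⟩, where f̂ is the Lovász extension of a submodular f and h_σ is the base-polytope vertex obtained by Edmonds' greedy procedure along the ordering σ. It is non-negative and vanishes when x and y induce the same ordering, which is the sense in which it compares a score against an ordering.

```python
def greedy_vertex(f, order):
    """Edmonds' greedy vertex of the base polytope of f for a given ordering."""
    h, prefix, prev = {}, [], f(set())
    for j in order:
        prefix.append(j)
        cur = f(set(prefix))
        h[j] = cur - prev            # marginal gain along the ordering
        prev = cur
    return h

def lb_divergence(f, x, y):
    """Lovász-Bregman divergence d_f(x, y) = fhat(x) - <h_{sigma_y}, x>,
    where sigma_y orders the coordinates of y decreasingly."""
    sigma_x = sorted(x, key=x.get, reverse=True)
    sigma_y = sorted(y, key=y.get, reverse=True)
    hx, hy = greedy_vertex(f, sigma_x), greedy_vertex(f, sigma_y)
    fhat = sum(hx[j] * x[j] for j in x)   # Lovász extension at x
    return fhat - sum(hy[j] * x[j] for j in x)

f = lambda S: min(len(S), 2)              # concave of cardinality: submodular
x = {"a": 3.0, "b": 2.0, "c": 1.0}
y = {"a": 1.0, "b": 2.0, "c": 3.0}
print(lb_divergence(f, x, y))  # orderings disagree → 2.0
print(lb_divergence(f, x, x))  # same ordering → 0.0
```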
Multivariate Spearman’s rho for aggregating ranks using copulas
, 2014
Abstract

Cited by 1 (0 self)
We study the problem of rank aggregation: given a set of ranked lists, we want to form a consensus ranking. Furthermore, we consider the case of extreme lists, i.e., only the ranks of the best or worst elements are known. Our main contribution is the derivation of a nonparametric estimator for rank aggregation based on multivariate extensions of Spearman’s ρ, which measures correlation between a set of ranked lists. Multivariate Spearman’s ρ is defined using copulas, and we show that the geometric mean of normalised ranks maximises multivariate correlation. Motivated by this, we propose a weighted geometric mean approach for learning to rank, which has a closed-form least-squares solution. When only the best or worst elements of a ranked list are known, we impute the missing ranks by the average value, allowing us to apply Spearman’s ρ. Finally, we demonstrate good performance on the rank aggregation benchmarks MQ2007 and MQ2008.
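The geometric-mean result can be illustrated in a few lines (a sketch with hypothetical names, not the authors' estimator): normalise each rank list to (0, 1], score each item by the geometric mean of its normalised ranks, and sort by that score. Complete lists are assumed here; the paper's imputation step for extreme lists is omitted.

```python
import math

def aggregate_ranks(rank_lists):
    """Consensus ranking via the geometric mean of normalised ranks.
    rank_lists: list of dicts mapping item -> rank (1 = best); all lists
    are assumed complete (no missing ranks to impute)."""
    items = list(rank_lists[0])
    n = len(items)
    score = {}
    for item in items:
        normed = [r[item] / n for r in rank_lists]   # ranks scaled to (0, 1]
        score[item] = math.exp(sum(math.log(v) for v in normed) / len(normed))
    return sorted(items, key=lambda it: score[it])   # smaller score = better

r1 = {"x": 1, "y": 2, "z": 3}
r2 = {"x": 2, "y": 1, "z": 3}
print(aggregate_ranks([r1, r2]))  # → ['x', 'y', 'z']
```

Because the geometric mean works in log-space, a single very bad rank penalises an item more than it would under an arithmetic mean of ranks.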