Results 1–10 of 40
Maximizing nonmonotone submodular functions
In Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2007
Abstract

Cited by 146 (18 self)
Submodular maximization generalizes many important problems including Max Cut in directed/undirected graphs and hypergraphs, certain constraint satisfaction problems and maximum facility location problems. Unlike the problem of minimizing submodular functions, the problem of maximizing submodular functions is NP-hard. In this paper, we design the first constant-factor approximation algorithms for maximizing nonnegative submodular functions. In particular, we give a deterministic local search 1/2-approximation and a randomized approximation algorithm …
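The local search idea in this abstract can be sketched concretely. Below is a minimal illustration only, not the paper's exact algorithm (the FOCS 2007 analysis adds an improvement threshold and a final comparison against the complement to certify the ratio): starting from the empty set, toggle single elements in or out while the value strictly improves, using a directed cut function as the nonnegative submodular test objective. The function names and toy graph are hypothetical.

```python
def cut_value(S, edges):
    # Directed cut: number of edges leaving S.  Cut functions are
    # nonnegative and submodular, so they make a convenient test case.
    S = set(S)
    return sum(1 for u, v in edges if u in S and v not in S)

def local_search_max(f, ground, eps=1e-9):
    # Basic single-element local search: add or remove one element while
    # the objective strictly improves.  The paper refines this loop to
    # obtain its constant-factor guarantees.
    S = set()
    improved = True
    while improved:
        improved = False
        for e in ground:
            T = S ^ {e}  # toggle e in or out of S
            if f(T) > f(S) + eps:
                S, improved = T, True
                break
    return S

# Toy instance (hypothetical): a 3-node digraph.
EDGES = [(1, 2), (2, 3), (3, 1), (1, 3)]
```

On the toy digraph EDGES, the loop stops at the set {1}, a local (here also global) maximum with cut value 2.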
Maximizing a Monotone Submodular Function subject to a Matroid Constraint
, 2008
Abstract

Cited by 62 (0 self)
Let f : 2^X → R+ be a monotone submodular set function, and let (X, I) be a matroid. We consider the problem max_{S ∈ I} f(S). It is known that the greedy algorithm yields a 1/2-approximation [14] for this problem. For certain special cases, e.g. max_{|S| ≤ k} f(S), the greedy algorithm yields a (1 − 1/e)-approximation. It is known that this is optimal both in the value oracle model (where the only access to f is through a black box returning f(S) for a given set S) [28], and also for explicitly posed instances assuming P ≠ NP [10]. In this paper, we provide a randomized (1 − 1/e)-approximation for any monotone submodular function and an arbitrary matroid. The algorithm works in the value oracle model. Our main tools are a variant of the pipage rounding technique of Ageev and Sviridenko [1], and a continuous greedy process that might be of independent interest. As a special case, our algorithm implies an optimal approximation for the Submodular Welfare Problem in the value oracle model [32]. As a second application, we show that the Generalized Assignment Problem (GAP) is also a special case; although the reduction requires |X| to be exponential in the original problem size, we are able to achieve a (1 − 1/e − o(1))-approximation for GAP, simplifying previously known algorithms. Additionally, the reduction enables us to obtain approximation algorithms for variants of GAP with more general constraints.
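The two guarantees quoted above for the cardinality special case max_{|S| ≤ k} f(S) come from the classic greedy algorithm, sketched below. (The paper's main contribution, continuous greedy plus pipage rounding for general matroids, is substantially more involved; the function names and toy coverage instance here are illustrative.)

```python
def coverage(S, sets):
    # f(S) = size of the union of the chosen sets: monotone submodular.
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def greedy_cardinality(f, ground, k):
    # Classic greedy for max f(S) subject to |S| <= k: repeatedly add the
    # element with the largest marginal gain.  For monotone submodular f
    # this achieves the (1 - 1/e) factor quoted in the abstract.
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground if e not in S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S

# Toy instance (hypothetical).
SETS = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}}
```

On SETS with k = 2 the greedy first takes set 2 (gain 4), then set 0 (gain 3), reaching the optimum of 7 covered elements.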
Maximizing Submodular Set Functions Subject to Multiple Linear Constraints
, 2009
Abstract

Cited by 49 (1 self)
The concept of submodularity plays a vital role in combinatorial optimization. In particular, many important optimization problems can be cast as submodular maximization problems, including maximum coverage, maximum facility location and max cut in directed/undirected graphs. In this paper we present the first known approximation algorithms for the problem of maximizing a nondecreasing submodular set function subject to multiple linear constraints. Given a d-dimensional budget vector L̄, for some d ≥ 1, and an oracle for a nondecreasing submodular set function f over a universe U, where each element e ∈ U is associated with a d-dimensional cost vector, we seek a subset of elements S ⊆ U whose total cost is at most L̄, such that f(S) is maximized. We develop a framework for maximizing submodular functions subject to d linear constraints that yields a (1 − ε)(1 − e^(−1))-approximation to the optimum for any ε > 0, where d > 1 is some constant. Our study is motivated by a variant of the classical maximum coverage problem that we call maximum coverage with multiple packing constraints. We use our framework to obtain the same approximation ratio for this problem. To the best of our knowledge, this is the first time the theoretical bound of 1 − e^(−1) is (almost) matched for both of these problems.
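For concreteness, the problem's feasibility structure can be put in code: each element carries a d-dimensional cost vector and the chosen set must fit the budget coordinate-wise. The naive marginal-gain greedy below only illustrates the setting; it does not achieve the paper's (1 − ε)(1 − e^(−1)) bound, which requires enumeration of high-profit elements combined with an LP-based phase. All names and the toy instance are hypothetical.

```python
def budgeted_greedy(f, costs, L, ground):
    # Marginal-gain greedy that respects a d-dimensional budget L.
    # Illustrates the constraint structure only, not the paper's framework.
    d = len(L)
    S, used = set(), [0] * d
    while True:
        feasible = [e for e in ground - S
                    if all(used[i] + costs[e][i] <= L[i] for i in range(d))]
        if not feasible:
            break
        gains = {e: f(S | {e}) - f(S) for e in feasible}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
        used = [used[i] + costs[best][i] for i in range(d)]
    return S

# Toy instance (hypothetical): coverage objective, 2-dimensional costs.
SETS = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}}

def cover(S):
    out = set()
    for i in S:
        out |= SETS[i]
    return len(out)
```

With budget (3, 3) and costs {0: (1, 2), 1: (1, 1), 2: (2, 1)}, the greedy picks sets 2 and 0, exactly exhausting the budget in both coordinates.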
Submodular Approximation: Sampling-based Algorithms and Lower Bounds
, 2008
Abstract

Cited by 40 (0 self)
We introduce several generalizations of classical computer science problems obtained by replacing simpler objective functions with general submodular functions. The new problems include submodular load balancing, which generalizes load balancing or minimum-makespan scheduling, submodular sparsest cut and submodular balanced cut, which generalize their respective graph cut problems, as well as submodular function minimization with a cardinality lower bound. We establish upper and lower bounds for the approximability of these problems with a polynomial number of queries to a function-value oracle. The approximation guarantees for most of our algorithms are of the order of √(n/ln n). We show that this is the inherent difficulty of the problems by proving matching lower bounds. We also give an improved lower bound for the problem of approximately learning a monotone submodular function. In addition, we present an algorithm for approximately learning submodular functions with special structure, whose guarantee is close to the lower bound. Although quite restrictive, the class of functions with this structure includes the ones that are used for lower bounds both by us and in previous work. This demonstrates that if there are significantly stronger lower bounds for this problem, they rely on more general submodular functions.
On k-Column Sparse Packing Programs
, 2009
Abstract

Cited by 17 (3 self)
We consider the class of packing integer programs (PIPs) that are column sparse, i.e. there is a specified upper bound k on the number of constraints that each variable appears in. We give an (ek + o(k))-approximation algorithm for k-column sparse PIPs, improving on recent results of k^2 · 2^k [14] and O(k^2) [3, 5]. We also show that the integrality gap of our linear programming relaxation is at least 2k − 1; it is known that k-column sparse PIPs are Ω(k/log k)-hard to approximate [8]. We also extend our result (at the loss of a small constant factor) to the more general case of maximizing a submodular objective over k-column sparse packing constraints.
Fast Semidifferential-based Submodular Function Optimization
, 2013
Abstract

Cited by 14 (3 self)
We present a practical and powerful new framework for both unconstrained and constrained submodular function optimization based on discrete semidifferentials (sub- and superdifferentials). The resulting algorithms, which repeatedly compute and then efficiently optimize submodular semigradients, offer new methods and generalize many old ones for submodular optimization. Our approach, moreover, takes steps towards providing a unifying paradigm applicable to both submodular minimization and maximization, problems that historically have been treated quite distinctly. The practicality of our algorithms is important since interest in submodularity, owing to its natural and wide applicability, has recently been in ascendance within machine learning. We analyze theoretical properties of our algorithms for minimization and maximization, and show that many state-of-the-art maximization algorithms are special cases. Lastly, we complement our theoretical analyses with supporting empirical experiments.
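A concrete instance of the "semigradient" notion used here: for a submodular f with f(∅) = 0, any permutation of the ground set induces a modular lower bound via chain marginal gains, tight on every prefix of the chain; these modular functions are exactly elements of the discrete subdifferential the abstract refers to. A sketch with illustrative names and a toy coverage instance:

```python
def modular_lower_bound(f, perm):
    # For submodular f with f(empty) = 0, the chain of marginal gains along a
    # permutation defines weights w with sum_{e in S} w[e] <= f(S) for all S,
    # with equality on every prefix of the chain.
    weights, prefix, prev = {}, set(), f(set())
    for e in perm:
        prefix = prefix | {e}
        cur = f(prefix)
        weights[e] = cur - prev
        prev = cur
    return weights

# Toy coverage function (hypothetical instance).
SETS = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}}

def cover(S):
    out = set()
    for i in S:
        out |= SETS[i]
    return len(out)
```

For the chain 0, 1, 2 the bound is tight on the prefixes {0}, {0, 1}, {0, 1, 2}, and a strict underestimate elsewhere, e.g. w[1] + w[2] = 4 ≤ cover({1, 2}) = 5.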
Fast algorithms for maximizing submodular functions
In SODA, 2014
Abstract

Cited by 13 (3 self)
There has been much progress recently on improved approximations for problems involving submodular objective functions, and many interesting techniques have been developed. However, the resulting algorithms are often slow and impractical. In this paper we develop algorithms that match the best known approximation guarantees, but with significantly improved running times, for maximizing a monotone submodular function f : 2^[n] → R+ subject to various constraints. As in previous work, we measure the number of oracle calls to the objective function, which is the dominating term in the running time. Our first result is a simple algorithm that gives a (1 − 1/e − ε)-approximation for a cardinality constraint using O((n/ε) log(n/ε)) queries, and a 1/(p + 2ℓ + 1 + ε)-approximation for the intersection of a p-system and ℓ knapsack (linear) constraints using O((n^2/ε) log^2(n/ε)) queries. This is the first approximation for a p-system combined with linear constraints. (We also show that the factor of p cannot be improved for maximizing over a p-system.) The main idea behind these algorithms serves as a building block in our more sophisticated algorithms. Our main result is a new variant of the continuous greedy algorithm, which interpolates between the classical greedy algorithm and a truly continuous algorithm. We show how this algorithm can be implemented for matroid and knapsack constraints using Õ(n^2) oracle calls to the objective function. (Previous variants and alternative techniques were known to use at least Õ(n^4) oracle calls.) This leads to an O((n^2/ε^4) log^2(n/ε))-time (1 − 1/e − ε)-approximation for a matroid constraint. For a knapsack constraint, we develop a more involved (1 − 1/e − ε)-approximation algorithm that runs in time O(n^2 ((1/ε) log n)^poly(1/ε)).
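The "simple algorithm" for a cardinality constraint can be sketched as a decreasing-threshold greedy: rather than paying n queries to find the exact best element each round, accept any element whose marginal gain clears a geometrically decaying threshold. A hedged sketch in the spirit of the paper's first result (parameter choices simplified; names and the toy instance are illustrative):

```python
def threshold_greedy(f, ground, k, eps=0.1):
    # Decreasing-threshold greedy for max f(S), |S| <= k: accept any element
    # whose marginal gain clears tau, then decay tau by (1 - eps).  Roughly
    # O((n/eps) log(n/eps)) queries at the cost of an extra eps in the
    # (1 - 1/e) guarantee.
    S = set()
    d = max(f({e}) for e in ground)   # largest singleton value
    tau = d
    while tau > (eps / len(ground)) * d and len(S) < k:
        for e in ground:
            if e not in S and len(S) < k and f(S | {e}) - f(S) >= tau:
                S.add(e)
        tau *= (1 - eps)
    return S

# Toy coverage instance (hypothetical).
SETS = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6, 7}}

def cover(S):
    out = set()
    for i in S:
        out |= SETS[i]
    return len(out)
```

On this instance the threshold starts at 4 (accepting set 2) and decays until it reaches 2.916, at which point set 0's gain of 3 is accepted, matching the plain greedy's optimum of 7.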
Algorithms for approximate minimization of the difference between submodular functions, with applications
, 2012
Abstract

Cited by 12 (11 self)
We extend the work of Narasimhan and Bilmes [30] for minimizing set functions representable as a difference between submodular functions. Similar to [30], our new algorithms are guaranteed to monotonically reduce the objective function at every step. We empirically and theoretically show that the per-iteration cost of our algorithms is much less than [30], and our algorithms can be used to efficiently minimize a difference between submodular functions under various combinatorial constraints, a problem not previously addressed. We provide computational bounds and a hardness result on the multiplicative inapproximability of minimizing the difference between submodular functions. We show, however, that it is possible to give worst-case additive bounds by providing a polynomial-time computable lower bound on the minima. Finally we show how a number of machine learning problems can be modeled as minimizing the difference between submodular functions. We experimentally show the validity of our algorithms by testing them on the problem of feature selection with submodular-cost features.
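The monotone-descent idea can be sketched as a "modular-modular" procedure of the kind this line of work studies: at each step, replace f by a tight modular upper bound and g by a tight modular lower bound at the current set, then minimize the resulting modular surrogate exactly. Because both bounds are exact at the current set, the true objective f − g never increases. A sketch assuming f(∅) = g(∅) = 0, with illustrative names and a toy instance (this is not the paper's exact procedure):

```python
def modular_upper(f, X, ground):
    # Tight modular upper bound on submodular f at X (a standard
    # Nemhauser-Wolsey-style bound), assuming f(empty) = 0.  We return only
    # per-element weights a_e; the additive constant does not affect which
    # set minimizes the surrogate.
    return {e: (f(X) - f(X - {e})) if e in X else f({e}) for e in ground}

def modular_lower(g, X, ground):
    # Tight modular lower bound on submodular g at X: chain subgradient with
    # the elements of X listed first, so the bound is exact at X.
    perm = sorted(X) + sorted(set(ground) - X)
    w, prefix, prev = {}, set(), g(set())
    for e in perm:
        prefix = prefix | {e}
        w[e] = g(prefix) - prev
        prev = g(prefix)
    return w

def ds_minimize(f, g, ground, max_iter=50):
    # Minimize f(X) - g(X): at each step minimize the modular surrogate
    # upper(f) - lower(g) exactly (keep exactly the elements with negative
    # surrogate weight).  Both bounds are tight at the current X, so the true
    # objective never increases.  The answer depends on the starting set.
    X = set(ground)
    for _ in range(max_iter):
        a = modular_upper(f, X, ground)
        w = modular_lower(g, X, ground)
        Y = {e for e in ground if a[e] - w[e] < 0}
        if Y == X:
            break
        X = Y
    return X

# Toy instance (hypothetical): f = coverage, g = cardinality, both submodular.
SETS = {1: {'a'}, 2: {'a'}, 3: {'b'}}

def cover(S):
    out = set()
    for i in S:
        out |= SETS[i]
    return len(out)
```

Here the surrogate step moves from {1, 2, 3} to {1, 2}, both with objective −1, which is the minimum on this instance, and then stops.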
A Tight Combinatorial Algorithm for Submodular Maximization Subject to a Matroid Constraint
, 2012
Abstract

Cited by 11 (2 self)
We present an optimal, combinatorial (1 − 1/e)-approximation algorithm for monotone submodular optimization over a matroid constraint. Compared to the continuous greedy algorithm (Calinescu, Chekuri, Pal and Vondrak, 2008), our algorithm is extremely simple and requires no rounding. It consists of the greedy algorithm followed by local search. Both phases are run not on the actual objective function, but on a related non-oblivious potential function, which is also monotone submodular. In our previous work on maximum coverage (Filmus and Ward, 2011), the potential function gives more weight to elements covered multiple times. We generalize this approach from coverage functions to arbitrary monotone submodular functions. When the objective function is a coverage function, both definitions of the potential function coincide. The parameters used to define the potential function are closely related to Padé approximants of exp(x) evaluated at x = 1. We use this connection to determine the approximation ratio of the algorithm.