Results 1–10 of 11
Submodular function maximization via the multilinear relaxation and contention resolution schemes
 In ACM Symposium on Theory of Computing
, 2011
Abstract

Cited by 38 (2 self)
We consider the problem of maximizing a nonnegative submodular set function f: 2^N → R_+ over a ground set N subject to a variety of packing-type constraints, including (multiple) matroid constraints, knapsack constraints, and their intersections. In this paper we develop a general framework that allows us to derive a number of new results, in particular when f may be a nonmonotone function. Our algorithms are based on (approximately) solving the multilinear extension F of f [5] over a polytope P that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully in some settings [6, 22, 24, 13, 3], it has been limited in some important ways. We overcome these limitations as follows. First, we give constant-factor approximation algorithms to maximize
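The multilinear extension at the heart of this framework is simple to state: F(x) = E[f(R(x))], where the random set R(x) contains each element i independently with probability x_i. A minimal Monte Carlo sketch (the coverage function and sample count below are illustrative assumptions, not from the paper):

```python
import random

def multilinear_extension(f, x, samples=2000, seed=0):
    """Monte Carlo estimate of F(x) = E[f(R)], where R contains
    element i independently with probability x[i]."""
    rng = random.Random(seed)
    n = len(x)
    total = 0.0
    for _ in range(samples):
        R = {i for i in range(n) if rng.random() < x[i]}
        total += f(R)
    return total / samples

# Illustrative coverage function: f(S) = size of the union of chosen sets
sets = [{0, 1}, {1, 2}, {2, 3}]
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(multilinear_extension(f, [0.5, 0.5, 0.5]))
```

At integral points F agrees with f exactly; in between it smoothly interpolates, which is what makes continuous optimization over a polytope possible.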
Fast algorithms for maximizing submodular functions
 In SODA
, 2014
Abstract

Cited by 13 (3 self)
There has been much progress recently on improved approximations for problems involving submodular objective functions, and many interesting techniques have been developed. However, the resulting algorithms are often slow and impractical. In this paper we develop algorithms that match the best known approximation guarantees, but with significantly improved running times, for maximizing a monotone submodular function f: 2^[n] → R_+ subject to various constraints. As in previous work, we measure the number of oracle calls to the objective function, which is the dominating term in the running time. Our first result is a simple algorithm that gives a (1 − 1/e − ε)-approximation for a cardinality constraint using O((n/ε) log(n/ε)) queries, and a 1/(p + 2ℓ + 1 + ε)-approximation for the intersection of a p-system and ℓ knapsack (linear) constraints using O((n/ε²) log²(n/ε)) queries. This is the first approximation for a p-system combined with linear constraints. (We also show that the factor of p cannot be improved for maximizing over a p-system.) The main idea behind these algorithms serves as a building block in our more sophisticated algorithms. Our main result is a new variant of the continuous greedy algorithm, which interpolates between the classical greedy algorithm and a truly continuous algorithm. We show how this algorithm can be implemented for matroid and knapsack constraints using Õ(n²) oracle calls to the objective function. (Previous variants and alternative techniques were known to use at least Õ(n⁴) oracle calls.) This leads to an O((n²/ε⁴) log²(n/ε))-time (1 − 1/e − ε)-approximation for a matroid constraint. For a knapsack constraint, we develop a more involved (1 − 1/e − ε)-approximation algorithm that runs in time O(n² ((1/ε) log n)^poly(1/ε)).
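The decreasing-threshold idea behind such query-efficient algorithms can be sketched as follows: accept any element whose marginal gain clears the current threshold, then lower the threshold geometrically. This is a simplified illustration only (the coverage instance and eps value are assumptions, not the paper's pseudocode):

```python
def threshold_greedy(f, n, k, eps=0.1):
    """Decreasing-threshold greedy (sketch): add any element whose marginal
    gain meets the current threshold tau, then shrink tau geometrically.
    For monotone submodular f this pattern yields (1 - 1/e - O(eps))."""
    S = set()
    d = max(f({i}) for i in range(n))          # best singleton value
    tau = d
    while tau > (eps / n) * d and len(S) < k:
        for i in range(n):
            if i not in S and len(S) < k and f(S | {i}) - f(S) >= tau:
                S.add(i)
        tau *= 1 - eps
    return S

# Illustrative coverage instance: f(S) = size of the union of chosen sets
sets = [{0, 1}, {1, 2}, {2, 3}]
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(sorted(threshold_greedy(f, n=3, k=2)))  # [0, 2], covering all 4 elements
```

Each geometric sweep examines all n elements once, which is where the O((n/ε) log(n/ε)) query bound comes from: O((1/ε) log(n/ε)) threshold levels times n queries per level.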
Price of Correlations in Stochastic Optimization
Abstract

Cited by 7 (0 self)
When decisions are made in the presence of large-scale stochastic data, it is common to pay more attention to the easy-to-see statistics (e.g., the mean) than to the underlying correlations. One reason is that it is often much easier to solve a stochastic optimization problem by assuming independence across the random data. In this paper, we study the possible loss incurred by ignoring these correlations through a distributionally robust stochastic programming model, and propose a new concept called the Price of Correlations (POC) to quantify that loss. We show that the POC has a small upper bound for a wide class of cost functions, including uncapacitated facility location, Steiner tree, and submodular functions, suggesting that the intuitive approach of assuming an independent distribution may actually work well for these stochastic optimization problems. On the other hand, we demonstrate that for some cost functions, e.g., supermodular functions, the POC can be particularly large. We propose alternative ways to solve the corresponding distributionally robust models for these functions. As a byproduct, our analysis yields new results on social welfare maximization and the existence of Walrasian equilibria, which may be of independent interest.
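To make the flavor of the POC concrete, here is a toy instance (entirely hypothetical, not from the paper): two items, each present with marginal probability 1/2, and a submodular cost f(S) = 1 for nonempty S. Comparing the expected cost under independence against the worst correlated distribution with the same marginals gives a ratio of 4/3 here:

```python
from itertools import product

# Hypothetical toy instance: submodular cost, f(S) = 1 iff S is nonempty
def f(S):
    return 1.0 if S else 0.0

# Expected cost under the independent distribution (each item w.p. 1/2)
ind = sum(0.25 * f({i for i, b in enumerate(bits) if b})
          for bits in product([0, 1], repeat=2))

# Joint distributions with the same marginals are parameterized by
# q = P(both present): then P(neither) = q, so E[f] = 1 - q.
worst = max(1.0 - q / 100 for q in range(0, 51))  # worst case at q = 0
poc = worst / ind
print(ind, worst, round(poc, 3))  # 0.75 1.0 1.333
```

The worst case is the anti-correlated coupling (exactly one item present), and the gap 4/3 sits comfortably under the e/(e − 1) ≈ 1.58 bound the paper proves for submodular costs.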
Lower bounds for the Chvátal–Gomory rank in the 0/1 cube
, 2011
Abstract

Cited by 5 (2 self)
We revisit the method of Chvátal, Cook, and Hartmann for establishing lower bounds on the Chvátal–Gomory rank and develop a simpler method. We provide new families of polytopes in the 0/1 cube with high rank, and we describe a deterministic family achieving a rank of at least (1 + 1/e)n − 1 > n. Finally, we show how integrality gaps lead to lower bounds.
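For readers unfamiliar with the operation being ranked: one Chvátal–Gomory step takes a nonnegative combination of valid inequalities whose left-hand side is integral and rounds down the right-hand side. A minimal sketch (the small triangle instance is an illustrative assumption):

```python
import numpy as np

def cg_cut(A, b, lam):
    """One Chvátal–Gomory step: if lam >= 0 and lam^T A is integral, then
    lam^T A x <= floor(lam^T b) is valid for every integer point of Ax <= b."""
    a = lam @ A
    assert np.all(lam >= 0) and np.allclose(a, np.round(a))
    return np.round(a).astype(int), int(np.floor(lam @ b))

# Averaging two facets of {2x1 + x2 <= 2, x1 + 2x2 <= 2, x >= 0}
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([2.0, 2.0])
a, beta = cg_cut(A, b, np.array([1 / 3, 1 / 3]))
print(a, beta)  # [1 1] 1 : cuts off the fractional vertex (2/3, 2/3)
```

The CG rank of a polytope is the number of rounds of taking all such cuts needed to reach the integer hull; the paper's lower bounds show that in the 0/1 cube this can exceed n.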
A (k + 3)/2-approximation algorithm for monotone submodular k-set packing and general k-exchange systems
, 2012
Algebraic Algorithms for Linear Matroid Parity Problems
Abstract

Cited by 4 (1 self)
We present fast and simple algebraic algorithms for the linear matroid parity problem and its applications. For the linear matroid parity problem, we obtain a simple randomized algorithm with running time O(mr^(ω−1)), where m and r are the number of columns and rows, respectively, and ω ≈ 2.376 is the matrix multiplication exponent. This improves the O(mr^ω)-time algorithm by Gabow and Stallmann, and matches the running time of the algebraic algorithm for linear matroid intersection, answering a question of Harvey. We also present a very simple alternative algorithm with running time O(mr²), which does not need fast matrix multiplication. We further improve the algebraic algorithms for some specific graph problems of interest. For Mader's disjoint S-paths problem, we present an O(n^ω)-time randomized algorithm, where n is the number of vertices. This improves the running time of the existing results considerably, and matches the running time of the algebraic algorithms for graph matching. For the graphic matroid parity problem, we give an O(n⁴)-time randomized algorithm, where n is the number of vertices, and an O(n³)-time randomized algorithm for a special case useful in designing approximation algorithms. These algorithms are optimal in terms of n, as the input size could be Ω(n⁴) and Ω(n³), respectively. The techniques are based on the algebraic algorithmic framework developed by Mucha and Sankowski, Harvey, and Sankowski. While linear matroid parity and Mader's disjoint S-paths are challenging generalizations for the design of combinatorial algorithms, our results show that both the algebraic algorithms for linear matroid intersection and graph matching can be extended nicely to these more general settings. All our algorithms are faster than the existing ones even if fast matrix multiplication is not used, and they are simple enough to be easily implemented in practice.
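The algebraic approach rests on Lovász's characterization: for generic scalars t_i, the optimum of linear matroid parity with column pairs (b_i, c_i) equals rank(Y)/2, where Y = Σ_i t_i (b_i c_iᵀ − c_i b_iᵀ). A rough numerical sketch (random integers stand in for the indeterminates, and the tiny instance is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def parity_rank_bound(pairs, r):
    """Sketch of Lovász's characterization: with generic scalars t_i,
    the linear matroid parity optimum equals rank(Y)/2 for
    Y = sum_i t_i (b_i c_i^T - c_i b_i^T)."""
    Y = np.zeros((r, r))
    for b, c in pairs:
        t = rng.integers(1, 10**6)          # random stand-in for a generic t_i
        Y += t * (np.outer(b, c) - np.outer(c, b))
    return np.linalg.matrix_rank(Y) // 2

e = np.eye(4)
pairs = [(e[0], e[1]), (e[2], e[3]), (e[0] + e[2], e[1] + e[3])]
print(parity_rank_bound(pairs, 4))  # 2: the first two pairs together span R^4
```

The fast algorithms in the paper avoid recomputing this rank from scratch, maintaining matrix inverses under low-rank updates instead.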
Improved Approximations for kExchange Systems (Extended Abstract)
Abstract

Cited by 1 (0 self)
Submodular maximization and set systems play a major role in combinatorial optimization. It has long been known that the greedy algorithm provides a 1/(k + 1)-approximation for maximizing a monotone submodular function over a k-system. For the special case of k-matroid intersection, a local search approach was recently shown to provide an improved approximation of 1/(k + δ) for arbitrary δ > 0. Unfortunately, many fundamental optimization problems are represented by a k-system that is not a k-matroid intersection. An interesting question is whether the local search approach can be extended to include such problems. We answer this question affirmatively. Motivated by the b-matching and k-set packing problems, as well as the more general matroid k-parity problem, we introduce a new class of set systems called k-exchange systems, which includes k-set packing, b-matching, matroid k-parity in strongly base orderable matroids, and additional combinatorial optimization problems such as independent set in (k + 1)-claw-free graphs, asymmetric TSP, job interval selection with identical lengths, and frequency allocation on lines. We give a natural local search algorithm which improves upon the current greedy approximation for this new class of independence systems. Unlike known local search algorithms for similar problems, we use counting arguments to bound the performance of our algorithm. Moreover, we consider additional objective functions and provide improved approximations for them as well. In the case of linear objective functions, we give a non-oblivious local search algorithm that improves upon existing local search approaches for matroid k-parity.
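A minimal sketch of the local search idea for the weighted set packing special case (single swaps only; the paper's k-exchange algorithm performs larger, more carefully chosen exchanges, and the instance below is an illustrative assumption):

```python
def local_search_packing(sets, weight):
    """1-swap local search for weighted set packing: repeatedly bring in a
    set whose weight exceeds the total weight of the chosen sets it
    intersects, evicting those conflicting sets."""
    sol = []                         # indices of pairwise-disjoint chosen sets
    improved = True
    while improved:
        improved = False
        for i, s in enumerate(sets):
            if i in sol:
                continue
            conflicts = [j for j in sol if sets[j] & s]
            if weight(i) > sum(weight(j) for j in conflicts):
                sol = [j for j in sol if j not in conflicts] + [i]
                improved = True
    return sorted(sol)

sets = [{1, 2}, {2, 3}, {3, 4}]
print(local_search_packing(sets, weight=lambda i: 1))  # [0, 2]
```

Total weight strictly increases with every swap, so the search terminates; the approximation analysis then bounds how far a locally optimal solution can be from the optimum.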
On Laminar Matroids and b-Matchings
Abstract
We prove that three matroid optimisation problems, namely the matchoid, matroid parity, and matroid matching problems, all reduce to the b-matching problem when the matroids concerned are laminar. We then use this equivalence to show that laminar matroid parity polytopes are affinely congruent to b-matching polytopes and have Chvátal rank equal to one. On the other hand, we prove that laminar matroid parity polytopes can have facet-defining inequalities whose left-hand-side coefficients are greater than 2.
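For context on the structure the reduction exploits: a laminar matroid is given by a laminar (nested or disjoint) family of sets with capacities, and a set is independent iff it respects every capacity. A minimal independence-oracle sketch (the family below is an illustrative assumption):

```python
def laminar_independent(S, family):
    """Independence oracle for a laminar matroid: S is independent iff
    |S ∩ A| <= cap for every capacitated set (A, cap) in the laminar family."""
    return all(len(S & A) <= cap for A, cap in family)

# Laminar family on {1,...,4}: {1,2} nested inside the ground set
family = [({1, 2}, 1), ({1, 2, 3, 4}, 2)]
print(laminar_independent({1, 3}, family))  # True
print(laminar_independent({1, 2}, family))  # False: |S ∩ {1,2}| = 2 > 1
```

The nesting of the family is what lets each capacity constraint be modeled by degree bounds in a b-matching instance.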
Algebraic Algorithms in Combinatorial Optimization
Abstract
We extend the recent algebraic approach to design fast algorithms for two problems in combinatorial optimization. First we study the linear matroid parity problem, a common generalization of graph matching and linear matroid intersection that has applications in various areas. We show that Harvey's algorithm for linear matroid intersection can be easily generalized to linear matroid parity. This gives an algorithm that is faster and simpler than previously known algorithms. For some graph problems that can be reduced to linear matroid parity, we again show that Harvey's algorithm for graph matching can be generalized to these problems to give faster algorithms. While linear matroid parity and some of its applications are challenging generalizations, our results show that the algebraic algorithmic framework can be adapted nicely to give faster and simpler algorithms in more general settings. Then we study the all-pairs edge connectivity problem for directed graphs, where we would like to compute the minimum s-t cut value between all pairs of vertices. Using a combinatorial approach it is not known how to solve this problem faster