Results 1–9 of 9
Submodular function maximization via the multilinear relaxation and contention resolution schemes
 In ACM Symposium on Theory of Computing (STOC), 2011
Abstract

Cited by 40 (2 self)
We consider the problem of maximizing a nonnegative submodular set function f : 2^N → R_+ over a ground set N subject to a variety of packing-type constraints, including (multiple) matroid constraints, knapsack constraints, and their intersections. In this paper we develop a general framework that allows us to derive a number of new results, in particular when f may be a nonmonotone function. Our algorithms are based on (approximately) solving the multilinear extension F of f [5] over a polytope P that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully in some settings [6, 22, 24, 13, 3], it has been limited in some important ways. We overcome these limitations as follows. First, we give constant-factor approximation algorithms to maximize
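The multilinear extension F(x) = E[f(R(x))] referenced in this abstract has no closed form for general f, so it is typically estimated by random sampling: include each element i independently with probability x_i and average f over the sampled sets. A minimal sketch under that standard definition (the coverage function and sample count below are illustrative, not taken from the paper):

```python
import random

def multilinear_extension(f, ground_set, x, samples=2000):
    """Monte Carlo estimate of F(x) = E[f(R)], where R contains each
    element i independently with probability x[i]."""
    total = 0.0
    for _ in range(samples):
        r = {i for i in ground_set if random.random() < x[i]}
        total += f(r)
    return total / samples

# Illustrative coverage function: f(S) = size of the union of the chosen sets.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}}
f = lambda s: len(set().union(*(sets[i] for i in s))) if s else 0
x = {0: 0.5, 1: 0.5, 2: 1.0}
est = multilinear_extension(f, sets.keys(), x)
```

Here the exact value is E[|covered|] = 2.75, so the estimate concentrates near that with a few thousand samples.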
Constrained nonmonotone submodular maximization: offline and secretary algorithms
, 2010
Abstract

Cited by 22 (0 self)
Constrained submodular maximization problems have long been studied, with near-optimal results known under a variety of constraints when the submodular function is monotone. The case of nonmonotone submodular maximization is less understood: the first approximation algorithms even for the unconstrained setting were given by Feige et al. (FOCS ’07). More recently, Lee et al. (STOC ’09, APPROX ’09) showed how to approximately maximize nonmonotone submodular functions when the constraints are given by the intersection of
Two-stage Robust Network Design with Exponential Scenarios
Abstract

Cited by 12 (0 self)
We study two-stage robust variants of combinatorial optimization problems like Steiner tree, Steiner forest, and uncapacitated facility location. The robust optimization problems, previously
Robust and Max-Min Optimization under Matroid and Knapsack Uncertainty Sets
, 2011
Improved Approximations for Two-stage Min-Cut and Shortest Path Problems under Uncertainty (submitted to Mathematical Programming)
Abstract
In this paper, we study the robust and stochastic versions of the two-stage min-cut and shortest path problems introduced in Dhamdhere et al. [6], and give approximation algorithms with improved approximation factors. Specifically, we give a 2-approximation for the robust min-cut problem and a 4-approximation for the stochastic version. For the two-stage shortest path problem, we give a 3.39-approximation for the robust version and a 6.78-approximation for the stochastic version. Our results significantly improve the previous best approximation factors for the problems. In particular, we provide the first constant-factor approximation for the stochastic min-cut problem. Our algorithms are based on a guess-and-prune strategy that crucially exploits the nature of the robust and stochastic objective. In particular, we guess the worst-case second-stage cost and, based on the guess, select a subset of costly scenarios for the first-stage solution to address. The second-stage solution for any scenario is simply the min-cut (or shortest path) for that scenario. (Part of this work was done while D. Golovin was affiliated with Carnegie Mellon University.)
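The guess-and-prune idea in this abstract can be sketched schematically: guess a bound B on the worst-case second-stage cost, treat every scenario whose recourse would exceed B as "costly" and cover it up front, and let the rest pay at most B later. Everything below (scenario tuples, the toy first-stage solver, the cost numbers) is a hypothetical placeholder, not the paper's actual algorithm:

```python
def guess_and_prune(scenarios, second_stage_cost, first_stage_solver, guesses):
    """Schematic two-stage robust guess-and-prune (simplified): for each
    guessed bound on the worst-case second-stage cost, scenarios whose
    recourse cost would exceed the bound are handled in the first stage;
    the remaining scenarios pay at most the bound in the second stage."""
    best = None
    for bound in guesses:
        costly = [s for s in scenarios if second_stage_cost(s) > bound]
        total = first_stage_solver(costly) + bound  # first stage + worst recourse
        if best is None or total < best[0]:
            best = (total, bound, costly)
    return best  # (worst-case total, chosen bound, scenarios handled early)

# Toy instance: three scenarios with recourse costs; acting in the first
# stage costs half as much as recourse (purely illustrative numbers).
scenarios = [("a", 10), ("b", 3), ("c", 1)]
recourse = lambda s: s[1]
solver = lambda chosen: 0.5 * sum(s[1] for s in chosen)
best = guess_and_prune(scenarios, recourse, solver, guesses=[1, 3, 10])
```

In this toy run, guessing bound 1 handles scenarios "a" and "b" early for cost 6.5 and pays at most 1 later, beating the other guesses.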
Thrifty Algorithms for Multistage Robust Optimization
Abstract
We consider a class of multistage robust covering problems, where additional information is revealed about the problem instance in each stage, but the cost of taking actions increases. The dilemma for the decision-maker is whether to wait for additional information and risk the inflation, or to take early actions to hedge against rising costs. We study the “k-robust” uncertainty model: in each stage i = 0, 1, …, T, the algorithm is shown some subset of size k_i that completely contains the eventual demands to be covered; here k_1 > k_2 > ··· > k_T, which ensures increasing information over time. The goal is to minimize the cost incurred in the worst-case possible sequence of revelations. For the multistage k-robust set cover problem, we give an O(log m + log n)-approximation algorithm, nearly matching the Ω(log n + log m / log log m) hardness of approximation [4] even for T = 2 stages. Moreover, our algorithm has a useful “thrifty” property: it takes actions on just two stages. We show similar thrifty algorithms for multistage k-robust Steiner tree, Steiner forest, and minimum cut. For these problems our approximation guarantees are O(min{T, log n, log λ_max}), where λ_max is the maximum inflation over all the stages. We conjecture that these problems also admit O(1)-approximate thrifty algorithms.
Active Learning and Submodular Functions
, 2012
Abstract
Active learning is a machine learning setting where the learning algorithm decides what data is labeled. Submodular functions are a class of set functions for which many optimization problems have efficient exact or approximate algorithms. We examine their connections.
• We propose a new class of interactive submodular optimization problems which connect and generalize submodular optimization and active learning over a finite query set. We derive greedy algorithms with approximately optimal worst-case cost. These analyses apply to exact learning, approximate learning, learning in the presence of adversarial noise, and applications that mix learning and covering.
• We consider active learning in a batch, transductive setting where the learning algorithm selects a set of examples to be labeled at once. In this setting we derive new error bounds which use symmetric submodular functions for regularization, and we give algorithms which approximately minimize these bounds.
• We consider a repeated active learning setting where the learning algorithm solves a sequence of related learning problems. We propose an approach to this problem based on a new online prediction version of submodular set cover. A common
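The greedy algorithms mentioned in the first bullet build on the classical greedy rule for submodular objectives: repeatedly add the element with the largest marginal gain. A generic sketch of that rule for monotone submodular maximization under a cardinality constraint (the coverage instance below is illustrative, not from the thesis):

```python
def greedy_submodular_max(f, ground_set, k):
    """Classical greedy for monotone submodular maximization subject to
    |S| <= k: add the element with the largest marginal gain each round.
    Gives a (1 - 1/e)-approximation (Nemhauser, Wolsey, and Fisher)."""
    chosen = set()
    for _ in range(k):
        candidates = [e for e in ground_set if e not in chosen]
        if not candidates:
            break
        best = max(candidates, key=lambda e: f(chosen | {e}) - f(chosen))
        if f(chosen | {best}) - f(chosen) <= 0:
            break  # no remaining element improves the objective
        chosen.add(best)
    return chosen

# Illustrative coverage instance: pick k=2 sets covering the most elements.
cover_sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}
coverage = lambda s: len(set().union(*(cover_sets[e] for e in s))) if s else 0
picked = greedy_submodular_max(coverage, cover_sets.keys(), k=2)
```

On this instance greedy first takes "a" (gain 3), then "c" (gain 2), covering all five elements.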
Improved approximations for robust min-cut and shortest path
Abstract
In two-stage robust optimization the solution to a problem is built in two stages: in the first stage a partial, not necessarily feasible, solution is exhibited. Then the adversary chooses the “worst” scenario from a predefined set of scenarios. In the second stage, the first-stage solution is extended to become feasible for the chosen scenario. The costs at the second stage are larger than at the first one, and the objective is to minimize the total cost paid in the two stages. We give a 2-approximation algorithm for the robust min-cut problem and a (γ + 2)-approximation for the robust shortest path problem, where γ is the approximation ratio for Steiner tree. This improves the factors 1 + √2 and 2(γ + 2) from [Golovin, Goyal and Ravi. Pay today for a rainy day: Improved approximation algorithms for demand-robust min-cut and shortest path problems. STACS 2006]. In addition, our solution for robust shortest path is simpler and more efficient than the earlier ones; this is achieved by a more direct algorithm and analysis, not using some of the standard demand-robust optimization techniques.