Results 1 – 7 of 7
On the approximability of budgeted allocations and improved lower bounds for submodular welfare maximization and GAP
 In Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS)
, 2008
Abstract

Cited by 36 (3 self)
In this paper we consider the following maximum budgeted allocation (MBA) problem: given a set of m indivisible items and n agents, where each agent i is willing to pay b_ij for item j and has a maximum budget of B_i, the goal is to allocate items to agents so as to maximize revenue. The problem arises naturally as auctioneer revenue maximization in budget-constrained auctions, and as the winner determination problem in combinatorial auctions when agents' utilities are budgeted-additive. Our main results are:
• We give a 3/4-approximation algorithm for MBA, improving upon the previous best of ≃ 0.632 [AM04, FV06]. Our techniques are based on a natural LP relaxation of MBA, and our factor is optimal in the sense that it matches the integrality gap of the LP.
• We prove it is NP-hard to approximate MBA to any factor better than 15/16; previously, only NP-hardness was known [SS06, LLN01]. Our result also implies NP-hardness of approximating maximum submodular welfare with a demand oracle to a factor better than 15/16, improving upon the best known hardness of 275/276 [FV06].
• Our hardness techniques can be modified to prove that it is NP-hard to approximate the Generalized Assignment Problem (GAP) to any factor better than 10/11. This improves upon the 422/423 hardness of [CK00, CC02].
We use iterative rounding on a natural LP relaxation of MBA to obtain the 3/4-approximation. We also give a (3/4 − ε)-factor algorithm based on the primal-dual schema which runs in Õ(nm) time, for any constant ε > 0.
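The natural LP relaxation referred to in the abstract can be sketched in a few lines. This is a minimal illustration assuming truncated bids b_ij ≤ B_i; the variable layout and function name are ours, not the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def mba_lp_relaxation(b, budgets):
    """LP relaxation of Maximum Budgeted Allocation.

    b[i][j] = bid of agent i on item j (assumed truncated: b[i][j] <= budgets[i]);
    maximize sum_ij b_ij x_ij subject to each agent's budget cap and
    each item being fractionally assigned at most once.
    """
    n, m = b.shape                       # n agents, m items
    c = -b.flatten()                     # linprog minimizes, so negate

    # Budget rows: sum_j b_ij x_ij <= B_i
    A_budget = np.zeros((n, n * m))
    for i in range(n):
        A_budget[i, i * m:(i + 1) * m] = b[i]
    # Item rows: sum_i x_ij <= 1
    A_item = np.zeros((m, n * m))
    for j in range(m):
        A_item[j, j::m] = 1.0

    A_ub = np.vstack([A_budget, A_item])
    b_ub = np.concatenate([budgets, np.ones(m)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1), method="highs")
    return -res.fun, res.x.reshape(n, m)
```

The paper's algorithm then applies iterative rounding to an optimal vertex of this relaxation; the 3/4 factor matches the relaxation's integrality gap.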
Playing games with approximation algorithms
 In Proceedings of the 39th Annual ACM Symposium on Theory of Computing
, 2007
Abstract

Cited by 27 (2 self)
Abstract. In an online linear optimization problem, on each period t an online algorithm chooses s_t ∈ S from a fixed (possibly infinite) set S of feasible decisions. Nature (who may be adversarial) chooses a weight vector w_t ∈ R^n, and the algorithm incurs cost c(s_t, w_t), where c is a fixed cost function that is linear in the weight vector. In the full-information setting, the vector w_t is then revealed to the algorithm; in the bandit setting, only the cost experienced, c(s_t, w_t), is revealed. The goal of the online algorithm is to perform nearly as well as the best fixed s ∈ S in hindsight. Many repeated decision-making problems with weights fit naturally into this framework, such as online shortest-path, online TSP, online clustering, and online weighted set cover. Previously, it was shown how to convert any efficient exact offline optimization algorithm for such a problem into an efficient online algorithm in both the full-information and the bandit settings, with average cost nearly as good as that of the best fixed s ∈ S in hindsight. However, in the case where the offline algorithm is an approximation algorithm with ratio α > 1, the previous approach only worked for special types of approximation algorithms. We show how to convert any offline approximation algorithm for a linear optimization problem into a corresponding online approximation algorithm, with a polynomial blowup in runtime. If the offline algorithm has an α-approximation guarantee, then the expected cost of the online algorithm on any sequence is not much larger than α times that of the best s ∈ S, where the best is chosen with the benefit of hindsight. Our main innovation is combining Zinkevich's algorithm for convex optimization with a geometric transformation that can be applied to any approximation algorithm. Standard techniques generalize the above result to the bandit setting, except that a “Barycentric Spanner” for the problem is also (provably) necessary as input.
Our algorithm can also be viewed as a method for playing large repeated games in which one can only compute approximate best-responses, rather than exact best-responses.
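Zinkevich's projected-gradient approach mentioned in the abstract is easy to state for the linear case. The following is a minimal sketch over a box-shaped decision set S = [lo, hi]^n; the step size and starting point are illustrative choices, not the paper's construction:

```python
import numpy as np

def online_gradient_descent(loss_vectors, lo, hi, eta=0.1):
    """Zinkevich-style projected online gradient descent for online
    linear optimization over the box [lo, hi]^n.

    Each round t: play s_t, observe weight vector w_t, incur cost <s_t, w_t>,
    then step against the gradient (which is just w_t for a linear cost)
    and project back onto the feasible box.
    """
    n = len(loss_vectors[0])
    s = np.full(n, (lo + hi) / 2.0)      # start at the box centre
    total_cost = 0.0
    for w in loss_vectors:
        total_cost += float(np.dot(s, w))
        s = np.clip(s - eta * np.asarray(w), lo, hi)  # gradient step + projection
    return total_cost
```

Against a fixed weight vector the iterate drifts to the best corner of the box, so the average cost approaches that of the best fixed decision in hindsight; the paper's contribution is making this work when only an α-approximate offline oracle (rather than exact projection/optimization over S) is available.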
Dynamic resource provisioning in cloud computing: A randomized auction approach
 In Proc. of IEEE INFOCOM
, 2014
Abstract

Cited by 9 (4 self)
Abstract—This work studies resource allocation in a cloud market through the auction of Virtual Machine (VM) instances. It generalizes the existing literature by introducing combinatorial auctions of heterogeneous VMs, and models dynamic VM provisioning. Social welfare maximization under dynamic resource provisioning is proven NP-hard, and modeled with a linear integer program. An efficient α-approximation algorithm is designed, with α ∼ 2.72 in typical scenarios. We then employ this algorithm as a building block for designing a randomized combinatorial auction that is computationally efficient, truthful in expectation, and guarantees the same social welfare approximation factor α. A key technique in the design is to utilize a pair of tailored primal and dual LPs to exploit the underlying packing structure of the social welfare maximization problem, decomposing its fractional solution into a convex combination of integral solutions. Empirical studies driven by Google Cluster traces verify the efficacy of the randomized auction.
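The decomposition step can be illustrated on a toy instance: given a fractional solution and a pool of integral solutions, the convex weights are themselves found by an LP. This sketch omits the paper's tailored primal-dual machinery and assumes the pool of integral solutions is given explicitly:

```python
import numpy as np
from scipy.optimize import linprog

def convex_decomposition(x_frac, integral_solutions, alpha):
    """Toy version of the decomposition step: express the scaled fractional
    point x*/alpha as a convex combination of given integral solutions.
    Sampling solution k with probability lambda_k then gives a randomized
    allocation whose expectation matches x*/alpha, the basis for
    truthfulness in expectation at approximation factor alpha.

    Solves the feasibility LP: lambda >= 0, sum lambda = 1,
    Z @ lambda = x*/alpha (zero objective).
    """
    Z = np.array(integral_solutions, dtype=float).T   # columns = integral points
    target = np.asarray(x_frac, dtype=float) / alpha
    K = Z.shape[1]
    A_eq = np.vstack([Z, np.ones((1, K))])
    b_eq = np.concatenate([target, [1.0]])
    res = linprog(np.zeros(K), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, 1), method="highs")
    return res.x if res.success else None
```

In the paper the pool of integral solutions is not given up front; it is generated on the fly via the approximation algorithm and the dual LP, which is what keeps the whole auction polynomial-time.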
Approximability of sparse integer programs
 In Proc. 17th ESA
, 2009
Abstract

Cited by 9 (1 self)
The main focus of this paper is a pair of new approximation algorithms for sparse integer programs. First, for covering integer programs {min c·x : Ax ≥ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per row, we give a k-approximation algorithm. (We assume A, b, c, d are nonnegative.) For any k ≥ 2 and ε > 0, unless P = NP this ratio cannot be improved to k − 1 − ε, and under the unique games conjecture this ratio cannot be improved to k − ε. One key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. Second, for packing integer programs {max c·x : Ax ≤ b, 0 ≤ x ≤ d} where A has at most k nonzeroes per column, we give a 2^k·k^2-approximation algorithm. This is the first polynomial-time approximation algorithm for this problem with approximation ratio depending only on k, for any k > 1. Our approach starts from iterated LP relaxation, and then uses probabilistic and greedy methods to recover a feasible solution. Note added after publication: this version includes subsequent developments: an O(k^2)-approximation for the latter problem using the iterated rounding framework, and several literature reference updates, including an O(k)-approximation for the same problem by Bansal et al.
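For the special 0/1 case with unit right-hand sides (frequency-k set cover), the k-approximation reduces to classic LP rounding; the sketch below shows that case only, while the paper's general algorithm additionally needs the knapsack-cover inequalities mentioned above:

```python
import numpy as np
from scipy.optimize import linprog

def covering_k_approx(A, c):
    """k-approximate LP rounding for the 0/1 covering program
    min c.x  s.t.  A x >= 1,  x in {0,1},
    where every row of the 0/1 matrix A has at most k nonzeroes.

    Each LP-feasible row with <= k variables summing to >= 1 has some
    coordinate >= 1/k, so rounding up all coordinates >= 1/k yields a
    feasible integral solution of cost at most k times the LP optimum.
    """
    m, n = A.shape
    k = int(A.sum(axis=1).max())         # max nonzeroes in any row (A is 0/1)
    # linprog minimizes c.x subject to A_ub x <= b_ub, so negate A x >= 1.
    res = linprog(c, A_ub=-A, b_ub=-np.ones(m), bounds=(0, 1), method="highs")
    x = (res.x >= 1.0 / k - 1e-9).astype(int)   # round up the large coordinates
    assert np.all(A @ x >= 1), "rounded solution must cover every row"
    return x
```

For k = 2 this is exactly the textbook 2-approximation for vertex cover via LP rounding.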
A Truthful Incentive Mechanism for Emergency Demand Response in Colocation Data Centers
Abstract
Abstract—Data centers are key participants in demand response programs, including emergency demand response (EDR), in which the grid coordinates large electricity consumers for demand reduction in emergency situations to prevent major economic losses. While the existing literature concentrates on owner-operated data centers, this work studies EDR in multi-tenant colocation data centers, where servers are owned and managed by individual tenants. EDR in colocation data centers is significantly more challenging, due to the lack of incentives to reduce energy consumption by tenants, who control their servers and are typically on fixed power contracts with the colocation operator. Consequently, to achieve the demand reduction goals set by the EDR program, the operator has to rely on highly expensive and/or environmentally unfriendly on-site energy backup/generation. To reduce cost and environmental impact, an efficient incentive mechanism is therefore needed to motivate tenants’ voluntary energy reduction in case of EDR. This work proposes a novel incentive mechanism, TruthDR, which leverages a reverse auction to provide monetary remuneration to tenants according to their agreed energy reduction. TruthDR is computationally efficient, truthful, and achieves a 2-approximation in colocation-wide social cost. Trace-driven simulations verify the efficacy of the proposed auction mechanism.
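To make the setting concrete, here is a toy winner-determination rule for such a reverse auction. This is a plain greedy sketch, explicitly not the paper's TruthDR mechanism, which couples winner selection with carefully designed payments to guarantee truthfulness:

```python
def greedy_winner_determination(bids, target):
    """Toy winner determination for a reverse auction: each tenant bids
    (energy_reduction_kwh, asking_price); greedily accept the cheapest
    reduction per kWh until the EDR reduction target is met.

    Returns (winner indices, total reduction covered, total payment asked).
    """
    order = sorted(range(len(bids)), key=lambda i: bids[i][1] / bids[i][0])
    winners, covered, cost = [], 0.0, 0.0
    for i in order:
        if covered >= target:
            break                       # target already met, stop buying
        winners.append(i)
        covered += bids[i][0]
        cost += bids[i][1]
    return winners, covered, cost
```

Paying winners their asking price, as above, is generally not truthful; mechanisms like TruthDR instead derive payments from the selection rule so that misreporting cannot help a tenant.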
for Robust and Max-Min Optimization
Abstract
The general problem of robust optimization is this: one of several possible scenarios will appear tomorrow, but things are more expensive tomorrow than they are today. What should you anticipatorily buy today, so that the worst-case cost (summed over both days) is minimized? For example, in a set cover instance, if any one of the (n choose k) subsets of the universe that have size k may appear tomorrow, what is a good course of action? Feige et al. [FJMM07], and later, Khandekar et al. [KKMS08], considered this k-robust model, where the possible outcomes tomorrow are given by all demand-subsets of size k, and gave algorithms for the set cover problem, and the Steiner tree and facility location problems in this model, respectively. In this paper, we give the following simple and intuitive template for k-robust problems: having built some anticipatory solution, if there exists a single demand whose augmentation cost is larger than some threshold (which is ≈ Opt/k), augment the anticipatory solution to cover this demand as well, and repeat. In this paper we show that this template gives us approximation algorithms for k-robust versions of Steiner tree and set
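The template in the abstract is concrete enough to write down generically. In this sketch the oracle names and the Opt guess are placeholders: `augmentation_cost` and `augment` are the problem-specific subroutines (e.g. for Steiner tree or set cover), and the full algorithm finds the unknown Opt by binary search:

```python
def k_robust_template(demands, augmentation_cost, augment, opt_guess, k):
    """The thresholded template in outline: repeatedly find a single demand
    whose augmentation cost w.r.t. the current anticipatory solution
    exceeds ~ Opt/k, and fold that demand into the solution.

    Terminates when every single demand can be augmented cheaply, so the
    worst-case second-day cost of any k demands is bounded by ~ k * Opt/k.
    """
    solution = set()                         # anticipatory first-day solution
    threshold = opt_guess / k
    while True:
        expensive = [d for d in demands
                     if augmentation_cost(solution, d) > threshold]
        if not expensive:
            return solution                  # every single demand is now cheap
        solution = augment(solution, expensive[0])
```

A trivial instantiation: with unit augmentation costs and `augment` simply adding the demand, the loop keeps absorbing demands until all of them cost no more than the threshold to cover.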