Results 1–10 of 14
Online submodular welfare maximization: Greedy is optimal
"... We prove that no online algorithm (even randomized, against an oblivious adversary) is better than 1/2competitive for welfare maximization with coverage valuations, unless NP = RP. Since the Greedy algorithm is known to be 1/2competitive for monotone submodular valuations, of which coverage is a sp ..."
Abstract

Cited by 3 (0 self)
We prove that no online algorithm (even randomized, against an oblivious adversary) is better than 1/2-competitive for welfare maximization with coverage valuations, unless NP = RP. Since the Greedy algorithm is known to be 1/2-competitive for monotone submodular valuations, of which coverage is a special case, this proves that Greedy provides the optimal competitive ratio. On the other hand, we prove that Greedy in a stochastic setting with i.i.d. items and valuations satisfying diminishing returns is (1 − 1/e)-competitive, which is optimal even for coverage valuations, unless NP = RP. For online budget-additive allocation, we prove that no algorithm can be 0.612-competitive with respect to a natural LP which has been used previously for this problem.
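The Greedy rule this abstract analyzes is simple enough to sketch. Below is a minimal, hypothetical Python illustration (the function names and the toy coverage valuations are my own, not from the paper): each arriving item is given to the bidder whose monotone submodular valuation gains the most from it.

```python
def greedy_online_welfare(items, valuations):
    """Assign each arriving item to the bidder whose valuation gains
    the most from it; return the bundles and the total welfare."""
    bundles = [frozenset() for _ in valuations]
    for item in items:
        gains = [v(b | {item}) - v(b) for v, b in zip(valuations, bundles)]
        winner = max(range(len(valuations)), key=lambda i: gains[i])
        bundles[winner] |= {item}
    return bundles, sum(v(b) for v, b in zip(valuations, bundles))

def coverage(cover_map):
    """Coverage valuation: a bundle is worth the number of ground
    elements covered by the union of its items (monotone submodular)."""
    return (lambda bundle:
            len(set().union(*(cover_map[i] for i in bundle))) if bundle else 0)
```

For example, with two bidders sharing the coverage system {1: {a,b}, 2: {b,c}, 3: {c}}, Greedy gives item 1 to bidder 0, item 2 to bidder 1 (larger marginal gain there), and item 3 back to bidder 0.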
Barriers to near-optimal equilibria
 In Proceedings of the 55th Annual IEEE Symposium on Foundations of Computer Science (FOCS)
"... Abstract—This paper explains when and how communication and computational lower bounds for algorithms for an optimization problem translate to lower bounds on the worstcase quality of equilibria in games derived from the problem. We give three families of lower bounds on the quality of equilibria, ..."
Abstract

Cited by 3 (3 self)
Abstract—This paper explains when and how communication and computational lower bounds for algorithms for an optimization problem translate to lower bounds on the worst-case quality of equilibria in games derived from the problem. We give three families of lower bounds on the quality of equilibria, each motivated by a different set of problems: congestion, scheduling, and distributed welfare games; welfare maximization in combinatorial auctions with “black-box” bidder valuations; and welfare maximization in combinatorial auctions with succinctly described valuations. The most straightforward use of our lower bound framework is to harness an existing computational or communication lower bound to derive a lower bound on the worst-case price of anarchy (POA) in a class of games. This is a new approach to POA lower bounds, which relies on reductions in lieu of explicit constructions. More generally, the POA lower bounds implied by our framework apply to all classes of games that share the same underlying optimization problem, independent of the details of players’ utility functions. For this reason, our lower bounds are particularly significant for problems of game design — ranging from the design of simple combinatorial auctions to the existence of effective tolls for routing networks — where the goal is to design a game that has only near-optimal equilibria. For example, our results imply that the simultaneous first-price auction format is optimal among all “simple combinatorial auctions” in several settings. Index Terms—price of anarchy; mechanism design; complexity of equilibria
How to Sell Hyperedges: The Hypermatching Assignment Problem
, 2013
"... We are given a set of clients with budget constraints and a set of indivisible items. Each client is willing to buy one or more bundles of (at most) k items each (bundles can be seen as hyperedges in a khypergraph). If client i gets a bundle e, she pays bi,e and yields a net profit wi,e. The Hyperm ..."
Abstract

Cited by 3 (2 self)
We are given a set of clients with budget constraints and a set of indivisible items. Each client is willing to buy one or more bundles of (at most) k items each (bundles can be seen as hyperedges in a k-hypergraph). If client i gets a bundle e, she pays b_{i,e} and yields a net profit w_{i,e}. The Hypermatching Assignment Problem (HAP) is to assign a set of pairwise disjoint bundles to clients so as to maximize the total profit while respecting the budgets. This problem has various applications in production planning and budget-constrained auctions and generalizes well-studied problems in combinatorial optimization: for example, the weighted (unweighted) k-hypergraph matching problem is the special case of HAP with one client having unbounded budget and general (unit) profits; the Generalized Assignment Problem (GAP) is the special case of HAP with k = 1. Let ε > 0 denote an arbitrarily small constant. In this paper we obtain the following main results:
• We give a randomized (k + 1 + ε)-approximation algorithm for HAP, which is based on rounding the 1-round Lasserre strengthening of a novel LP. This is one of a few approximation results based on Lasserre hierarchies, and our approach might be of independent interest. We remark that for weighted k-hypergraph matching no LP nor SDP relaxation is known to have integrality gap better than k − 1 + 1/k for general k [Chan and Lau, SODA’10].
• For the relevant special case that one wants to maximize the total revenue (i.e., b_{i,e} = w_{i,e}), we present a local-search-based (k + O(√k))/2 approximation algorithm for k = O(1). This almost matches the best known (k + 1 + ε)/2 approximation ratio by Berman [SWAT’00] for
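The HAP constraints (disjoint bundles, per-client budgets) can be made concrete with a brute-force solver for tiny instances. This is only an illustrative sketch of the problem definition, not the paper's Lasserre-based algorithm; the data layout is my own assumption.

```python
from itertools import combinations

def hap_brute_force(bundles, budgets):
    """Brute-force HAP on a tiny instance. `bundles` is a list of
    (client, items, price, profit) offers; pick a subset with pairwise
    disjoint item sets such that each client's total price stays within
    budget, maximizing total profit."""
    best = 0
    n = len(bundles)
    for r in range(n + 1):
        for chosen in combinations(range(n), r):
            used, spend, feasible = set(), {}, True
            for idx in chosen:
                client, items, price, _ = bundles[idx]
                if used & items:          # bundles must be pairwise disjoint
                    feasible = False
                    break
                used |= items
                spend[client] = spend.get(client, 0) + price
                if spend[client] > budgets[client]:  # budget constraint
                    feasible = False
                    break
            if feasible:
                best = max(best, sum(bundles[i][3] for i in chosen))
    return best
```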
BIASED RANDOM-KEY GENETIC ALGORITHMS FOR THE WINNER DETERMINATION PROBLEM IN COMBINATORIAL AUCTIONS
"... Abstract. In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the firstprice model. This winner determination problem assumes that a single bidding round is held to determine both the winners and prices to ..."
Abstract

Cited by 1 (1 self)
Abstract. In this paper, we address the problem of picking a subset of bids in a general combinatorial auction so as to maximize the overall profit using the first-price model. This winner determination problem assumes that a single bidding round is held to determine both the winners and the prices to be paid. We introduce six variants of biased random-key genetic algorithms for this problem. Three of them use a novel initialization technique that makes use of solutions of intermediate linear programming relaxations of an exact mixed integer linear programming model as initial chromosomes of the population. An experimental evaluation compares the effectiveness of the proposed algorithms with the standard mixed integer linear programming formulation, a specialized exact algorithm, and the best-performing heuristics proposed for this problem. The proposed algorithms are competitive and offer strong results, mainly for large-scale auctions.
Mechanisms and Allocations with Positive Network Externalities
, 2012
"... With the advent of social networks such as Facebook and LinkedIn, and online offers/deals web sites, network externalties raise the possibility of marketing and advertising to users based on influence they derive from their neighbors in such networks. Indeed, a user’s knowledge of which of his neigh ..."
Abstract

Cited by 1 (0 self)
With the advent of social networks such as Facebook and LinkedIn, and online offers/deals web sites, network externalities raise the possibility of marketing and advertising to users based on the influence they derive from their neighbors in such networks. Indeed, a user’s knowledge of which of his neighbors “liked” the product changes his valuation for the product. Much of the work on mechanism design under network externalities has addressed the setting where there is only one product. We consider a more natural setting where there are multiple competing products, and each node in the network is a unit-demand agent. We first consider the problem of welfare maximization under various different types of externality functions. Specifically, we get an O(log n log(nm))-approximation for concave externality functions, a 2^O(d)-approximation for convex externality functions that are bounded above by a polynomial of degree d, and an O(log^3 n)-approximation when the externality function is submodular. Our techniques involve formulating nontrivial linear relaxations in each case, and developing novel rounding schemes that yield bounds vastly superior to those obtainable by directly applying results from combinatorial welfare maximization. We then consider the problem of Nash equilibrium, where each node in the network is a player whose strategy space corresponds to selecting an item. We develop a tight characterization of the conditions under which a Nash equilibrium exists in this game. Lastly, we consider the question of pricing and revenue optimization
Science and DIMAP,
"... Consider the following problem of serving impatient users: we are given a set of customers we would like to serve. We can serve at most one customer in each time step (getting value vi for serving customer i). At the end of each time step, each asyetunserved customer i leaves the system independen ..."
Abstract
Consider the following problem of serving impatient users: we are given a set of customers we would like to serve. We can serve at most one customer in each time step (getting value v_i for serving customer i). At the end of each time step, each as-yet-unserved customer i leaves the system independently with probability q_i, never to return. What strategy should we use to serve customers to maximize the expected value collected? The standard model of competitive analysis can be applied to this problem: picking the customer with maximum value gives us half the value obtained by the optimal algorithm, and using a vertex-weighted online matching algorithm gives us a 1 − 1/e ≈ 0.632 fraction of the optimum. As is usual in competitive analysis, these approximations compare to the best value achievable by a clairvoyant adversary that knows all the coin tosses of the customers. Can we do better? We show an upper bound of ≈ 0.648 if we compare our performance to such a clairvoyant algorithm, suggesting we cannot improve our performance substantially. However, these are pessimistic comparisons to a much stronger adversary: what if we compare ourselves to the optimal strategy for this problem, which does not have an unfair advantage? In this case, we can do much better: in particular, we give an algorithm whose expected value is at least 0.7 of that achievable by the optimal algorithm. This improvement is achieved via a novel rounding algorithm, and a non-local analysis.
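The pick-the-maximum-value baseline from this abstract can be simulated directly. A minimal sketch (the function name and interface are assumptions for illustration): serve the highest-value surviving customer each step, then flip each remaining customer's leave coin.

```python
import random

def serve_greedy(values, leave_prob, rng=random):
    """One simulated run of the pick-the-maximum-value strategy:
    each step, serve the highest-value surviving customer; every
    other survivor then leaves independently with probability q_i."""
    alive = set(range(len(values)))
    collected = 0.0
    while alive:
        i = max(alive, key=lambda j: values[j])
        alive.remove(i)
        collected += values[i]
        alive = {j for j in alive if rng.random() >= leave_prob[j]}
    return collected
```

Averaging `serve_greedy` over many runs estimates the strategy's expected value, which the abstract compares against the clairvoyant optimum.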
CS369E: Communication Complexity (for Algorithm Designers) Lecture #7: Lower Bounds in Algorithmic Game Theory∗
, 2015
"... This lecture explains some applications of communication complexity to proving lower bounds in algorithmic game theory (AGT), at the border of computer science and economics. In AGT, the natural description size of an object is often exponential in a parameter of interest, and the goal is to perfor ..."
Abstract
This lecture explains some applications of communication complexity to proving lower bounds in algorithmic game theory (AGT), at the border of computer science and economics. In AGT, the natural description size of an object is often exponential in a parameter of interest, and the goal is to perform nontrivial computations in time polynomial in the parameter (i.e., logarithmic in the description size). As we know, communication complexity is a great tool for understanding when nontrivial computations require looking at most of the input.

2. The Welfare Maximization Problem
The focus of this lecture is the following optimization problem, which has been studied in AGT more than any other.
1. There are k players.
2. There is a set M of m items.
3. Each player i has a valuation v_i : 2^M → R+. The number v_i(T) indicates i’s value, or willingness to pay, for the items T ⊆ M. The valuation is the private input of player i — i knows v_i but none of the other v_j’s.
We assume that v_i(∅) = 0 and that the valuations are monotone, meaning v_i(S) ≤ v_i(T) whenever S ⊆ T. To avoid bit complexity issues, we’ll also assume that all of the v_i(T)’s are integers with description length polynomial in k and m.
© 2015, Tim Roughgarden.
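The welfare maximization problem defined above can be stated as code for tiny instances. This is a hypothetical brute-force sketch of the problem itself, not an efficient algorithm: it tries all k^m allocations, which is exactly the exponential blow-up the lecture's communication lower bounds address.

```python
from itertools import product

def max_welfare(k, items, valuations):
    """Brute force over all k^m allocations of the m items to the k
    players; return the maximum achievable total value. Each
    valuations[i] maps a frozenset of items to a number."""
    best = 0
    for assign in product(range(k), repeat=len(items)):
        bundles = [frozenset(it for it, p in zip(items, assign) if p == i)
                   for i in range(k)]
        best = max(best, sum(v(b) for v, b in zip(valuations, bundles)))
    return best
```

For instance, with an additive bidder and a bidder who only values item "a", the optimum splits the items between them.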
Acknowledgements:
, 2002
"... The Working Paper Series is intended to report preliminary results of researchinprogress. Comments are welcome. This paper looks at a form of caring labor that has been neglected by students both of care work and of emotional labor in the workplace: luxury service. Drawing on 12 months of ethnogra ..."
Abstract
The Working Paper Series is intended to report preliminary results of research in progress. Comments are welcome. This paper looks at a form of caring labor that has been neglected by students both of care work and of emotional labor in the workplace: luxury service. Drawing on 12 months of ethnography in two luxury hotels and 50 interviews with participants, I demonstrate that many of the elements that differentiate luxury service from non-luxury service are indicators of care. These include personalization; anticipation, legitimation, and resolution of needs; sincerity and authenticity; and available physical labor, both visibly and invisibly displayed. In contrast to some kinds of marketized care work, such as elder care, in which commodification and bureaucratization have led to the elimination of these intangible dimensions of care, in luxury service these “extra” elements are the key to profit and are therefore emphasized by management. My evidence further indicates that the “needs” that are met in the luxury hotel are also often acquired there, as guests describe a process of learning what they are supposed to want and to do in the hotel. I argue that this process of consumption of care in the luxury environment produces and reinforces a particular sense of self as especially entitled to consume care, which in turn creates class dispositions significant for guests’ consumption and interpersonal relations beyond the
Algorithmic Game Theory handout 11 and 12
, 2013
"... We discussed the maximum welfare problem with submodular bidders and fractionally subadditive bidders. We presented greedy algorithms and algorithms based on the configuration LP. Some related references are listed below. No need to hand in the following assignment. Homework. 1. Given a universe M o ..."
Abstract
We discussed the maximum welfare problem with submodular bidders and fractionally subadditive bidders. We presented greedy algorithms and algorithms based on the configuration LP. Some related references are listed below. No need to hand in the following assignment.

Homework.
1. Given a universe M of m items, the indicator vector x of a set S is a vector x ∈ {0,1}^m in which x(i) = 1 iff i ∈ S. A set function f : {0,1}^m → R assigns a value to each set. The Lovász extension f^L of f extends its domain to the convex set [0,1]^m (allowing fractional values for the coordinates). Its value is defined as the following expectation: f^L(x) = E_λ[f(x_λ)], where λ is chosen uniformly in the range [0,1], and x_λ(i) = 1 if x(i) ≥ λ and x_λ(i) = 0 otherwise. Observe that f^L is equal to f on integer points. Show that if f is submodular then the corresponding f^L is a convex function; namely, the region f^L(x) ≤ t is convex.
2. In the max-coverage problem one is given a universe U of n items, a nonnegative value v_i for each item i ∈ U, a collection S of subsets of U, and a parameter k. The goal is to select k subsets from S so that the sum of values of items covered by the union of the selected subsets is maximized. Write an integer program expressing this problem. Relax the integer program to a linear program. Present a randomized rounding procedure for the linear program and prove that its expected approximation ratio is at least 1 − 1/e.
Remarks. There is also a greedy algorithm that approximates max coverage with a ratio of 1 − 1/e. Achieving an approximation ratio better than 1 − 1/e for this problem is NP-hard.
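The expectation defining the Lovász extension in homework problem 1 can be evaluated exactly rather than sampled: as λ sweeps [0, 1], the rounded set x_λ only changes when λ crosses a coordinate value of x, so the expectation is a finite weighted sum. A minimal sketch (the helper name is my own):

```python
def lovasz_extension(x, f):
    """Exact value of f^L(x) = E_lambda[f(x_lambda)] with lambda uniform
    on [0, 1], where x_lambda(i) = 1 iff x(i) >= lambda. The level set
    {i : x(i) >= lambda} is constant between consecutive coordinate
    values of x, so we sum f over those intervals, weighted by length."""
    breaks = sorted(set(x) | {0.0, 1.0})
    total = 0.0
    for a, b in zip(breaks, breaks[1:]):
        # for lambda in (a, b], x(i) >= lambda exactly when x(i) >= b
        level = frozenset(i for i, xi in enumerate(x) if xi >= b)
        total += (b - a) * f(level)
    return total
```

On a modular f the extension is linear, and on integer points it agrees with f, as the homework observes.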
Demand Queries with Preprocessing ∗
, 2014
"... Given a set of items and a submodular setfunction f that determines the value of every subset of items, a demand query assigns prices to the items, and the desired answer is a set S of items that maximizes the pro t, namely, the value of S minus its price. The use of demand queries is well motivate ..."
Abstract
Given a set of items and a submodular set function f that determines the value of every subset of items, a demand query assigns prices to the items, and the desired answer is a set S of items that maximizes the profit, namely, the value of S minus its price. The use of demand queries is well motivated in the context of combinatorial auctions. However, answering a demand query (even approximately) is NP-hard. We consider the question of whether exponential-time preprocessing of f prior to receiving the demand query can help in later answering demand queries in polynomial time. We design a preprocessing algorithm that leads to approximation ratios that are NP-hard to achieve without preprocessing. We also prove that there are limitations to the approximation ratios achievable after preprocessing, unless NP ⊂ P/poly.
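The demand-query objective described above, argmax over S of f(S) minus the price of S, can be written out directly. A hypothetical brute-force sketch for tiny instances (names are my own); its exponential running time matches the abstract's point that answering demand queries is NP-hard in general.

```python
from itertools import combinations

def answer_demand_query(items, f, prices):
    """Exhaustively find the bundle S maximizing f(S) minus the sum of
    prices of the items in S. Starts from the empty bundle, whose value
    and price are both 0."""
    best_set, best_profit = frozenset(), 0.0
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            S = frozenset(combo)
            profit = f(S) - sum(prices[i] for i in S)
            if profit > best_profit:
                best_set, best_profit = S, profit
    return best_set, best_profit
```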