Truthful and Near-Optimal Mechanism Design via Linear Programming
Abstract

Cited by 141 (12 self)
We give a general technique to obtain approximation mechanisms that are truthful in expectation. We show that for packing domains, any α-approximation algorithm that also bounds the integrality gap of the LP relaxation of the problem by α can be used to construct an α-approximation mechanism that is truthful in expectation. This immediately yields a variety of new and significantly improved results for various problem domains and, furthermore, yields truthful (in expectation) mechanisms with guarantees that match the best known approximation guarantees when truthfulness is not required. In particular, we obtain the first truthful mechanisms with approximation guarantees for a variety of multi-parameter domains. We obtain truthful (in expectation) mechanisms achieving approximation guarantees of O(√m) for combinatorial auctions (CAs), (1 + ε) for multi-unit CAs with B = Ω(log m) copies of each item, and 2 for multi-parameter knapsack problems (multi-unit auctions). Our construction is based on considering an LP relaxation of the problem and using the classic VCG [25, 9, 12] mechanism to obtain a truthful mechanism in this fractional domain. We argue that the (fractional) optimal solution scaled down by α, where α is the integrality gap of the problem, can be represented as a convex combination of integer solutions, and by viewing this convex combination as specifying a probability distribution over integer solutions, we get a randomized, truthful-in-expectation mechanism. Our construction can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard.
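The decomposition-and-sampling step can be illustrated on a toy packing instance. This is only a sketch: the instance, the convex coefficients, and the helper name are invented for illustration, not taken from the paper.

```python
import random

# Hypothetical toy: a packing problem in which at most one of two items fits.
# Suppose the fractional LP optimum is x* = (0.6, 0.6) and the integrality
# gap is alpha = 1.2, so x*/alpha = (0.5, 0.5) lies in the convex hull of
# the integer solutions (1,0) and (0,1).
integer_solutions = [(1, 0), (0, 1)]
weights = [0.5, 0.5]  # convex coefficients; they sum to 1

def sample_solution():
    """Draw one integer solution; the expectation of the draw equals
    x*/alpha coordinate-wise, which is what preserves truthfulness of the
    fractional VCG payments in expectation."""
    return random.choices(integer_solutions, weights=weights, k=1)[0]

# Empirically, each item is allocated with probability ~0.5.
trials = 10_000
counts = [0, 0]
for _ in range(trials):
    s = sample_solution()
    counts[0] += s[0]
    counts[1] += s[1]
```

Each draw allocates exactly one item, so every bidder's expected value matches its value under the scaled-down LP optimum.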
A scalable application placement controller for enterprise datacenters
 In WWW 2007
Abstract

Cited by 70 (5 self)
Given a set of machines and a set of Web applications with dynamically changing demands, an online application placement controller decides how many instances to run for each application and where to put them, while observing all kinds of resource constraints. This NP-hard problem has real usage in commercial middleware products. Existing approximation algorithms for this problem can scale to at most a few hundred machines, and may produce placement solutions that are far from optimal when system resources are tight. In this paper, we propose a new algorithm that can produce high-quality solutions within 30 seconds for hard placement problems with thousands of machines and thousands of applications. This scalability is crucial for dynamic resource provisioning in large-scale enterprise data centers. Our algorithm allows multiple applications to share a single machine, and strives to maximize the total satisfied application demand, to minimize the number of application starts and stops, and to balance the load across machines. Compared with existing state-of-the-art algorithms, for systems with 100 machines or less, our algorithm is up to 134 times faster, reduces application starts and stops by up to 97%, and produces placement solutions that satisfy up to 25% more application demands. Our algorithm has been implemented and adopted in a leading commercial middleware product for managing the performance of Web applications.
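As a rough illustration of the placement problem being solved, the following is a minimal greedy sketch. It is not the paper's algorithm, which is considerably more sophisticated; all names and the greedy rule are our own assumptions.

```python
def greedy_place(machines, demands, inst_cost):
    """Greedily assign application demand to machines.

    machines:  list of remaining capacities (one load-dependent resource)
    demands:   dict app -> outstanding demand
    inst_cost: per-instance load-independent cost (e.g. a memory footprint),
               folded into the same capacity here for simplicity
    Returns a placement as a list of (app, machine_index, amount) triples.
    """
    placement = []
    cap = list(machines)
    # Serve the largest outstanding demands first.
    for app, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        for m in range(len(cap)):
            if need <= 0:
                break
            usable = max(cap[m] - inst_cost, 0)
            if usable <= 0:
                continue
            amount = min(need, usable)
            cap[m] -= amount + inst_cost  # pay the instance start cost once
            placement.append((app, m, amount))
            need -= amount
    return placement
```

The real controller additionally minimizes placement changes between rounds and balances load, which this sketch ignores.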
Dynamic placement for clustered Web applications
 in Proc. of the 7th International Conference on World Wide Web
, 2006
Abstract

Cited by 64 (4 self)
We introduce and evaluate a middleware clustering technology capable of allocating resources to web applications through dynamic application instance placement. We define application instance placement as the problem of placing application instances on a given set of server machines to adjust the amount of resources available to applications in response to varying resource demands of application clusters. The objective is to maximize the amount of demand that may be satisfied using a configured placement. To limit the disturbance to the system caused by starting and stopping application instances, the placement algorithm attempts to minimize the number of placement changes. It also strives to keep resource utilization balanced across all server machines. Two types of resources are managed, one load-dependent and one load-independent. When putting the chosen placement into effect, our controller schedules placement changes in a manner that limits the disruption to the system.
Maximizing Submodular Set Functions Subject to Multiple Linear Constraints
, 2009
Abstract

Cited by 51 (1 self)
The concept of submodularity plays a vital role in combinatorial optimization. In particular, many important optimization problems can be cast as submodular maximization problems, including maximum coverage, maximum facility location and max cut in directed/undirected graphs. In this paper we present the first known approximation algorithms for the problem of maximizing a nondecreasing submodular set function subject to multiple linear constraints. Given a d-dimensional budget vector L̄, for some d ≥ 1, and an oracle for a nondecreasing submodular set function f over a universe U, where each element e ∈ U is associated with a d-dimensional cost vector, we seek a subset of elements S ⊆ U whose total cost is at most L̄, such that f(S) is maximized. We develop a framework for maximizing submodular functions subject to d linear constraints that yields a (1 − ε)(1 − e⁻¹)-approximation to the optimum for any ε > 0, where d > 1 is some constant. Our study is motivated by a variant of the classical maximum coverage problem that we call maximum coverage with multiple packing constraints. We use our framework to obtain the same approximation ratio for this problem. To the best of our knowledge, this is the first time the theoretical bound of 1 − e⁻¹ is (almost) matched for both of these problems.
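For intuition, the unit-cost, single-constraint special case admits the classic greedy (1 − 1/e)-approximation. Below is a minimal sketch on a maximum-coverage instance; the function names are ours, and the paper's framework for d linear constraints is substantially more involved than this.

```python
def greedy_submodular(universe, f, budget, cost):
    """Greedy maximization of a nondecreasing submodular f under one budget.

    In the unit-cost case this is the classic (1 - 1/e)-approximation;
    for general costs the greedy rule would need a gain/cost ratio.
    """
    S = set()
    while True:
        best, best_gain = None, 0.0
        spent = sum(cost(x) for x in S)
        for e in universe - S:
            if cost(e) > budget - spent:
                continue
            gain = f(S | {e}) - f(S)  # marginal gain of adding e
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            return S
        S.add(best)

# Toy maximum-coverage instance: pick at most 2 sets to cover most elements.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}}
def coverage(S):
    return len(set().union(*(sets[i] for i in S))) if S else 0

chosen = greedy_submodular(set(sets), coverage, budget=2, cost=lambda e: 1)
```

Coverage is submodular, so the greedy solution here covers at least (1 − 1/e) of the optimal four elements.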
Random knapsack in expected polynomial time
 IN PROC. 35TH ANNUAL ACM SYMPOSIUM ON THEORY OF COMPUTING (STOC 2003)
, 2003
Abstract

Cited by 48 (10 self)
We present the first average-case analysis proving a polynomial upper bound on the expected running time of an exact algorithm for the 0/1 knapsack problem. In particular, we prove, for various input distributions, that the number of Pareto-optimal knapsack fillings is polynomially bounded in the number of available items. An algorithm by Nemhauser and Ullmann can enumerate these solutions very efficiently, so that a polynomial upper bound on the number of Pareto-optimal solutions implies an algorithm with expected polynomial running time. The random input model underlying our analysis is quite general and not restricted to a particular input distribution. We assume adversarial weights and randomly drawn profits (or vice versa). Our analysis covers general probability distributions with finite mean and, in its most general form, can even handle different probability distributions for the profits of different items. This feature enables us to study the effects of correlations between profits and weights. Our analysis confirms and explains practical studies showing that so-called strongly correlated instances are harder to solve than weakly correlated ones.
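The Nemhauser-Ullmann enumeration the abstract refers to can be sketched as follows (a simplified rendering; variable names are ours):

```python
def pareto_knapsack(items):
    """Enumerate Pareto-optimal knapsack fillings, Nemhauser-Ullmann style.

    items: list of (weight, profit) pairs.
    Returns the Pareto frontier as a list of (total_weight, total_profit)
    pairs sorted by weight: no returned filling is dominated (at least as
    heavy and no more profitable) by another. The running time is polynomial
    in the frontier size, which the paper shows is polynomial in expectation
    for a wide class of random inputs.
    """
    frontier = [(0, 0)]  # the empty filling
    for w, p in items:
        # Merge the old frontier with a copy shifted by the new item.
        shifted = [(fw + w, fp + p) for fw, fp in frontier]
        # Sort by weight ascending; ties broken by higher profit first.
        merged = sorted(frontier + shifted, key=lambda t: (t[0], -t[1]))
        frontier = []
        best_profit = -1
        for fw, fp in merged:
            if fp > best_profit:  # keep only non-dominated points
                frontier.append((fw, fp))
                best_profit = fp
    return frontier
```

The maximum profit achievable with capacity C is then the largest profit among frontier points of weight at most C.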
Minimum weight triangulation is NP-hard
 IN PROC. 22ND ANNU. ACM SYMPOS. COMPUT. GEOM
, 2006
Abstract

Cited by 43 (0 self)
A triangulation of a planar point set S is a maximal plane straight-line graph with vertex set S. In the minimum weight triangulation (MWT) problem, we are looking for a triangulation of a given point set that minimizes the sum of the edge lengths. We prove that the decision version of this problem is NP-hard. We use a reduction from PLANAR-1-IN-3-SAT. The correct working of the gadgets is established with computer assistance, using geometric inclusion and exclusion criteria for MWT edges, such as the diamond test and the LMT-skeleton heuristic, as well as dynamic programming on polygonal faces.
An efficient approximation for the generalized assignment problem
 Information Processing Letters
, 2006
Abstract

Cited by 33 (6 self)
We present a simple family of algorithms for solving the Generalized Assignment Problem (GAP). Our technique is based on a novel combinatorial translation of any algorithm for the knapsack problem into an approximation algorithm for GAP. If the approximation ratio of the knapsack algorithm is α and its running time is O(f(N)), our algorithm guarantees a (1 + α) approximation ratio, and it runs in O(M · f(N) + M · N) time, where N is the number of items and M is the number of bins. Not only does our technique comprise a general and interesting framework for the GAP problem; it also matches the best combinatorial approximation for this problem, with a much simpler algorithm and a better running time.
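One possible reading of the bin-by-bin translation can be sketched as follows. This is our hedged interpretation, not the paper's exact algorithm: the residual-profit rule and all names here are assumptions.

```python
def gap_via_knapsack(profits, weights, capacities, knapsack):
    """Reduce GAP to repeated knapsack calls, one per bin (a sketch).

    profits[j][i], weights[j][i]: profit/weight of item i when placed in bin j
    capacities[j]:                capacity of bin j
    knapsack(values, sizes, cap): any knapsack routine returning a set of
                                  chosen item indices
    Returns a partial assignment: dict item -> bin.
    """
    assigned = {}  # item -> (bin, profit earned there)
    for j, cap in enumerate(capacities):
        # The value of moving item i into bin j is its profit there minus
        # the profit it already earns in its current bin (0 if unassigned).
        residual = [profits[j][i] - assigned.get(i, (None, 0))[1]
                    for i in range(len(profits[j]))]
        chosen = knapsack(residual, weights[j], cap)
        for i in chosen:
            if residual[i] > 0:  # only move the item if it strictly gains
                assigned[i] = (j, profits[j][i])
    return {i: jb[0] for i, jb in assigned.items()}
```

With an α-approximate knapsack routine plugged in, the paper proves a (1 + α) guarantee for the resulting GAP solution; this sketch omits details such as reclaiming capacity freed in earlier bins.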
Lifetime maximization via cooperative nodes and relay deployment in wireless networks
 IEEE J. Sel. Areas Commun., Special Issue on Cooperative Communications and Networking
, 2007
Abstract

Cited by 27 (4 self)
Abstract — Extending the lifetime of battery-operated devices is a key design issue that allows uninterrupted information exchange among distributed nodes in wireless networks. Cooperative communications has recently emerged as a new communication paradigm that enables and leverages effective resource sharing among cooperative nodes. In this paper, a general framework for lifetime extension of battery-operated devices by exploiting cooperative diversity is proposed. The framework efficiently takes advantage of different locations and energy levels among distributed nodes. First, a lifetime maximization problem via cooperative nodes is considered and performance analysis for M-ary PSK modulation is provided. With an objective to maximize the minimum device lifetime under a constraint on bit-error-rate performance, the optimization problem determines which nodes should cooperate and how much power should be allocated for cooperation. Since the formulated problem is NP-hard, a closed-form solution for a two-node network is derived to obtain some insights. Based on the two-node solution, a fast suboptimal algorithm is developed for multi-node scenarios. Moreover, the device lifetime is further improved by a deployment of cooperative relays in order to help forward information of the distributed nodes in the network. Optimum location and power allocation for each cooperative relay are determined with an aim to maximize the minimum device lifetime. A suboptimal algorithm is developed to solve the problem with multiple cooperative relays and cooperative nodes. Simulation results show that the minimum device lifetime of the network with cooperative nodes is about 2 times longer than the lifetime of the non-cooperative network. In addition, deploying a cooperative relay in a proper location yields up to 12 times longer lifetime than that of the non-cooperative network. Index Terms — Cooperative diversity, wireless networks, decode-and-forward protocol, lifetime maximization.
Complexity of strategic behavior in multi-winner elections
 Journal of Artificial Intelligence Research
, 2008
Abstract

Cited by 26 (2 self)
Although recent years have seen a surge of interest in the computational aspects of social choice, no specific attention has previously been devoted to elections with multiple winners, e.g., elections of an assembly or committee. In this paper, we characterize the worst-case complexity of manipulation and control in the context of four prominent multi-winner voting systems, under different formulations of the strategic agent's goal.