Results 1–10 of 265
Algorithmic mechanism design
Games and Economic Behavior, 1999
Abstract

Cited by 662 (23 self)
We consider algorithmic problems in a distributed setting where the participants cannot be assumed to follow the algorithm but rather their own self-interest. As such participants, termed agents, are capable of manipulating the algorithm, the algorithm designer should ensure in advance that the agents' interests are best served by behaving correctly. Following notions from the field of mechanism design, we suggest a framework for studying such algorithms. Our main technical contribution concerns the study of a representative task scheduling problem for which the standard mechanism design tools do not suffice.
Fast Approximation Algorithms for Fractional Packing and Covering Problems
, 1995
Abstract

Cited by 260 (13 self)
This paper presents fast algorithms that find approximate solutions for a general class of problems, which we call fractional packing and covering problems. The only previously known algorithms for solving these problems are based on general linear programming techniques. The techniques developed in this paper greatly outperform the general methods in many applications, and are extensions of a method previously applied to find approximate solutions to multicommodity flow problems. Our algorithm is a Lagrangean relaxation technique; an important aspect of our results is that we obtain a theoretical analysis of the running time of a Lagrangean relaxation-based algorithm. We give several applications of our algorithms. The new approach yields several orders of magnitude of improvement over the best previously known running times for algorithms for the scheduling of unrelated parallel machines in both the preemptive and the non-preemptive models, for the job shop problem, for th...
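As a point of reference for the problem class the abstract describes, a fractional packing LP can be solved with a general-purpose LP solver — the baseline approach the paper's Lagrangean-relaxation technique is designed to outperform. The matrix, capacities, and profits below are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Fractional packing: maximize c.x subject to A x <= b, x >= 0.
# A, b, c are made-up example data.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0, 1.0])

# linprog minimizes, so negate c to maximize c.x.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
print(res.x, -res.fun)  # optimal fractional solution and its value
```

The paper's point is that for large structured instances, Lagrangean-relaxation methods can beat such general-purpose solvers by orders of magnitude.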
Scheduling to Minimize Average Completion Time: Offline and Online Algorithms
, 1996
Abstract

Cited by 227 (24 self)
Time-indexed linear programming formulations have recently received a great deal of attention for their practical effectiveness in solving a number of single-machine scheduling problems. We show that these formulations are also an important tool in the design of approximation algorithms with good worst-case performance guarantees. We give simple new rounding techniques to convert an optimal fractional solution into a feasible schedule for which we can prove a constant-factor performance guarantee, thereby giving the first theoretical evidence of the strength of these relaxations. Specifically, we consider the problem of minimizing the total weighted job completion time on a single machine subject to precedence constraints, and give a polynomial-time (4 + ε)-approximation algorithm, for any ε > 0; the best previously known guarantee for this problem was superlogarithmic. With somewhat larger constants, we also show how to extend this result to the case with release date constraints, ...
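For intuition about the objective, the single-machine case without precedence constraints is solved exactly by Smith's weighted-shortest-processing-time rule; the paper's LP rounding is what handles the harder precedence-constrained case. A minimal sketch with made-up jobs:

```python
# Smith's rule (WSPT): schedule jobs in non-increasing weight/time ratio.
# Exactly optimal for one machine with NO precedence constraints.
def wspt_order(jobs):
    return sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)

def total_weighted_completion(schedule):
    t = total = 0
    for w, p in schedule:
        t += p           # completion time of this job
        total += w * t
    return total

jobs = [(3, 2), (1, 4), (2, 1)]  # (weight, processing time), made up
order = wspt_order(jobs)
print(order, total_weighted_completion(order))
```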
Approximation Algorithms for Disjoint Paths Problems
, 1996
Abstract

Cited by 166 (0 self)
The construction of disjoint paths in a network is a basic issue in combinatorial optimization: given a network, and specified pairs of nodes in it, we are interested in finding disjoint paths between as many of these pairs as possible. This leads to a variety of classical NP-complete problems for which very little is known from the point of view of approximation algorithms. It has recently been brought into focus in work on problems such as VLSI layout and routing in high-speed networks; in these settings, the current lack of understanding of the disjoint paths problem is often an obstacle to the design of practical heuristics.
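One natural greedy heuristic for the edge-disjoint variant — a sketch of the problem setting, not an algorithm taken from the paper — routes each pair along a shortest path and then deletes that path's edges so later pairs cannot reuse them:

```python
from collections import deque

def bfs_path(adj, s, t):
    # Shortest s-t path by BFS; returns a node list or None.
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def greedy_disjoint_paths(adj, pairs):
    # Route each pair on a shortest path, then delete its edges so the
    # paths chosen for later pairs are edge-disjoint from it.
    routed = []
    for s, t in pairs:
        path = bfs_path(adj, s, t)
        if path is None:
            continue
        routed.append((s, t, path))
        for u, v in zip(path, path[1:]):
            adj[u].remove(v)
            adj[v].remove(u)
    return routed

# A 4-cycle: the two crossing pairs contend for edges, so only one routes.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
routed = greedy_disjoint_paths(adj, [(0, 2), (1, 3)])
print(routed)
```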
Fairness and Load Balancing in Wireless LANs Using Association Control
Abstract

Cited by 157 (3 self)
Recent studies on operational wireless LANs (WLANs) have shown that the traffic load is often unevenly distributed among the access points (APs). Such load imbalance results in unfair bandwidth allocation among users. We argue that the load imbalance and consequent unfair bandwidth allocation can be greatly alleviated by intelligently associating users to APs, termed association control, rather than having users associate with the APs of strongest signal strength. In this paper, we present an efficient algorithmic solution to determine the user-AP associations for max-min fair bandwidth allocation. We provide a rigorous formulation of the association control problem, considering bandwidth constraints of both the wireless and backhaul links. We show the strong correlation between fairness and load balancing, which enables us to use load balancing techniques for obtaining optimal max-min fair bandwidth allocation. As this problem is NP-hard, we devise algorithms that achieve constant-factor approximation. In particular, we present a 2-approximation algorithm for unweighted users and a 3-approximation algorithm for weighted users. In our algorithms, we first compute a fractional association solution, in which users can be associated with multiple APs simultaneously. This solution guarantees the fairest bandwidth allocation in terms of max-min fairness. Then, by utilizing a rounding method, we obtain the integral solution from the fractional solution. We also consider time fairness and present a polynomial-time algorithm for optimal integral solution. We further extend our schemes for the online case where users may join and leave dynamically. Our simulations demonstrate that the proposed algorithms achieve close to optimal load balancing (i.e., max-min fairness) and they outperform commonly used heuristic approaches.
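The max-min fairness criterion at the heart of the paper can be illustrated on a single shared link via progressive filling (this sketches the fairness objective only, not the paper's association-control algorithms; the capacity and demands are made up):

```python
def max_min_fair(capacity, demands):
    # Progressive filling: grow all allocations at an equal rate; a user
    # whose demand is met drops out and frees capacity for the rest.
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)
        saturated = set()
        for i in sorted(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if alloc[i] >= demands[i] - 1e-12:
                saturated.add(i)
        if not saturated:
            break  # everyone absorbed a full equal share
        active -= saturated
    return alloc

# Capacity 10 shared by demands 2, 8, 10: the small user is fully
# served, the rest split the leftover equally.
result = max_min_fair(10.0, [2.0, 8.0, 10.0])
print(result)
```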
A PTAS for the Multiple Knapsack Problem
, 1993
Abstract

Cited by 113 (2 self)
The Multiple Knapsack problem (MKP) is a natural and well-known generalization of the single knapsack problem and is defined as follows. We are given a set of n items and m bins (knapsacks) such that each item i has a profit p(i) and a size s(i), and each bin j has a capacity c(j). The goal is to find a subset of items of maximum profit such that they have a feasible packing in the bins. MKP is a special case of the Generalized Assignment problem (GAP) where the profit and the size of an item can vary based on the specific bin that it is assigned to. GAP is APX-hard and a 2-approximation for it is implicit in the work of Shmoys and Tardos [26], and thus far, this was also the best known approximation for MKP. The main result of this paper is a polynomial time approximation scheme for MKP. Apart from its inherent theoretical interest as a common generalization of the well-studied knapsack and bin packing problems, it appears to be the strongest special case of GAP that is not APX-hard. We substantiate this by showing that slight generalizations of MKP that are very restricted versions of GAP are APX-hard. Thus our results help demarcate the boundary at which instances of GAP become APX-hard. An interesting and novel aspect of our approach is an approximation-preserving reduction from an arbitrary instance of MKP to an instance with O(log n) distinct sizes and profits.
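For intuition about the problem itself (not the paper's PTAS), a naive profit-density greedy with first-fit bin placement looks like the sketch below; its gap from optimal is exactly what approximation schemes address. Item and capacity data are made up:

```python
def greedy_mkp(items, capacities):
    # Profit-density greedy with first-fit bin placement.
    # A naive heuristic for intuition; NOT the paper's PTAS.
    residual = list(capacities)
    packing = [[] for _ in capacities]
    total = 0
    by_density = sorted(range(len(items)),
                        key=lambda i: items[i][0] / items[i][1],
                        reverse=True)
    for i in by_density:
        profit, size = items[i]
        for j in range(len(residual)):
            if size <= residual[j]:
                residual[j] -= size
                packing[j].append(i)
                total += profit
                break
    return total, packing

items = [(10, 5), (6, 4), (3, 3)]   # (profit, size)
total, packing = greedy_mkp(items, capacities=[5, 4])
print(total, packing)
```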
Efficient sparse matrix-vector multiplication on CUDA
, 2008
Abstract

Cited by 113 (2 self)
The massive parallelism of graphics processing units (GPUs) offers tremendous performance in many high-performance computing applications. While dense linear algebra readily maps to such platforms, harnessing this potential for sparse matrix computations presents additional challenges. Given its role in iterative methods for solving sparse linear systems and eigenvalue problems, sparse matrix-vector multiplication (SpMV) is of singular importance in sparse linear algebra. In this paper we discuss data structures and algorithms for SpMV that are efficiently implemented on the CUDA platform for the fine-grained parallel architecture of the GPU. Given the memory-bound nature of SpMV, we emphasize memory bandwidth efficiency and compact storage formats. We consider a broad spectrum of sparse matrices, from those that are well-structured and regular to highly irregular matrices with large imbalances in the distribution of nonzeros per matrix row. We develop methods to exploit several common forms of matrix structure while offering alternatives which accommodate greater irregularity. On structured, grid-based matrices we achieve performance of 36 GFLOP/s in single precision and 16 GFLOP/s in double precision on a GeForce GTX 280 GPU. For unstructured finite-element matrices, we observe performance in excess of 15 GFLOP/s and 10 GFLOP/s in single and double precision respectively. These results compare favorably to prior state-of-the-art studies of SpMV methods on conventional multicore processors. Our double precision SpMV performance is generally two and a half times that of a Cell BE with 8 SPEs and more than ten times greater than that of a quad-core Intel Clovertown system.
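The compressed sparse row (CSR) format at the center of such SpMV kernels can be sketched in plain Python; in a GPU scalar-CSR kernel each row's inner loop would run in its own thread, whereas here the rows are processed serially. The example matrix is made up:

```python
def csr_spmv(data, indices, indptr, x):
    # y = A @ x for A stored in CSR form: data holds nonzero values,
    # indices their column positions, indptr the per-row extents.
    n_rows = len(indptr) - 1
    y = [0.0] * n_rows
    for row in range(n_rows):
        acc = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            acc += data[k] * x[indices[k]]
        y[row] = acc
    return y

# CSR encoding of [[1, 0, 2], [0, 3, 0], [4, 0, 5]] (illustrative).
data = [1.0, 2.0, 3.0, 4.0, 5.0]
indices = [0, 2, 1, 0, 2]
indptr = [0, 2, 3, 5]
print(csr_spmv(data, indices, indptr, [1.0, 1.0, 1.0]))
```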
Computing correlated equilibria in Multi-Player Games
STOC'05, 2005
Abstract

Cited by 96 (6 self)
We develop a polynomial-time algorithm for finding correlated equilibria (a well-studied notion of rationality due to Aumann that generalizes the Nash equilibrium) in a broad class of succinctly representable multiplayer games, encompassing essentially all known kinds, including all graphical games, polymatrix games, congestion games, scheduling games, local effect games, as well as several generalizations. Our algorithm is based on a variant of the existence proof due to Hart and Schmeidler [11], and employs linear programming duality, the ellipsoid algorithm, Markov chain steady-state computations, as well as application-specific methods for computing multivariate expectations.
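For a single small normal-form game, the correlated equilibria form an explicitly writable LP (the paper's contribution is handling succinct games whose explicit LP would be exponentially large). A sketch for a 2×2 game of "chicken" with illustrative payoffs, maximizing social welfare over all correlated equilibria:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: p(a, b), the probability a mediator recommends row action a
# and column action b. Obeying a recommendation must pay at least as
# much as any unilateral deviation. Payoffs are made-up "chicken" values.
u1 = np.array([[0.0, 7.0],
               [2.0, 6.0]])   # row player: actions (dare, chicken)
u2 = u1.T                     # symmetric game

def flat(a, b):
    return 2 * a + b          # index of p(a, b) in the 4-vector

A_ub, b_ub = [], []
# Row player: switching from recommended a to a2 must not gain.
for a in range(2):
    for a2 in range(2):
        if a2 != a:
            row = [0.0] * 4
            for b in range(2):
                row[flat(a, b)] = u1[a2, b] - u1[a, b]  # gain <= 0
            A_ub.append(row)
            b_ub.append(0.0)
# Column player, symmetrically.
for b in range(2):
    for b2 in range(2):
        if b2 != b:
            row = [0.0] * 4
            for a in range(2):
                row[flat(a, b)] = u2[a, b2] - u2[a, b]
            A_ub.append(row)
            b_ub.append(0.0)

c = -(u1 + u2).flatten()      # maximize expected social welfare
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=[[1.0] * 4], b_eq=[1.0],
              bounds=[(0, None)] * 4)
p = res.x.reshape(2, 2)
print(p, -res.fun)            # welfare-maximizing correlated equilibrium
```

For these payoffs the LP puts mass on (dare, chicken), (chicken, dare), and (chicken, chicken), achieving welfare strictly above any pure outcome a Nash analysis of this game allows.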
Improved Approximation Algorithms for Shop Scheduling Problems
, 1994
Abstract

Cited by 90 (7 self)
In the job shop scheduling problem we are given m machines and n jobs; a job consists of a sequence of operations, each of which must be processed on a specified machine; the objective is to complete all jobs as quickly as possible. This problem is strongly NP-hard even for very restrictive special cases. We give the first randomized and deterministic polynomial-time algorithms that yield polylogarithmic approximations to the optimal length schedule. Our algorithms also extend to the more general case where a job is given not by a linear ordering of the machines on which it must be processed but by an arbitrary partial order. Comparable bounds can also be obtained when there are m′ types of machines, a specified number of machines of each type, and each operation must be processed on one of the machines of a specified type, as well as for the problem of scheduling unrelated parallel machines subject to chain precedence constraints. Key Words: scheduling, approximation algorithms AM...
A note on the prize collecting traveling salesman problem
, 1993
Abstract

Cited by 87 (5 self)
We study the version of the prize collecting traveling salesman problem where the objective is to find a tour that visits a subset of vertices such that the length of the tour plus the sum of penalties associated with vertices not in the tour is as small as possible. We present an approximation algorithm with a constant performance bound. The algorithm is based on Christofides' algorithm for the traveling salesman problem, as well as a method to round fractional solutions of a linear programming relaxation to integers that are feasible for the original problem.
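To make the objective concrete, a brute-force evaluator for tiny instances is sketched below (exponential time; the paper's constant-factor algorithm is the scalable alternative). The distance matrix and penalties are made up:

```python
from itertools import combinations, permutations

def pctsp_brute_force(dist, penalty, depot=0):
    # Evaluate every vertex subset containing the depot: cost is the
    # shortest cycle through the subset plus the penalties of all
    # skipped vertices. For tiny illustrative instances only.
    n = len(dist)
    others = [v for v in range(n) if v != depot]
    best_cost, best_set = float("inf"), None
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            skipped = sum(penalty[v] for v in others if v not in subset)
            tour = 0.0 if not subset else min(
                sum(dist[a][b]
                    for a, b in zip((depot,) + perm, perm + (depot,)))
                for perm in permutations(subset))
            if tour + skipped < best_cost:
                best_cost, best_set = tour + skipped, (depot,) + subset
    return best_cost, best_set

# 3 vertices, unit distances: skipping vertex 2 (cheap penalty) beats
# both visiting everyone and paying vertex 1's large penalty.
dist = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
penalty = [0, 5, 0.5]  # penalty[0] unused: the depot is always visited
print(pctsp_brute_force(dist, penalty))
```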