Results 1–10 of 76
Selfish Routing and the Price of Anarchy
 MATHEMATICAL PROGRAMMING SOCIETY NEWSLETTER
, 2007
Abstract

Cited by 255 (11 self)
Selfish routing is a classical mathematical model of how self-interested users might route traffic through a congested network. The outcome of selfish routing is generally inefficient, in that it fails to optimize natural objective functions. The price of anarchy is a quantitative measure of this inefficiency. We survey recent work that analyzes the price of anarchy of selfish routing. We also describe related results on bounding the worst-possible severity of a phenomenon called Braess’s Paradox, and on three techniques for reducing the price of anarchy of selfish routing. This survey concentrates on the contributions of the author’s PhD thesis, but also discusses several more recent results in the area.
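The 4/3 bound for linear latencies that recurs throughout this line of work is usually illustrated with Pigou's example. A minimal sketch (the instance below is the standard textbook one, not code from the survey):

```python
# Pigou's example: one unit of traffic, two parallel links.
# Link A has constant latency 1; link B has latency equal to its flow x.
# (A classic illustration of the price of anarchy; the instance is the
# standard textbook one, not taken from the survey itself.)

def total_latency(x_b):
    """Total travel time when x_b units use link B and 1 - x_b use link A."""
    return (1 - x_b) * 1 + x_b * x_b

# Selfish routing: every user prefers link B as long as its latency x <= 1,
# so at equilibrium all traffic is on B.
nash_cost = total_latency(1.0)   # = 1.0

# The optimum splits traffic: minimize (1 - x) + x^2, attained at x = 1/2.
opt_cost = total_latency(0.5)    # = 0.75

price_of_anarchy = nash_cost / opt_cost
print(price_of_anarchy)          # 4/3, the tight bound for linear latencies
```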
The price of routing unsplittable flow
 In Proc. 37th Symp. Theory of Computing (STOC)
, 2005
Abstract

Cited by 140 (3 self)
The essence of the routing problem in real networks is that the traffic demand from a source to a destination must be satisfied by choosing a single path between source and destination. The splittable version of this problem is when demand can be satisfied by many paths, namely a flow from source to destination. The unsplittable, or discrete, version of the problem is more realistic yet more complex from the algorithmic point of view; in some settings optimizing such unsplittable traffic flow is computationally intractable. In this paper, we assume this more realistic unsplittable model, and investigate the “price of anarchy”, or deterioration of network performance measured in total traffic latency under selfish user behavior. We show that for linear edge latency functions the price of anarchy is exactly 2.618 for weighted demand and exactly 2.5 for unweighted demand. These results are easily extended to (weighted or unweighted) atomic “congestion games”, where paths are replaced by general subsets. We also show that for edge latency functions that are polynomials of degree d, the price of anarchy is d^Θ(d). Our results hold also for mixed strategies. Previous results of Roughgarden and Tardos showed that for linear edge latency functions the price of anarchy is exactly 4/3 under the assumption that each user controls only a negligible fraction of the overall traffic (this result also holds for the splittable case). Note that under the assumption of negligible traffic, pure and mixed strategies are equivalent, and splittable and unsplittable models are equivalent.
Pricing network edges for heterogeneous selfish users
 Proc. of STOC
, 2003
Abstract

Cited by 113 (9 self)
We study the negative consequences of selfish behavior in a congested network and economic means of influencing such behavior. We consider the model of selfish routing defined by Wardrop [30] and studied in a computer science context by Roughgarden and Tardos [26]. In this model, the latency experienced by network traffic on an edge of the network is a function of the edge congestion, and network users are assumed to selfishly route traffic on minimum-latency paths. The quality of a routing of traffic is measured by the sum of travel times (the total latency). It is well known that the outcome of selfish routing (a Nash equilibrium) does not minimize the total latency and can be improved upon with coordination. An ancient strategy for improving the selfish solution is the principle of marginal cost pricing, which asserts that on each edge of the network, each network user on the edge should pay a tax offsetting the congestion effects caused by its presence. By pricing network edges according to this principle, the inefficiency of selfish routing can always be eradicated. This result, while fundamental, assumes a very strong homogeneity property: all network users are assumed to trade off time and money in an identical way. The guarantee also ignores both the algorithmic …
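Marginal-cost taxes of the kind described above can be worked out on Pigou's example. The following is my own illustrative sketch, not code from the paper; it uses the standard rule that the tax on an edge with latency l(x) is x·l'(x) evaluated at the target flow:

```python
# Marginal-cost pricing on Pigou's example (illustrative, not from the
# paper): link A has latency l_A(x) = 1, link B has l_B(x) = x, and one
# unit of traffic must be routed.

# Link A is constant, so its marginal-cost tax x * l_A'(x) is 0.
# Link B has l_B'(x) = 1, so the tax at the optimal flow x* is x* itself.
x_opt = 0.5              # optimal flow on link B (minimizes (1 - x) + x^2)
tax_b = x_opt * 1.0      # x* * l_B'(x*)

# With the tax, a user on B perceives cost x + tax_b; equilibrium is where
# the perceived costs of the two links coincide: x + 0.5 = 1, i.e. x = 0.5.
x_eq = 1.0 - tax_b

# Actual total latency (taxes excluded) now matches the optimum.
total_cost = (1 - x_eq) * 1 + x_eq * x_eq
print(x_eq, total_cost)
```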
Optimal mechanism design and money burning
 STOC ’08
, 2008
Abstract

Cited by 57 (15 self)
Mechanism design is now a standard tool in computer science for aligning the incentives of self-interested agents with the objectives of a system designer. There is, however, a fundamental disconnect between the traditional application domains of mechanism design (such as auctions) and those arising in computer science (such as networks): while monetary transfers (i.e., payments) are essential for most of the known positive results in mechanism design, they are undesirable or even technologically infeasible in many computer systems. Classical impossibility results imply that the reach of mechanisms without transfers is severely limited. Computer systems typically do have the ability to reduce service quality—routing systems can drop or delay traffic, scheduling protocols can delay the release of jobs, and computational payment schemes can require computational payments from users (e.g., in spam-fighting systems). Service degradation is tantamount to requiring that users burn money, and such “payments” can be used to influence the preferences of the agents at a cost of degrading the social surplus. We develop a framework for the design and analysis of money-burning mechanisms to maximize the residual surplus—the total value of the chosen outcome minus the payments required. Our primary contributions are the following.
• We define a general template for prior-free optimal mechanism design that explicitly connects Bayesian optimal mechanism design, the dominant paradigm in economics, with worst-case analysis. In particular, we establish a general and principled way to identify appropriate performance benchmarks in prior-free mechanism design.
• For general single-parameter agent settings, we char…
Coordination mechanisms
 PROCEEDINGS OF THE 31ST INTERNATIONAL COLLOQUIUM ON AUTOMATA, LANGUAGES AND PROGRAMMING, IN: LECTURE NOTES IN COMPUTER SCIENCE
, 2004
Abstract

Cited by 57 (5 self)
We introduce the notion of coordination mechanisms to improve the performance of systems with independent, selfish, and non-colluding agents. The quality of a coordination mechanism is measured by its price of anarchy—the worst-case performance of a Nash equilibrium relative to the (centrally controlled) social optimum. We give upper and lower bounds on the price of anarchy for selfish task allocation and congestion games.
Coordination Mechanisms for Selfish Scheduling
, 2006
Abstract

Cited by 47 (6 self)
In machine scheduling, a set of jobs must be scheduled on a set of machines so as to minimize some global objective function, such as the makespan considered in this paper. In practice, jobs are often controlled by independent, selfishly acting agents, each of which selects a machine for processing that minimizes the (expected) completion time of its job. This scenario can be formalized as a game in which the players are job owners, the strategies are machines, and the disutility to each player in a strategy profile is the completion time of its job in the corresponding schedule. The equilibria of these games may result in larger-than-optimal overall makespan. The price of anarchy is the ratio of the worst-case equilibrium makespan to the optimal makespan. In this paper, we design and analyze scheduling policies, or coordination mechanisms, for machines which aim to minimize the price of anarchy (restricted to pure Nash equilibria) of the corresponding game. We study coordination mechanisms for four classes of multiprocessor machine scheduling problems and derive upper and lower bounds for the price of anarchy of these mechanisms. For several of the proposed mechanisms, we are also able to prove that the system converges to a pure-strategy Nash equilibrium in a linear number of rounds. Finally, we note that our results are applicable to several practical problems arising in communication networks.
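The scheduling game described in this abstract can be simulated with best-response dynamics. Below is a minimal sketch for the simplest case of identical machines, where a job's completion time is its machine's load; this is my own toy construction, not one of the paper's mechanisms:

```python
# Best-response dynamics for selfish load balancing on identical machines
# (a toy sketch, not the paper's coordination mechanisms): each job's
# disutility is the load of its machine, and jobs keep moving while some
# job can strictly lower its completion time.

def best_response_dynamics(jobs, m):
    """jobs: list of processing times; m: number of machines.
    Returns the machine index chosen by each job at equilibrium."""
    assign = [0] * len(jobs)          # start with every job on machine 0
    improved = True
    while improved:
        improved = False
        for j, p in enumerate(jobs):
            loads = [0] * m
            for k, q in enumerate(jobs):
                loads[assign[k]] += q
            # load job j would experience on machine i
            cost = lambda i: loads[i] + (0 if i == assign[j] else p)
            best = min(range(m), key=cost)
            if cost(best) < loads[assign[j]]:
                assign[j] = best      # strict improvement: move
                improved = True
    return assign

jobs = [3, 3, 2, 2, 2]
assign = best_response_dynamics(jobs, 2)
loads = [sum(p for j, p in enumerate(jobs) if assign[j] == i) for i in range(2)]
print(sorted(loads))   # at equilibrium no job can move and finish earlier
```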
(Almost) optimal coordination mechanisms for unrelated machine scheduling
 IN 18TH ACM-SIAM SYMP. ON DISCRETE ALGORITHMS (SODA)
, 2008
Abstract

Cited by 33 (6 self)
We investigate the influence of different algorithmic choices on the approximation ratio in selfish scheduling. Our goal is to design local policies that minimize the inefficiency of the resulting equilibria. In particular, we design optimal coordination mechanisms for unrelated machine scheduling, and improve the known approximation ratio from Θ(m) to Θ(log m), where m is the number of machines. A local policy for each machine orders the set of jobs assigned to it based only on parameters of those jobs. A strongly local policy uses only the processing times of jobs on the same machine. We prove that the approximation ratio of any set of strongly local ordering policies in equilibria is at least Ω(m). In particular, this implies that the approximation ratio of a greedy shortest-first algorithm for machine scheduling is at least Ω(m). This closes the gap between the known lower and upper bounds for this problem, and answers an open question raised by Ibarra and Kim [16], and Davis and Jaffe [10]. We then design a local ordering policy with an approximation ratio of Θ(log m) in equilibria, and prove that this policy is optimal among all local ordering policies. This policy orders the jobs in nondecreasing order of their inefficiency, i.e., the ratio between the processing time on that machine and the minimum processing time. Finally, we show that best responses of players under the inefficiency-based policy may not converge to a pure Nash equilibrium, and present a Θ(log² m) policy for which we can prove fast convergence of best responses to pure Nash equilibria.
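The inefficiency-based ordering rule mentioned in the abstract can be sketched as follows. This is an illustrative reimplementation of the ordering rule only, on a hypothetical instance; completion times are simplified to sequential processing on each machine:

```python
# Sketch of an inefficiency-based local ordering policy (illustrative,
# not the authors' code): machine i processes its assigned jobs in
# nondecreasing order of p[j][i] / min_k p[j][k], and a job finishes
# after all jobs ordered before it on its machine.

def completion_times(p, assign):
    """p[j][i]: processing time of job j on machine i;
    assign[j]: machine chosen by job j. Returns job -> finish time."""
    m = len(p[0])
    finish = {}
    for i in range(m):
        on_i = [j for j in range(len(p)) if assign[j] == i]
        # sort by inefficiency of job j on machine i (stable sort)
        on_i.sort(key=lambda j: p[j][i] / min(p[j]))
        t = 0.0
        for j in on_i:
            t += p[j][i]
            finish[j] = t
    return finish

# Two machines, three jobs (hypothetical instance).
p = [[1.0, 2.0],    # job 0: efficient on machine 0 (inefficiency 1 there)
     [4.0, 2.0],    # job 1: inefficient on machine 0 (ratio 2)
     [2.0, 2.0]]    # job 2: indifferent (ratio 1 everywhere)
assign = [0, 0, 0]  # everyone crowds machine 0
print(completion_times(p, assign))  # job 1 is ordered last on machine 0
```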
Stackelberg thresholds in network routing games, or the value of altruism
 In Proc. of the 8th ACM conference on Electronic Commerce,
, 2007
Abstract

Cited by 25 (0 self)
Non-cooperative network routing games are a natural model of users trying to selfishly route flow through a network in order to minimize their own delays. It is well known that the solution resulting from this selfish routing (called the Nash equilibrium) can have social cost strictly higher than the cost of the optimum solution. One way to improve the quality of the resulting solution is to centrally control a fraction of the flow. A natural problem for the network administrator then is to route the centrally controlled flow in such a way that the overall cost of the solution is minimized after the remaining fraction has routed itself selfishly. This problem falls in the class of well-studied Stackelberg routing games. We consider the scenario where the network administrator wants the final solution to be (strictly) better than the Nash equilibrium. In other words, she wants to control enough flow such that the cost of the resulting solution is strictly less than the cost of the Nash equilibrium. We call the minimum fraction of users that must be centrally routed to improve the quality of the resulting solution the Stackelberg threshold. We give a closed-form expression for the Stackelberg threshold for parallel-link networks with linear latency functions. The expression is in terms of Nash equilibrium flows and optimum flows. It turns out that the Stackelberg threshold is the minimum of the Nash flows on links which carry more optimum flow than Nash flow. Using our approach to characterizing Stackelberg thresholds, we are able to give a simpler proof of an earlier result which finds the minimum fraction required to be centrally controlled to induce an optimum solution.
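The stated characterization of the Stackelberg threshold can be checked on a toy two-link instance. This is my own construction (assuming linear latencies l1(x) = x, l2(x) = 1 and unit demand), not an example from the paper:

```python
# Toy check of the threshold characterization (my own construction, not
# from the paper): two parallel links with latencies l1(x) = x and
# l2(x) = 1, and one unit of total flow.

# Nash equilibrium: users prefer link 1 while l1(x) = x <= 1, so all
# traffic ends up on link 1, each experiencing latency 1.
nash = [1.0, 0.0]
nash_cost = nash[0] * nash[0] + nash[1] * 1.0   # = 1.0

# Optimum: minimize x^2 + (1 - x) over the flow x on link 1, giving x = 1/2.
opt = [0.5, 0.5]

# Stackelberg threshold: minimum Nash flow over links that carry more
# optimum flow than Nash flow (here only link 2 qualifies, with Nash flow 0).
threshold = min(nash[i] for i in range(2) if opt[i] > nash[i])
print(threshold)   # 0.0: any positive centrally routed fraction helps here

# Sanity check: centrally routing fraction a on link 2 gives total cost
# (1 - a)^2 + a, which is strictly below the Nash cost for 0 < a < 1.
a = 0.1
assert (1 - a) ** 2 + a < nash_cost
```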
Altruism, selfishness, and spite in traffic routing
 In Proc. 9th Conf. Electronic Commerce (EC)
, 2008
Abstract

Cited by 24 (4 self)
In this paper, we study the price of anarchy of traffic routing, under the assumption that users are partially altruistic or spiteful. We model such behavior by positing that the “cost” perceived by a user is a linear combination of the actual latency of the route chosen (selfish component) and the increase in latency the user causes for others (altruistic component). We show that if all users have a coefficient of at least β > 0 for the altruistic component, then the price of anarchy is bounded by 1/β, for all network topologies, arbitrary commodities, and arbitrary semi-convex latency functions. We extend this result to give more precise bounds on the price of anarchy for specific classes of latency functions, even for β < 0, modeling spiteful behavior. In particular, we show that if all latency functions are linear, the price of anarchy is bounded by 4/(3 + 2β − β²). We next study non-uniform altruism distributions, where different users may have different coefficients β. We prove that all such games, even with infinitely many types of players, have a Nash equilibrium. We show that if the average of the coefficients for the altruistic components of all users is β̄, then the price of anarchy is bounded by 1/β̄, for single-commodity parallel-link networks and arbitrary convex latency functions. In particular, this result generalizes, albeit non-constructively, the Stackelberg routing results of Roughgarden and of Swamy. More generally, we bound the price of anarchy based on the class of allowable latency functions, and as a corollary obtain tighter bounds for Stackelberg routing than a recent result of Swamy.
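The linear-latency bound 4/(3 + 2β − β²) stated above can be sanity-checked on Pigou's example. This is my own numerical check, not code from the paper; it assumes a β-altruist on an edge with latency l(x) perceives l(x) + β·x·l'(x):

```python
# Sanity check of the bound 4 / (3 + 2*beta - beta^2) on Pigou's example
# (my own construction): link 1 has latency 1, link 2 has latency x, and
# one unit of flow. A beta-altruist on link 2 perceives x + beta * x, so
# equilibrium satisfies (1 + beta) * x = 1.

def poa(beta):
    """Ratio of equilibrium cost to optimal cost on Pigou's example."""
    x = 1.0 / (1.0 + beta)          # equilibrium flow on link 2
    eq_cost = (1 - x) * 1 + x * x   # actual total latency at equilibrium
    opt_cost = 0.75                 # optimal split: half on each link
    return eq_cost / opt_cost

# The instance never exceeds the stated bound for these sample coefficients.
for beta in [0.0, 0.25, 0.5, 1.0]:
    bound = 4.0 / (3.0 + 2.0 * beta - beta ** 2)
    assert poa(beta) <= bound + 1e-9

print(poa(0.0))   # 4/3 at beta = 0, matching the purely selfish case
```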