Results 1 - 10 of 43
Distributed Algorithmic Mechanism Design: Recent Results and Future Directions
2002
Cited by 283 (24 self)
Distributed Algorithmic Mechanism Design (DAMD) combines theoretical computer science’s traditional focus on computational tractability with its more recent interest in incentive compatibility and distributed computing. The Internet’s decentralized nature, in which distributed computation and autonomous agents prevail, makes DAMD a very natural approach for many Internet problems. This paper first outlines the basics of DAMD and then reviews previous DAMD results on multicast cost sharing and interdomain routing. The remainder of the paper describes several promising research directions and poses some specific open problems.
A crash course in implementation theory
Social Choice and Welfare, 2001
Cited by 119 (2 self)
This paper is meant to familiarize the audience with some of the fundamental results in the theory of implementation and provide a quick progression to some open questions in the literature.
Implementation Theory
In Kenneth Arrow, Amartya Sen, and Kotaro Suzumura, eds., Handbook of Social Choice and Welfare, vol. I, 2002
"... The implementation problem is the problem of designing a mechanism (game form) such that the equilibrium outcomes satisfy some criterion of social optimality. The early literature assumed that each agent would simply report his ..."
Abstract
-
Cited by 43 (1 self)
- Add to MetaCart
The implementation problem is the problem of designing a mechanism (game form) such that the equilibrium outcomes satisfy some criterion of social optimality. The early literature assumed that each agent would simply report his …
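For readers new to the area, the definition the abstract gestures at can be stated formally. The following is a textbook-style formulation, not a quotation from the chapter; the symbols F, Γ, g and E are generic notation.

```latex
% Textbook formulation of the implementation problem (generic notation).
% A social choice rule F assigns to each profile of private types
% \theta \in \Theta a set of acceptable outcomes F(\theta) \subseteq X.
% A mechanism (game form) \Gamma = (S_1, \dots, S_n, g) fixes strategy
% sets S_i and an outcome function g : S_1 \times \cdots \times S_n \to X.
% Given a solution concept E (dominant strategies, Nash equilibrium, ...),
% \Gamma implements F if the equilibrium outcomes coincide with the
% socially optimal ones at every type profile:
\[
  g\bigl(E(\Gamma, \theta)\bigr) = F(\theta)
  \qquad \text{for all } \theta \in \Theta .
\]
```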
Approximately Optimal Mechanism Design via Differential Privacy
2010
"... In this paper we study the implementation challenge in an abstract interdependent values model and an arbitrary objective function. We design a mechanism that allows for approximate optimal implementation of insensitive objective functions in ex-post Nash equilibrium. If, furthermore, values are pri ..."
Abstract
-
Cited by 37 (1 self)
- Add to MetaCart
(Show Context)
In this paper we study the implementation challenge in an abstract interdependent values model with an arbitrary objective function. We design a mechanism that allows for approximately optimal implementation of insensitive objective functions in ex-post Nash equilibrium. If, furthermore, values are private, then the same mechanism is strategy-proof. We cast our results onto two specific models: pricing and facility location. The mechanism we design is optimal up to an additive factor of the order of magnitude of one over the square root of the number of agents and involves no utility transfers. Underlying our mechanism is a lottery between two auxiliary mechanisms: with high probability we actuate a mechanism that reduces the players' influence on the choice of the social alternative, while choosing the optimal outcome with high probability. This is where the recent notion of differential privacy is employed. With the complementary probability we actuate a mechanism that is typically far from optimal but is incentive compatible. The joint mechanism inherits the desired properties from both.
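To make the construction concrete, here is a minimal Python sketch of such a lottery. The exponential mechanism is the standard McSherry-Talwar construction; the uniformly random fallback, the score values, and the probability q are illustrative assumptions standing in for the paper's second, incentive-compatible mechanism, not the authors' actual design.

```python
import numpy as np

def exponential_mechanism(scores, epsilon, sensitivity=1.0, rng=None):
    """McSherry-Talwar exponential mechanism: choose alternative j with
    probability proportional to exp(epsilon * scores[j] / (2 * sensitivity)).
    Any single agent's report has only a small effect on this distribution,
    which is the differential-privacy ingredient."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()              # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(scores), p=probs))

def lottery_mechanism(scores, epsilon, q, rng=None):
    """With probability 1 - q run the (approximately optimal, insensitive)
    exponential mechanism; with probability q fall back to a trivially
    incentive-compatible choice. A uniform random pick is only a stand-in
    for the paper's more elaborate second mechanism."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < q:
        return int(rng.integers(len(scores)))  # IC but typically far from optimal
    return exponential_mechanism(scores, epsilon, rng=rng)

# Illustrative use: scores[j] = number of agents who prefer alternative j.
if __name__ == "__main__":
    print(lottery_mechanism([120, 95, 40], epsilon=0.5, q=0.05))
```

In the paper's construction, the mixture inherits approximate optimality from the first branch and incentive compatibility from the second.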
Renegotiation-Proof Implementation and Time Preferences
Working Paper 14-90, 1990
"... JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JS ..."
Abstract
-
Cited by 34 (2 self)
- Add to MetaCart
JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. American Economic Association is collaborating with JSTOR to digitize, preserve and extend access to The
2007) "The Role of Independence in the Green-Lin Diamond-Dybvig Model
- Journal of Economic Theory
"... w o r k i n g ..."
(Show Context)
Mechanisms for making crowds truthful
Journal of Artificial Intelligence Research
Cited by 20 (4 self)
We consider schemes for obtaining truthful reports on a common but hidden signal from large groups of rational, self-interested agents. One example is online feedback mechanisms, where users provide observations about the quality of a product or service so that other users can have an accurate idea of what quality they can expect. However, (i) providing such feedback is costly, and (ii) there are many motivations for providing incorrect feedback. Both problems can be addressed by reward schemes which (i) cover the cost of obtaining and reporting feedback, and (ii) maximize the expected reward of a rational agent who reports truthfully. We address the design of such incentive-compatible rewards for feedback generated in environments with pure adverse selection. Here, the correlation between the true knowledge of an agent and her beliefs regarding the likelihoods of reports of other agents can be exploited to make honest reporting a Nash equilibrium. In this paper we extend existing methods for designing incentive-compatible rewards by also considering collusion. We analyze different scenarios where, for example, some or all of the agents collude. For each scenario we investigate whether a collusion-resistant, incentive-compatible reward scheme exists, and use automated mechanism design to specify an algorithm for deriving an efficient reward mechanism.
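As background for the reward schemes discussed here, the following Python sketch shows the basic peer-prediction payment such mechanisms build on: an agent is paid a proper (log) scoring rule applied to the prediction of a reference agent's report implied by her own report. The signal names and the posterior table are made-up illustrations, and this is the classical construction, not the collusion-resistant mechanism the paper derives.

```python
import math

# Illustrative posterior: Pr(reference agent reports s | my signal is r).
# The numbers are assumptions chosen for the example, not from the paper.
POSTERIOR = {
    "high": {"high": 0.8, "low": 0.2},
    "low":  {"high": 0.3, "low": 0.7},
}

def peer_prediction_payment(my_report: str, reference_report: str) -> float:
    """Log scoring rule payment: log Pr(reference report | my report)."""
    return math.log(POSTERIOR[my_report][reference_report])

# Because the log scoring rule is strictly proper, truthful reporting
# maximizes expected payment when the posterior table matches the agents'
# true beliefs and the reference agent reports truthfully.
if __name__ == "__main__":
    for my_signal in ("high", "low"):
        for report in ("high", "low"):
            ev = sum(POSTERIOR[my_signal][s] * peer_prediction_payment(report, s)
                     for s in ("high", "low"))
            print(f"signal={my_signal:4s} report={report:4s} expected payment={ev:.3f}")
```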
Robust Virtual Implementation with Incomplete Information: Towards a Reinterpretation of the Wilson Doctrine
2007
Cited by 18 (6 self)
We consider robust virtual implementation, where robustness is the requirement that implementation succeed in all type spaces coherent with a given payoff type space as well as with a given space of first-order beliefs about the other agents’ payoff types. This last bit, which constitutes our reinterpretation of the Wilson doctrine, allows us to obtain a better understanding of the limits of implementation. Our first result is that, in quasilinear environments where interim preferences of types are diverse, any incentive-compatible social choice function is robustly virtually implementable in iteratively undominated strategies. Further, we characterize robust virtual implementation in iteratively undominated strategies by means of incentive compatibility and measurability. Our work also clarifies the measurability condition in connection with the simple diversity of preferences used in our first result.
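For context, "virtual" implementation relaxes exact implementation to implementation with arbitrarily high probability; a standard textbook statement (not the paper's robust variant, which additionally quantifies over belief spaces) is the following, in generic notation.

```latex
% Virtual implementation (standard definition, generic notation).
% A social choice function f is virtually implementable if for every
% \varepsilon > 0 there is a mechanism whose equilibrium outcome at each
% type profile \theta is a lottery x(\theta) over alternatives with
\[
  \bigl\| x(\theta) - f(\theta) \bigr\| < \varepsilon
  \qquad \text{for all } \theta ,
\]
% i.e. the designer gives up exact implementation but obtains the desired
% outcome with probability at least 1 - \varepsilon.
```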
The full surplus extraction theorem with hidden actions
2003
Cited by 17 (4 self)
Consider a situation in which a principal commits to a mechanism first and then agents choose unobservable actions before their payoff-relevant types are realized. The agents' actions may affect not only their payoffs directly but also the distribution of their types. This paper extends Crémer and McLean's full surplus extraction theorem to such a setting. In this environment, it is shown that a principal may not succeed in extracting full surplus from agents when there are many actions to which the agents can deviate. However, it is also shown that a principal can extract full surplus generically given any approximately efficient (completely) mixed action profile. This is achieved by using a general mechanism where agents announce both their types and their realized actions. Therefore, with hidden actions, there is a large gap between exact full surplus extraction and approximate full surplus extraction.
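For context, the Crémer-McLean theorem that this paper extends rests on a correlation condition on beliefs. A standard statement of that condition, in generic notation and not quoted from this paper, is:

```latex
% Crémer-McLean condition (standard statement, generic notation).
% Let p_i(\cdot \mid t_i) be agent i's belief about the other agents'
% types when his own type is t_i. Full surplus extraction is possible
% whenever, for every agent i and every type t_i, this belief is not a
% convex combination of the beliefs held at i's other types:
\[
  p_i(\cdot \mid t_i) \;\notin\;
  \operatorname{conv}\bigl\{ p_i(\cdot \mid t_i') : t_i' \neq t_i \bigr\} .
\]
```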
The Theory of Implementation of Social Choice Rules
SIAM Review, 2004
Cited by 14 (3 self)
Suppose that the goals of a society can be summarized in a social choice rule, i.e., a mapping from relevant underlying parameters to final outcomes. Typically, the underlying parameters (e.g., individual preferences) are private information to the agents in society. The implementation problem is then formulated: under what circumstances can one design a mechanism so that the private information is truthfully elicited and the social optimum ends up being implemented? In designing such a mechanism, appropriate incentives will have to be given to the agents so that they do not wish to misrepresent their information. The theory of implementation or mechanism design formalizes this “social engineering” problem and provides answers to the question just posed. I survey the theory of implementation in this article, emphasizing the results based on two behavioral assumptions for the agents (dominant strategies and Nash equilibrium). Examples discussed include voting and the allocation of private and public goods under complete and incomplete information.
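As a concrete illustration of dominant-strategy implementation for allocating a private good, the sealed-bid second-price (Vickrey) auction is the canonical example. The sketch below is illustrative and not drawn from the survey itself; the bidder names and values are made up.

```python
from typing import Dict, Tuple

def second_price_auction(bids: Dict[str, float]) -> Tuple[str, float]:
    """Sealed-bid second-price (Vickrey) auction: the highest bidder wins
    and pays the second-highest bid. Bidding one's true value is a
    dominant strategy, making this the textbook example of
    dominant-strategy implementation for a single private good."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Illustrative use:
if __name__ == "__main__":
    winner, price = second_price_auction({"alice": 10.0, "bob": 7.0, "carol": 8.5})
    print(winner, price)   # alice wins and pays 8.5, the second-highest bid
```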