Results 1–10 of 23
Approximate Mechanism Design Without Money
, 2009
Abstract

Cited by 58 (14 self)
The literature on algorithmic mechanism design is mostly concerned with game-theoretic versions of optimization problems to which standard economic money-based mechanisms cannot be applied efficiently. Recent years have seen the design of various truthful approximation mechanisms that rely on enforcing payments. In this paper, we advocate the reconsideration of highly structured optimization problems in the context of mechanism design. We explicitly argue for the first time that, in such domains, approximation can be leveraged to obtain truthfulness without resorting to payments. This stands in contrast to previous work where payments are ubiquitous, and (more often than not) approximation is a necessary evil that is required to circumvent computational complexity. We present a case study in approximate mechanism design without money. In our basic setting agents are located on the real line and the mechanism must select the location of a public facility; the cost of an agent is its distance to the facility. We establish tight upper and lower bounds for the approximation ratio given by strategyproof mechanisms without payments, with respect to both deterministic and randomized mechanisms, under two objective functions: the social cost, and the maximum cost. We then extend our results in two natural directions: a domain where two facilities must be located, and a domain where each agent controls multiple locations.
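The single-facility setting described above has a classic strategyproof rule that the abstract does not spell out: place the facility at the median of the reported locations, which also minimizes social cost. A minimal illustrative sketch (function names are mine, not the paper's):

```python
def median_mechanism(reports):
    # Place the facility at a median of the reported locations.
    # No agent can move the facility closer to itself by misreporting:
    # moving one's report past the median only pushes the median away.
    xs = sorted(reports)
    return xs[(len(xs) - 1) // 2]

def social_cost(facility, locations):
    # Sum of agents' distances to the facility.
    return sum(abs(x - facility) for x in locations)

locations = [0.0, 1.0, 5.0]
y = median_mechanism(locations)           # facility at 1.0
cost = social_cost(y, locations)          # 1.0 + 0.0 + 4.0 = 5.0
```

For example, the agent at 5.0 can only change the outcome by reporting a location below 1.0, which moves the facility further from it, so truthful reporting is a dominant strategy.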
Approximately Optimal Mechanism Design via Differential Privacy
, 1004
Abstract

Cited by 37 (1 self)
In this paper we study the implementation challenge in an abstract interdependent values model and an arbitrary objective function. We design a mechanism that allows for approximately optimal implementation of insensitive objective functions in ex-post Nash equilibrium. If, furthermore, values are private, then the same mechanism is strategyproof. We cast our results onto two specific models: pricing and facility location. The mechanism we design is optimal up to an additive factor of the order of magnitude of one over the square root of the number of agents and involves no utility transfers. Underlying our mechanism is a lottery between two auxiliary mechanisms: with high probability we actuate a mechanism that reduces players' influence on the choice of the social alternative, while choosing the optimal outcome with high probability. This is where the recent notion of differential privacy is employed. With the complementary probability we actuate a mechanism that is typically far from optimal but is incentive compatible. The joint mechanism inherits the desired properties from both. We thank Amos Fiat and Haim Kaplan for discussions at an early stage of this research. We thank Frank McSherry and Kunal Talwar for helping to clarify issues related to the constructions in [22]. Finally, we thank Jason Hartline,
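The differential-privacy component referenced above is commonly instantiated with the exponential mechanism of McSherry and Talwar, which samples an outcome with probability exponentially weighted by its score, so that any one agent's report has only a small influence on the outcome distribution. A hedged sketch of that building block (not the paper's exact construction; names are illustrative):

```python
import math
import random

def exponential_mechanism(scores, epsilon, sensitivity=1.0, rng=random):
    # Sample outcome r with probability proportional to
    # exp(epsilon * scores[r] / (2 * sensitivity)).
    # Small epsilon -> near-uniform (private); large epsilon -> near-optimal.
    weights = [math.exp(epsilon * s / (2 * sensitivity)) for s in scores]
    total = sum(weights)
    u = rng.random() * total
    acc = 0.0
    for r, w in enumerate(weights):
        acc += w
        if u <= acc:
            return r
    return len(weights) - 1  # numerical fallback

# With a large epsilon, the top-scoring outcome is chosen with
# overwhelming probability.
choice = exponential_mechanism([0.0, 100.0], epsilon=10.0,
                               rng=random.Random(0))
```

The lottery described in the abstract then mixes such a privacy-based rule with a far-from-optimal but incentive-compatible one, inheriting approximate optimality from the former and truthfulness from the latter.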
Sum of Us: Strategyproof Selection from the Selectors
Abstract

Cited by 21 (6 self)
We consider directed graphs over a set of n agents, where an edge (i, j) is taken to mean that agent i supports or trusts agent j. Given such a graph and an integer k ≤ n, we wish to select a subset of k agents that maximizes the sum of indegrees, i.e., a subset of the k most popular or most trusted agents. At the same time we assume that each individual agent is only interested in being selected, and may misreport its outgoing edges to this end. This problem formulation captures realistic scenarios where agents choose among themselves, which can be found in the context of Internet search, social networks like Twitter, or reputation systems like Epinions. Our goal is to design mechanisms without payments that map each graph to a k-subset of agents to be selected and satisfy the following two constraints: strategyproofness, i.e., agents cannot benefit from misreporting their outgoing edges, and approximate optimality, i.e., the sum of indegrees of the selected subset of agents is always close to optimal. Our first main result is a surprising impossibility: for k ∈ {1, ..., n − 1}, no deterministic strategyproof mechanism can provide a finite approximation ratio. Our second main result is a randomized strategyproof mechanism with an approximation ratio that
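The randomized mechanism alluded to at the end can be illustrated by a partition idea: randomly split the agents into groups and, within each group, count only incoming edges from *other* groups. Since an agent's own report never affects who wins in its own group, no agent can influence its own selection. A sketch in that spirit (the details differ from the paper's actual mechanism; names are illustrative):

```python
import random

def partition_selection(edges, n, k, rng=random):
    # edges: list of (i, j) pairs meaning agent i supports agent j.
    # Randomly assign each of the n agents to one of k groups, then pick
    # from each nonempty group the member with the most incoming edges
    # from outside the group. Ignoring within-group edges is what makes
    # the rule strategyproof.
    groups = [[] for _ in range(k)]
    for i in range(n):
        groups[rng.randrange(k)].append(i)
    selected = []
    for members in groups:
        if not members:
            continue
        member_set = set(members)
        def cross_indegree(j):
            return sum(1 for (u, v) in edges if v == j and u not in member_set)
        selected.append(max(members, key=cross_indegree))
    return selected

edges = [(0, 2), (1, 2), (3, 2), (0, 1)]
chosen = partition_selection(edges, n=4, k=2, rng=random.Random(0))
```

The price of ignoring within-group edges is exactly what drives the approximation ratio: some of the true in-degree of the selected agents is discarded.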
Is privacy compatible with truthfulness?
 In Proceedings of the 4th Conference on Innovations in Theoretical Computer Science. ACM
Abstract

Cited by 17 (1 self)
In the area of privacy-preserving data mining, a differentially private mechanism intuitively encourages people to share their data truthfully because they are at little risk of revealing their own information. However, we argue that this interpretation is incomplete because external incentives are necessary for people to participate in databases, and so data release mechanisms should not only be differentially private but also compatible with those incentives; otherwise, the data collected may be false. We apply the notion of truthfulness from game theory. In certain settings, it turns out that existing differentially private mechanisms do not encourage participants to report their information truthfully. On the positive side, we exhibit a transformation that takes truthful mechanisms and transforms them into differentially private mechanisms that remain truthful. Our transformation applies to games where the type space is small and the goal is to optimize an insensitive quantity such as social welfare. Our transformation incurs only a small additive loss in optimality, and it is computationally efficient. Combined with the VCG mechanism, our transformation implies that there exists a differentially private, truthful, and approximately efficient mechanism for any social welfare game with small type space. We also study a model where an explicit numerical cost is assigned to the information leaked by a mechanism. We show that in this case, even differential privacy may not be a strong enough notion to motivate people to participate truthfully. We show that mechanisms that release a perturbed histogram of the database may reveal too much information. We also show that, in general, any mechanism that outputs a synopsis that resembles the original database (such as the mechanism of Blum et al. (STOC ’08)) may reveal too much information.
Of independent interest, one corollary of our techniques is a new lower bound on the sample complexity of differentially private non-interactive synopsis generators.
Scheduling without payments
 In SAGT
, 2011
Abstract

Cited by 8 (0 self)
We consider mechanisms without payments for the problem of scheduling unrelated machines. Specifically, we consider truthful-in-expectation randomized mechanisms under the assumption that a machine (player) is bound by its reports: when a machine lies and reports value t̃_ij for a task instead of the actual one t_ij, it will execute for time t̃_ij if it gets the task, unless the declared value t̃_ij is less than the actual value t_ij, in which case it will execute for time t_ij. Our main technical result is an optimal mechanism for one task and n players which has approximation ratio (n + 1)/2. We also provide a matching lower bound, showing that no other truthful mechanism can achieve a better approximation ratio. This immediately gives an approximation ratio of (n + 1)/2 and n(n + 1)/2 for social cost and makespan minimization, respectively, for any number of tasks.
Multidimensional Singlepeaked Consistency and its Approximations
Abstract

Cited by 7 (1 self)
Single-peakedness is one of the most commonly used domain restrictions in social choice. However, the extent to which agent preferences are single-peaked in practice, and the extent to which recent proposals for approximate single-peakedness can further help explain voter preferences, is unclear. In this article, we assess the ability of both single-dimensional and multi-dimensional approximations to explain preference profiles drawn from several real-world elections. We develop a simple branch-and-bound algorithm that finds multi-dimensional, single-peaked axes that best fit a given profile, and which works with several forms of approximation. Empirical results on two election data sets show that preferences in these elections are far from single-peaked in any one-dimensional space, but are nearly single-peaked in two dimensions. Our algorithms are reasonably efficient in practice, and also show excellent anytime performance.
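The core consistency test that such algorithms must perform repeatedly is simple to state: a ranking is single-peaked with respect to a one-dimensional axis exactly when every prefix of the ranking occupies a contiguous interval of axis positions. A small illustrative version of that check (not the paper's code):

```python
def is_single_peaked(ranking, axis):
    # ranking: alternatives from most to least preferred.
    # axis: left-to-right ordering of the alternatives.
    # Single-peaked iff each successive alternative in the ranking
    # extends the current interval of axis positions by one on either
    # side; equivalently, every prefix is a contiguous interval.
    pos = {a: i for i, a in enumerate(axis)}
    lo = hi = pos[ranking[0]]  # the peak
    for a in ranking[1:]:
        p = pos[a]
        if p == lo - 1:
            lo = p
        elif p == hi + 1:
            hi = p
        else:
            return False
    return True

is_single_peaked(['b', 'c', 'a', 'd'], ['a', 'b', 'c', 'd'])  # True
is_single_peaked(['a', 'c', 'b', 'd'], ['a', 'b', 'c', 'd'])  # False
```

A branch-and-bound search over candidate axes can prune a partial axis as soon as some ranking fails this test (or exceeds the allowed approximation budget).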
Strategyproof mechanisms for facility location games with many facilities
 In Proc. 2nd Intl. Conference on Algorithmic Decision Theory (ADT'11), pp. 67–81, Piscataway, NJ
, 2011
Abstract

Cited by 7 (0 self)
This paper is devoted to the location of public facilities in a metric space. Selfish agents are located in this metric space, and their aim is to minimize their own cost, which is the distance from their location to the nearest facility. A central authority has to locate the facilities in the space, but she is ignorant of the true locations of the agents. The agents will therefore report their locations, but they may lie if they have an incentive to do so. We consider two social costs in this paper: the sum of the distances of the agents to their nearest facility, or the maximal distance of an agent to her nearest facility. We are interested in designing strategyproof mechanisms that have a small approximation ratio for the considered social cost. A mechanism is strategyproof if no agent has an incentive to report false information. In this paper, we design strategyproof mechanisms to locate n − 1 facilities for n agents. We study this problem in the general metric and in the tree metric spaces. We provide lower and upper bounds on the approximation ratio of deterministic and randomized strategyproof mechanisms.
Analysis and Optimization of Multidimensional Percentile Mechanisms
Abstract

Cited by 4 (2 self)
We consider the mechanism design problem for agents with single-peaked preferences over multi-dimensional domains when multiple alternatives can be chosen. Facility location and committee selection are classic embodiments of this problem. We propose a class of percentile mechanisms, a form of generalized median mechanisms, that are (group) strategyproof, and derive worst-case approximation ratios for social cost and maximum load for L1 and L2 cost models. More importantly, we propose a sample-based framework for optimizing the choice of percentiles relative to any prior distribution over preferences, while maintaining strategyproofness. Our empirical investigations, using social cost and maximum load as objectives, demonstrate the viability of this approach and the value of such optimized mechanisms vis-à-vis mechanisms derived through worst-case analysis.
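A coordinate-wise percentile rule of the kind described can be sketched as follows: in each dimension independently, place the facility at the p-th order statistic of the reported peaks. With p = 0.5 this recovers the coordinate-wise median; other percentiles trade off the objectives differently. A minimal sketch for a single facility (parameter names are illustrative, not the paper's API):

```python
def percentile_mechanism(peaks, p):
    # peaks: list of agents' ideal points, each a tuple of d coordinates.
    # p: percentile in [0, 1] applied independently in each dimension.
    # Because each coordinate of the outcome is an order statistic of the
    # reported peaks, no agent can pull it toward its own peak by lying.
    n = len(peaks)
    d = len(peaks[0])
    idx = min(int(p * n), n - 1)  # which order statistic to select
    return tuple(sorted(x[j] for x in peaks)[idx] for j in range(d))

peaks = [(0, 0), (1, 2), (4, 1)]
percentile_mechanism(peaks, 0.5)  # coordinate-wise median: (1, 1)
```

The sample-based optimization described in the abstract would then search over the percentile parameters (here just p) against samples from the preference prior, which preserves strategyproofness because the chosen percentiles are fixed before any reports are seen.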
Strategyproof Classification
, 2011
Abstract

Cited by 3 (2 self)
Experts reporting the labels used by a learning algorithm cannot always be assumed to be truthful. We describe recent advances in the design and analysis of strategyproof mechanisms for binary classification, and their relation to other mechanism design problems.