Results 1–10 of 52
Approximate Mechanism Design Without Money
, 2009
Abstract

Cited by 68 (19 self)
The literature on algorithmic mechanism design is mostly concerned with game-theoretic versions of optimization problems to which standard economic money-based mechanisms cannot be applied efficiently. Recent years have seen the design of various truthful approximation mechanisms that rely on enforcing payments. In this paper, we advocate the reconsideration of highly structured optimization problems in the context of mechanism design. We explicitly argue, for the first time, that in such domains approximation can be leveraged to obtain truthfulness without resorting to payments. This stands in contrast to previous work, where payments are ubiquitous and (more often than not) approximation is a necessary evil required to circumvent computational complexity. We present a case study in approximate mechanism design without money. In our basic setting, agents are located on the real line and the mechanism must select the location of a public facility; the cost of an agent is its distance to the facility. We establish tight upper and lower bounds on the approximation ratio achievable by strategyproof mechanisms without payments, for both deterministic and randomized mechanisms, under two objective functions: the social cost and the maximum cost. We then extend our results in two natural directions: a domain where two facilities must be located, and a domain where each agent controls multiple locations.
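The one-facility, social-cost setting described above has a classic strategyproof mechanism: place the facility at a median of the reported locations. A minimal sketch (function names are my own, not from the paper):

```python
def median_mechanism(reports):
    """Place the facility at the (left) median of the reported points.

    No agent can pull the median toward itself by misreporting, so truthful
    reporting is a dominant strategy, and the median minimizes the social
    cost (sum of distances) on the line.
    """
    xs = sorted(reports)
    return xs[(len(xs) - 1) // 2]


def social_cost(facility, locations):
    """Sum of agents' distances to the facility."""
    return sum(abs(x - facility) for x in locations)
```

For example, with agents at 0, 2, and 10 the facility is placed at 2, for a social cost of 10.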
Unweighted Coalitional Manipulation Under the Borda Rule Is NP-Hard
 PROCEEDINGS OF THE TWENTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE
Abstract

Cited by 32 (2 self)
The Borda voting rule is a positional scoring rule where, for m candidates, each vote awards the first-ranked candidate m − 1 points, the second m − 2 points, and so on. A Borda winner is a candidate with the highest total score. It has been a prominent open problem to determine the computational complexity of UNWEIGHTED COALITIONAL MANIPULATION UNDER BORDA: can one add a certain number of additional votes (called manipulators) to an election such that a distinguished candidate becomes a winner? We settle this open problem by showing NP-hardness even for two manipulators and three input votes. Moreover, we discuss extensions and limitations of this hardness result.
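The scoring scheme above is mechanical to implement; only finding a successful coalitional manipulation is hard. A sketch of Borda winner determination, assuming lexicographic tie-breaking (the hardness result above also holds in co-winner variants):

```python
def borda_winner(profile):
    """Compute Borda scores for a profile of strict rankings.

    profile: list of rankings, each a list of the same m candidates from
    most to least preferred.  A candidate in position i earns m - 1 - i
    points.  Ties are broken lexicographically (an assumption made here
    for concreteness).
    """
    m = len(profile[0])
    scores = {}
    for vote in profile:
        for rank, cand in enumerate(vote):
            scores[cand] = scores.get(cand, 0) + (m - 1 - rank)
    top = max(scores.values())
    winner = min(c for c in scores if scores[c] == top)
    return winner, scores
```

With votes a≻b≻c, a≻c≻b, and b≻c≻a, the scores are a: 4, b: 3, c: 2, so a wins.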
Optimal social choice functions: A utilitarian view
 In Proceedings of the Thirteenth ACM Conference on Electronic Commerce (EC’12)
, 2012
Abstract

Cited by 19 (9 self)
We adopt a utilitarian perspective on social choice, assuming that agents have (possibly latent) utility functions over some space of alternatives. For many reasons one might consider mechanisms, or social choice functions, that only have access to the ordinal rankings of alternatives by the individual agents rather than their utility functions. In this context, one possible objective for a social choice function is the maximization of (expected) social welfare relative to the information contained in these rankings. We study such optimal social choice functions under three different models, and underscore the important role played by scoring functions. In our worst-case model, no assumptions are made about the underlying distribution, and we analyze the worst-case distortion—or degree to which the selected alternative does not maximize social welfare—of optimal social choice functions. In our average-case model, we derive optimal functions under neutral (or impartial culture) distributional models. Finally, a very general learning-theoretic model allows for the computation of optimal social choice functions (i.e., those that maximize expected social welfare) under arbitrary, sampleable distributions. In the latter case, we provide both algorithms and sample complexity results for the class of scoring functions, and further validate the approach empirically.
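For a single concrete utility profile, the distortion of a chosen alternative is just a welfare ratio; the paper's worst-case model takes the maximum of this quantity over all consistent utility profiles. A minimal sketch of the per-profile ratio (data layout is my own assumption):

```python
def distortion(utilities, chosen):
    """Ratio of the maximum social welfare to the social welfare of the
    chosen alternative, for one concrete utility profile.

    utilities: list of dicts, one per agent, mapping alternative -> utility.
    chosen:    the alternative selected by the social choice function.
    """
    welfare = {}
    for u in utilities:
        for alt, val in u.items():
            welfare[alt] = welfare.get(alt, 0.0) + val
    return max(welfare.values()) / welfare[chosen]
```

A distortion of 1.0 means the rule picked a welfare-maximizing alternative for that profile.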
On the computation of fully proportional representation
 JOURNAL OF AI RESEARCH
, 2013
Abstract

Cited by 18 (7 self)
We investigate two systems of fully proportional representation suggested by Chamberlin & Courant and Monroe. Both systems assign a representative to each voter so that the “sum of misrepresentations” is minimized. The winner determination problem for both systems is known to be NP-hard; hence this work investigates whether there are variants of the proposed rules and/or specific electorates for which these problems can be solved efficiently. As a variation of these rules, instead of minimizing the sum of misrepresentations, we consider minimizing the maximal misrepresentation, effectively introducing two new rules. In the general case these “minimax” versions of the classical rules remain NP-hard. We investigate the parameterized complexity of winner determination for the two classical and two new rules with respect to several parameters. Here we have a mixture of positive and negative results: e.g., we prove fixed-parameter tractability with respect to the number of candidates, but fixed-parameter intractability with respect to the number of winners. For single-peaked electorates our results are overwhelmingly positive: we provide polynomial-time algorithms for most of the considered problems. The only rule that remains NP-hard for single-peaked electorates is the classical Monroe rule.
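Since winner determination is NP-hard in general, exhaustive search is the honest baseline on tiny instances. A brute-force sketch of the Chamberlin–Courant rule with Borda misrepresentation (my own illustrative formulation; exponential in the committee size):

```python
from itertools import combinations


def chamberlin_courant(profile, k):
    """Brute-force Chamberlin-Courant winner determination.

    profile: list of strict rankings (most preferred first) over the same
    candidates.  A voter's misrepresentation is the 0-based rank of its
    best committee member; we return a size-k committee minimizing the
    sum of misrepresentations.  Only usable on tiny instances.
    """
    candidates = profile[0]
    best_cost, best_committee = None, None
    for committee in combinations(candidates, k):
        cost = sum(min(vote.index(c) for c in committee) for vote in profile)
        if best_cost is None or cost < best_cost:
            best_cost, best_committee = cost, committee
    return best_committee, best_cost
```

Replacing `sum` by `max` in the cost line yields the “minimax” variant discussed above.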
On the Complexity of Voting Manipulation under Randomized Tie-Breaking
 PROCEEDINGS OF THE TWENTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE
Abstract

Cited by 18 (1 self)
Computational complexity of voting manipulation is one of the most actively studied topics in the area of computational social choice, starting with the groundbreaking work of [Bartholdi et al., 1989]. Most of the existing work in this area, including that of [Bartholdi et al., 1989], implicitly assumes that whenever several candidates receive the top score with respect to the given voting rule, the resulting tie is broken according to a lexicographic ordering over the candidates. However, until recently, an equally appealing method of tie-breaking—selecting the winner uniformly at random among all tied candidates—had not been considered in the computational social choice literature. The first paper to analyze the complexity of voting manipulation under randomized tie-breaking is [Obraztsova et al., 2011], where the authors provide polynomial-time algorithms for this problem under scoring rules and—under an additional assumption on the manipulator’s utilities—for Maximin. In this paper, we extend the results of [Obraztsova et al., 2011] by showing that finding an optimal vote under randomized tie-breaking is computationally hard for Copeland and Maximin (with general utilities), as well as for STV and Ranked Pairs, but easy for the Bucklin rule and Plurality with Runoff.
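Under uniformly random tie-breaking, the manipulator optimizes expected utility over the set of tied top-scoring candidates; evaluating that objective for one outcome is trivial, and the hardness lies in choosing the vote. A sketch of the objective itself (names are illustrative):

```python
def expected_utility(scores, utility):
    """Manipulator's expected utility when the winner is drawn uniformly
    at random from the candidates tied for the top score.

    scores:  dict candidate -> score under the voting rule.
    utility: dict candidate -> the manipulator's utility for that winner.
    """
    top = max(scores.values())
    tied = [c for c, s in scores.items() if s == top]
    return sum(utility[c] for c in tied) / len(tied)
```

With no tie this reduces to the utility of the unique winner, recovering the deterministic setting.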
Computing the Margin of Victory for Various Voting Rules
, 2012
Abstract

Cited by 16 (3 self)
The margin of victory of an election, defined as the smallest number k such that k voters can change the winner by voting differently, is an important measure of the robustness of the election outcome. It also plays an important role in implementing efficient post-election audits, which have been widely used in the United States to detect errors or fraud caused by malfunctions of electronic voting machines. In this paper, we investigate the computational complexity and (in)approximability of computing the margin of victory for various voting rules, including approval voting, all positional scoring rules (which include Borda, plurality, and veto), plurality with runoff, Bucklin, Copeland, maximin, STV, and ranked pairs. We also prove a dichotomy theorem, which states that for all continuous generalized scoring rules, including all voting rules studied in this paper, either with high probability the margin of victory is Θ(√n), or with high probability the margin of victory is Θ(n), where n is the number of voters. Most of our results are quite positive, suggesting that the margin of victory can be efficiently computed. This sheds some light on designing efficient post-election audits for voting rules beyond the plurality rule.
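For plurality the margin of victory has a simple closed form: switch voters from the current winner to the strongest challenger, each switch closing the gap by two. A sketch, under the simplifying assumption that a resulting tie already counts as changing the winner (the paper treats tie-breaking more carefully):

```python
from collections import Counter
from math import ceil


def plurality_margin_of_victory(votes):
    """Smallest k such that changing k plurality votes changes the winner.

    votes: list of candidate names, one per voter.  Assumes at least two
    distinct candidates appear, and that a tie at the top already counts
    as changing the winner (an assumption made here for simplicity).
    """
    scores = Counter(votes)
    winner, s_w = scores.most_common(1)[0]
    # Switching one winner-voter to a challenger c closes the gap by 2.
    return min(ceil((s_w - s_c) / 2)
               for c, s_c in scores.items() if c != winner)
```

For STV or ranked pairs no such formula is available, which is exactly the hardness contrast the paper studies.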
Possible Winners When New Alternatives Join: New Results Coming Up!
Abstract

Cited by 12 (5 self)
In a voting system, sometimes multiple new alternatives will join the election after the voters’ preferences over the initial alternatives have been revealed. Computing whether a given alternative can be a co-winner when multiple new alternatives join the election is called the possible co-winner with new alternatives (PcWNA) problem, introduced by Chevaleyre et al. [4, 5]. In this paper, we show that the PcWNA problem is NP-complete for the Bucklin, Copeland⁰, and Simpson (a.k.a. maximin) rules, even when the number of new alternatives is no more than a constant. We also show that the PcWNA problem can be solved in polynomial time for plurality with runoff. For the approval rule, we define three different ways to extend a linear order with new alternatives, and characterize the computational complexity of the PcWNA problem for each of them.
New Candidates Welcome! Possible Winners with respect to the Addition of New Candidates
, 2010
Abstract

Cited by 11 (5 self)
In some voting contexts, new candidates may show up in the course of the process. In this case, we may want to determine which of the initial candidates are possible winners, given that a fixed number k of new candidates will be added. We give a computational study of this problem, focusing on scoring rules, and provide a formal comparison with related problems such as control via adding candidates or cloning.
Computational aspects of nearly single-peaked electorates
 In Proceedings of the 26th AAAI Conference on Artificial Intelligence
, 2013
Abstract

Cited by 10 (6 self)
Manipulation, bribery, and control are well-studied ways of changing the outcome of an election. Many voting systems are, in the general case, computationally resistant to some of these manipulative actions. However, when restricted to single-peaked electorates, these systems suddenly become easy to manipulate. Recently, Faliszewski, Hemaspaandra, and Hemaspaandra (2011b) studied the complexity of dishonest behavior in nearly single-peaked electorates. These are electorates that are not single-peaked, but close to it according to some distance measure. In this paper we introduce several new distance measures regarding single-peakedness. We prove that determining whether a given profile is nearly single-peaked is NP-complete in many cases. For one case we present a polynomial-time algorithm. Furthermore, we explore the relations between several notions of nearly single-peakedness.
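For a fixed left-to-right axis, exact single-peakedness is easy to test: a vote is single-peaked on the axis exactly when every prefix of the ranking forms a contiguous interval of the axis. A sketch of that check (the paper's hard question, how close a profile is to single-peaked under various distances, is not attempted here):

```python
def is_single_peaked(profile, axis):
    """Check single-peakedness of a profile with respect to a given axis.

    profile: list of strict rankings (most preferred first).
    axis:    left-to-right societal order of the candidates.
    A vote is single-peaked iff, reading it top-down, each next candidate
    extends the interval of already-seen axis positions by exactly one.
    """
    pos = {c: i for i, c in enumerate(axis)}
    for vote in profile:
        lo = hi = pos[vote[0]]
        for c in vote[1:]:
            p = pos[c]
            if p == lo - 1:
                lo = p
            elif p == hi + 1:
                hi = p
            else:
                return False
    return True
```

When the axis is not given, recognizing single-peaked profiles requires finding one, which is still polynomial; the nearly single-peaked variants are where NP-completeness sets in.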