Results 11–20 of 553
Nonexistence of voting rules that are usually hard to manipulate
In AAAI, 2006
Abstract

Cited by 88 (8 self)
Aggregating the preferences of self-interested agents is a key problem for multiagent systems, and one general method for doing so is to vote over the alternatives (candidates). Unfortunately, the Gibbard-Satterthwaite theorem shows that when there are three or more candidates, all reasonable voting rules are manipulable (in the sense that there exist situations in which a voter would benefit from reporting its preferences insincerely). To circumvent this impossibility result, recent research has investigated whether it is possible to make finding a beneficial manipulation computationally hard. This approach has had some limited success, exhibiting rules under which the problem of finding a beneficial manipulation is NP-hard, #P-hard, or even PSPACE-hard. Thus, under these rules, it is unlikely that a computationally efficient algorithm can be constructed that always finds a beneficial manipulation (when it exists). However, this still does not preclude the existence of an efficient algorithm that often finds a successful manipulation (when it exists). There have been attempts to design a rule under which finding a beneficial manipulation is usually hard, but they have failed. To explain this failure, we show in this paper that it is in fact impossible to design such a rule, if the rule is also required to satisfy another property: a large fraction of the manipulable instances are both weakly monotone and allow the manipulators to make either of exactly two candidates win. We argue why one should expect voting rules to have this property, and show experimentally that common voting rules clearly satisfy it. We also discuss approaches for potentially circumventing this impossibility result.
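The kind of manipulation the abstract describes can be made concrete with a small sketch: a brute-force search for a beneficial misreport by one voter under the Borda rule. This is illustrative code, not from the paper; the function names, the choice of Borda, and the alphabetical tie-breaking are all assumptions of this sketch.

```python
from itertools import permutations

def borda_winner(profile, candidates):
    """Borda rule: with m candidates, a vote gives m-1-i points to the
    candidate in position i; ties broken alphabetically (an assumption)."""
    m = len(candidates)
    scores = {c: 0 for c in candidates}
    for vote in profile:
        for i, c in enumerate(vote):
            scores[c] += m - 1 - i
    return sorted(candidates, key=lambda c: (-scores[c], c))[0]

def find_manipulation(others, sincere, candidates, rule):
    """Try every possible misreport for one voter; return a vote that yields
    a winner the voter sincerely prefers to the honest outcome, if any."""
    honest = rule(others + [sincere], candidates)
    for fake in permutations(candidates):
        w = rule(others + [list(fake)], candidates)
        if sincere.index(w) < sincere.index(honest):
            return list(fake), w
    return None

# Two honest voters rank b > a > c; the manipulator sincerely has a > b > c.
others = [['b', 'a', 'c'], ['b', 'a', 'c']]
sincere = ['a', 'b', 'c']
print(borda_winner(others + [sincere], ['a', 'b', 'c']))  # 'b' wins if all vote honestly
print(find_manipulation(others, sincere, ['a', 'b', 'c'], borda_winner))
# (['a', 'c', 'b'], 'a'): burying b below c makes the preferred candidate a win
```

The example is an instance of the "beneficial manipulation" the abstract refers to: the manipulator misreports its ranking and obtains an outcome it strictly prefers.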
The theory of implementation in Nash equilibrium: A survey. In: Hurwicz, 1985
Abstract

Cited by 75 (4 self)
The theory of implementation concerns the problem of designing game forms (sometimes called "mechanisms" or "outcome functions") whose equilibria have properties that are desirable according to a specified criterion of social welfare called a social choice rule. A game form, in effect, decentralizes decision-making. The social alternative is selected by the joint actions of all individuals in society rather than by a central planner. Formally, a social choice rule assigns a set of alternatives to each profile of preferences (or other characteristics) that individuals in society might have; the set consists of the "welfare optima" relative to the preference profile. A game form is a rule that specifies an alternative (or outcome) for each configuration of actions that individuals take. A game form implements (technically, fully implements) a social choice rule if, for each possible profile of preferences, the equilibrium outcomes of the game form coincide with the welfare optima of the social choice rule. Of course, the equilibrium set depends on the particular solution concept being used. Implementation theory has considered a variety of solution concepts, including equilibrium in dominant strategies, Bayesian equilibrium, and Nash equilibrium. Other chapters of this volume treat the first two equilibrium concepts. In the main, this article is confined to implementation in Nash equilibrium, although it relates this theory to those of other solution concepts, dominant strategies in particular. Nash equilibrium is the noncooperative solution concept par excellence, and so it is not surprising that implementation theory should have employed it extensively. Nonetheless, one reason often advanced for the desirability ...
Elections Can be Manipulated Often
Abstract

Cited by 66 (1 self)
The Gibbard-Satterthwaite theorem states that every nontrivial voting method between at least 3 alternatives can be strategically manipulated. We prove a quantitative version of the Gibbard-Satterthwaite theorem: a random manipulation by a single random voter will succeed with non-negligible probability for every neutral voting method between 3 alternatives that is far from being a dictatorship.
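The statement "a random manipulation by a single random voter succeeds with non-negligible probability" lends itself to a quick Monte Carlo check. The sketch below estimates that success frequency for plurality over uniformly random profiles; the choice of rule, the alphabetical tie-breaking, and all parameter values are assumptions of this illustration, not the paper's construction.

```python
import random
from itertools import permutations

def plurality_winner(profile, candidates):
    """Plurality: most first-place votes wins; alphabetical tie-break (assumed)."""
    counts = {c: 0 for c in candidates}
    for vote in profile:
        counts[vote[0]] += 1
    return sorted(candidates, key=lambda c: (-counts[c], c))[0]

def manipulation_rate(n_voters, candidates, trials, seed=0):
    """Fraction of trials in which a uniformly random misreport by a
    uniformly random voter yields a winner that voter sincerely prefers."""
    rng = random.Random(seed)
    orders = [list(p) for p in permutations(candidates)]
    hits = 0
    for _ in range(trials):
        profile = [rng.choice(orders) for _ in range(n_voters)]
        i = rng.randrange(n_voters)          # pick a random manipulator
        fake = rng.choice(orders)            # pick a random misreport
        honest = plurality_winner(profile, candidates)
        new = plurality_winner(profile[:i] + [fake] + profile[i + 1:], candidates)
        if profile[i].index(new) < profile[i].index(honest):
            hits += 1
    return hits / trials

print(manipulation_rate(5, ['a', 'b', 'c'], trials=2000))
```

The estimate stays bounded away from zero as the electorate grows, which is the qualitative content of the theorem for rules far from dictatorship.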
Generalized scoring rules and the frequency of coalitional manipulability
In Proceedings of the Ninth ACM Conference on Electronic Commerce (EC), 2008
Abstract

Cited by 66 (20 self)
We introduce a class of voting rules called generalized scoring rules. Under such a rule, each vote generates a vector of k scores, and the outcome of the voting rule is based only on the sum of these vectors; more specifically, only on the order (in terms of score) of the sum's components. This class is extremely general: we do not know of any commonly studied rule that is not a generalized scoring rule. We then study the coalitional manipulation problem for generalized scoring rules. We prove that under certain natural assumptions, if the number of manipulators is O(n^p) (for any p < 1/2), then the probability that a random profile is manipulable is O(n^{p-1/2}), where n is the number of voters. We also prove that under another set of natural assumptions, if the number of manipulators is Ω(n^p) (for any p > 1/2) and o(n), then the probability that a random profile is manipulable (to any possible winner under the voting rule) is 1 − O(e^{−Ω(n^{2p−1})}). We also show that common voting rules satisfy these conditions (for the uniform distribution). These results generalize earlier results by Procaccia and Rosenschein as well as even earlier results on the probability of an election being tied.
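The definition in the first two sentences can be sketched directly: map each vote to a k-vector of scores, sum the vectors, and read the winner off the ordering of the sum's components. The sketch below instantiates k = m for positional rules; the function name and the alphabetical tie-break are assumptions of this illustration.

```python
def gsr_winner(profile, candidates, score_vec):
    """A positional rule cast as a generalized scoring rule: each vote is a
    k-vector (here k = m) whose j-th component is the score the vote gives
    candidate j; the outcome depends only on the order of the summed components."""
    k = len(candidates)
    total = [0] * k
    for vote in profile:
        for pos, c in enumerate(vote):
            total[candidates.index(c)] += score_vec[pos]
    # winner: the candidate whose component comes first in the sum's ordering
    return sorted(candidates, key=lambda c: (-total[candidates.index(c)], c))[0]

profile = [['a', 'b', 'c'], ['b', 'c', 'a']]
print(gsr_winner(profile, ['a', 'b', 'c'], [2, 1, 0]))  # Borda vector: 'b' wins
print(gsr_winner(profile, ['a', 'b', 'c'], [1, 0, 0]))  # plurality vector: a/b tie, 'a' by tie-break
```

Varying only `score_vec` recovers Borda, plurality, veto, and other positional rules, which is one reason the class is as general as the abstract claims.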
Determining possible and necessary winners under common voting rules given partial orders.
In Proceedings of the National Conference on Artificial Intelligence (AAAI), 2008
Abstract

Cited by 63 (11 self)
Usually a voting rule requires agents to give their preferences as linear orders. However, in some cases it is impractical for an agent to give a linear order over all the alternatives. It has been suggested to let agents submit partial orders instead. Then, given a voting rule, a profile of partial orders, and an alternative (candidate) c, two important questions arise: first, is it still possible for c to win, and second, is c guaranteed to win? These are the possible winner and necessary winner problems, respectively. Each of these two problems is further divided into two sub-problems: determining whether c is a unique winner (that is, c is the only winner), or determining whether c is a co-winner (that is, c is in the set of winners). We consider the setting where the number of alternatives is unbounded and the votes are unweighted. We completely characterize the complexity of possible/necessary winner problems for the following common voting rules: a class of positional scoring rules (including Borda), Copeland, maximin, Bucklin, ranked pairs, voting trees, and plurality with runoff.
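A brute-force version of the two problems makes the definitions concrete: enumerate every linear extension of each partial vote and record which candidates can win and which must. This sketch uses a single-winner plurality rule with alphabetical tie-breaking (so it corresponds to the unique-winner variant); the names are illustrative, and the exponential enumeration is only feasible for tiny instances, whereas the paper characterizes the complexity in general.

```python
from itertools import permutations

def plurality(profile, candidates):
    """Plurality with alphabetical tie-break (an assumption of this sketch)."""
    counts = {c: 0 for c in candidates}
    for vote in profile:
        counts[vote[0]] += 1
    return sorted(candidates, key=lambda c: (-counts[c], c))[0]

def linear_extensions(candidates, partial):
    """All linear orders consistent with a partial order given as (better, worse) pairs."""
    for perm in permutations(candidates):
        if all(perm.index(a) < perm.index(b) for a, b in partial):
            yield list(perm)

def possible_and_necessary(partial_profile, candidates, rule):
    """Possible winners: win in some completion. Necessary: win in every completion."""
    possible = set()
    necessary = set(candidates)
    def complete(i, fixed):
        nonlocal necessary
        if i == len(partial_profile):
            w = rule(fixed, candidates)
            possible.add(w)
            necessary &= {w}
            return
        for ext in linear_extensions(candidates, partial_profile[i]):
            complete(i + 1, fixed + [ext])
    complete(0, [])
    return possible, necessary

# Voter 1 is known to rank a above b and above c; voter 2 is unconstrained.
p, n = possible_and_necessary([[('a', 'b'), ('a', 'c')], []], ['a', 'b', 'c'], plurality)
print(p, n)  # a is the only possible winner, hence also the necessary winner
```

With an unconstrained single voter, every candidate is a possible winner and none is necessary, matching the intuition behind the two definitions.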
Llull and Copeland voting computationally resist bribery and control
2009
Abstract

Cited by 63 (30 self)
Control and bribery are settings in which an external agent seeks to influence the outcome of an election. Constructive control of elections refers to attempts by an agent to, via such actions as addition/deletion/partition of candidates or voters, ensure that a given candidate wins. Destructive control refers to attempts by an agent to, via the same actions, preclude a given candidate's victory. An election system in which an agent can sometimes affect the result, and in which it can be determined in polynomial time on which inputs the agent can succeed, is said to be vulnerable to the given type of control. An election system in which an agent can sometimes affect the result, yet in which it is NP-hard to recognize the inputs on which the agent can succeed, is said to be resistant to the given type of control. Aside from election systems with an NP-hard winner problem, the only systems previously known to be resistant to all the standard control types were highly artificial election systems created by hybridization. This paper studies a parameterized version of Copeland voting, denoted by Copeland^α, where the parameter α is a rational number between 0 and 1 that specifies how ties are valued in the pairwise comparisons of candidates. In every previously studied constructive or destructive ...
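The Copeland^α family described in the abstract is easy to state operationally: a candidate earns one point per pairwise majority victory and α points per pairwise tie. The sketch below is a minimal illustrative implementation; the alphabetical tie-break among top scorers is an added assumption, not how the paper resolves ties.

```python
from fractions import Fraction

def copeland_alpha_winner(profile, candidates, alpha):
    """Copeland^α: 1 point per pairwise majority win, α per pairwise tie
    (0 ≤ α ≤ 1); highest total wins, alphabetical tie-break assumed."""
    score = {c: Fraction(0) for c in candidates}
    for x in candidates:
        for y in candidates:
            if x == y:
                continue
            wins = sum(1 for v in profile if v.index(x) < v.index(y))
            losses = len(profile) - wins
            if wins > losses:
                score[x] += 1            # x beats y in the pairwise majority
            elif wins == losses:
                score[x] += Fraction(alpha)  # pairwise tie is worth α
    return sorted(candidates, key=lambda c: (-score[c], c))[0]

# b beats both a and c pairwise (a Condorcet winner), so b wins for every α.
profile = [['b', 'a', 'c'], ['b', 'c', 'a'], ['a', 'b', 'c']]
print(copeland_alpha_winner(profile, ['a', 'b', 'c'], Fraction(1, 2)))  # 'b'
```

α = 0 and α = 1 recover the two classical conventions for ties; the paper's point is how the control and bribery complexity varies across this whole family.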
Dichotomy for voting systems
 Journal of Computer and System Sciences
Abstract

Cited by 62 (18 self)
Scoring protocols are a broad class of voting systems. Each is defined by a vector (α_1, α_2, ..., α_m), α_1 ≥ α_2 ≥ ... ≥ α_m, of integers such that each voter contributes α_1 points to his/her first choice, α_2 points to his/her second choice, and so on, and any candidate receiving the most points is a winner. What is it about scoring-protocol election systems that makes some have the desirable property of being NP-complete to manipulate, while others can be manipulated in polynomial time? We find the complete, dichotomizing answer: diversity of dislike. Every scoring-protocol election system having two or more point values assigned to candidates other than the favorite, i.e., having |{α_i : 2 ≤ i ≤ m}| ≥ 2, is NP-complete to manipulate. Every other scoring-protocol election system can be manipulated in polynomial time. In effect, we show that, other than trivial systems (where all candidates always tie), plurality voting, and plurality voting's transparently disguised translations, every scoring-protocol election system is NP-complete to manipulate.
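The dichotomy is a condition on the score vector alone: manipulation is NP-complete exactly when the entries after α_1 take at least two distinct values. The sketch below pairs a scoring-protocol winner with that "diversity of dislike" check; the function names are this illustration's, and the check merely encodes the abstract's stated condition rather than proving anything about hardness.

```python
def scoring_winner(profile, alphas, candidates):
    """Scoring protocol (α_1, ..., α_m): position i of a vote is worth
    alphas[i] points; highest total wins (alphabetical tie-break assumed)."""
    scores = {c: 0 for c in candidates}
    for vote in profile:
        for i, c in enumerate(vote):
            scores[c] += alphas[i]
    return sorted(candidates, key=lambda c: (-scores[c], c))[0]

def diverse_dislike(alphas):
    """The abstract's dichotomy condition: |{α_i : 2 ≤ i ≤ m}| ≥ 2,
    i.e. the non-top entries take at least two distinct values."""
    return len(set(alphas[1:])) >= 2

print(diverse_dislike([2, 1, 0]))  # True: Borda falls on the NP-complete side
print(diverse_dislike([1, 0, 0]))  # False: plurality is manipulable in polynomial time
```

For example, the Borda vector (2, 1, 0) assigns two distinct values (1 and 0) to the non-favorite positions, while plurality's (1, 0, 0) assigns only one.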
Compilation complexity of common voting rules
2010
Abstract

Cited by 58 (13 self)
In computational social choice, one important problem is to take the votes of a subelectorate (subset of the voters) and summarize them using a small number of bits. This needs to be done in such a way that, if all that we know is the summary, as well as the votes of voters outside the subelectorate, we can conclude which of the m alternatives wins. This corresponds to the notion of compilation complexity, the minimum number of bits required to summarize the votes for a particular rule, which was introduced by Chevaleyre et al. [IJCAI-09]. We study three different types of compilation complexity. The first, studied by Chevaleyre et al., depends on the size of the subelectorate but not on the size of the complement (the voters outside the subelectorate). The second depends on the size of the complement but not on the size of the subelectorate. The third depends on both. We first investigate the relations among the three types of compilation complexity. Then, we give upper and lower bounds on all three types of compilation complexity for the most prominent voting rules. We show that for l-approval (when l ≤ m/2), Borda, and Bucklin, the bounds for all three types are asymptotically tight, up to a multiplicative constant; for l-approval (when l > m/2), plurality with runoff, all Condorcet-consistent rules that are based on unweighted majority graphs (including Copeland and voting trees), and all Condorcet-consistent rules that are based on the order of pairwise elections (including ranked pairs and maximin), the bounds for all three types are asymptotically tight up to a multiplicative constant when the sizes of the subelectorate and its complement are both larger than m^{1+ε} for some ε > 0.
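For plurality the compilation idea is especially simple: the vector of first-place counts is a sufficient summary of a subelectorate, so roughly m·⌈log2(n_s + 1)⌉ bits suffice for a subelectorate of size n_s. The sketch below illustrates this; the function names and the alphabetical tie-break are assumptions of the illustration, not notation from the paper.

```python
def compile_plurality(sub_profile, candidates):
    """Summary of a subelectorate under plurality: per-candidate first-place
    counts. Together with the outside votes this determines the winner."""
    counts = {c: 0 for c in candidates}
    for vote in sub_profile:
        counts[vote[0]] += 1
    return counts

def winner_from_summary(summary, rest_profile, candidates):
    """Recover the plurality winner from the summary plus the outside votes
    alone, never looking back at the subelectorate's individual ballots."""
    counts = dict(summary)
    for vote in rest_profile:
        counts[vote[0]] += 1
    return sorted(candidates, key=lambda c: (-counts[c], c))[0]

sub = [['a', 'b', 'c'], ['b', 'a', 'c']]   # the subelectorate's ballots
rest = [['a', 'c', 'b']]                   # votes outside the subelectorate
summary = compile_plurality(sub, ['a', 'b', 'c'])
print(summary)                                              # {'a': 1, 'b': 1, 'c': 0}
print(winner_from_summary(summary, rest, ['a', 'b', 'c']))  # 'a'
```

Rules such as ranked pairs or maximin need far richer summaries (e.g. pairwise-comparison information), which is why compilation complexity separates voting rules in the way the abstract describes.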