Results 1-10 of 21
On the tradeoff between economic efficiency and strategyproofness in randomized social choice
 In Proceedings of the 12th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS). IFAAMAS, 2013. Forthcoming
, 2013
Abstract

Cited by 18 (13 self)
Two fundamental notions in microeconomic theory are efficiency—no agent can be made better off without making another one worse off—and strategyproofness—no agent can obtain a more preferred outcome by misrepresenting his preferences. When social outcomes are probability distributions (or lotteries) over alternatives, there are varying degrees of these notions depending on how preferences over alternatives are extended to preferences over lotteries. We show that efficiency and strategyproofness are incompatible to some extent when preferences are defined using stochastic dominance (SD) and therefore introduce a natural weakening of SD based on Savage's sure-thing principle (ST). While random serial dictatorship is SD-strategyproof, it only satisfies ST-efficiency. Our main result is that strict maximal lotteries—an appealing class of social decision schemes due to Kreweras and Fishburn—satisfy SD-efficiency and ST-strategyproofness.
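The SD preference extension mentioned in this abstract has a simple operational form: a lottery p weakly SD-dominates a lottery q for an agent exactly when p assigns at least as much total probability as q to every upper contour set of the agent's ranking. A minimal sketch (all names here are illustrative, not taken from the paper):

```python
# Sketch of the stochastic-dominance (SD) extension of strict preferences
# over alternatives to preferences over lotteries. Illustrative only.

def sd_weakly_prefers(ranking, p, q):
    """True iff lottery p stochastically dominates lottery q for an agent
    whose strict preferences are given by `ranking` (most preferred first).
    p and q are dicts mapping alternatives to probabilities."""
    cumulative_p = cumulative_q = 0.0
    for alternative in ranking:
        cumulative_p += p.get(alternative, 0.0)
        cumulative_q += q.get(alternative, 0.0)
        # p must put at least as much probability on every upper contour set
        if cumulative_p < cumulative_q - 1e-9:
            return False
    return True

# An agent with ranking a > b > c prefers a sure win of a to a coin flip
# between a and c, but a coin flip between a and c is incomparable to b:
ranking = ["a", "b", "c"]
print(sd_weakly_prefers(ranking, {"a": 1.0}, {"a": 0.5, "c": 0.5}))  # True
print(sd_weakly_prefers(ranking, {"a": 0.5, "c": 0.5}, {"b": 1.0}))  # False
```

Because SD only yields a partial order over lotteries, notions such as SD-efficiency and SD-strategyproofness come in the varying degrees the abstract refers to.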
Approximately Strategy-Proof Voting
 In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence
, 2011
Abstract

Cited by 11 (1 self)
The classic Gibbard-Satterthwaite theorem establishes that only dictatorial voting rules are strategyproof; under any other voting rule, players have an incentive to lie about their true preferences. We consider a new approach for circumventing this result: randomized voting rules that only approximate a deterministic voting rule and are only approximately strategyproof. We show that any deterministic voting rule can be approximated by an approximately strategyproof randomized voting rule, and we provide asymptotically tight lower bounds on the parameters required by such voting rules.
How Bad is Selfish Voting?
Abstract

Cited by 7 (2 self)
It is well known that strategic behavior in elections is essentially unavoidable; we therefore ask: how bad can the rational outcome be? We answer this question via the notion of the price of anarchy, using the scores of alternatives as a proxy for their quality and bounding the ratio between the score of the optimal alternative and the score of the winning alternative in Nash equilibrium. Specifically, we are interested in Nash equilibria that are obtained via sequences of rational strategic moves. Focusing on three common voting rules — plurality, veto, and Borda — we provide very positive results for plurality and very negative results for Borda, and place veto in the middle of this spectrum.
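The ratio this abstract describes is straightforward to compute for a concrete election. The profiles below are made up purely for demonstration; they are not data from the paper:

```python
# Illustrative computation of the price-of-anarchy-style ratio described
# above: the plurality score of the truthful optimum divided by the
# (truthful) score of the equilibrium winner. Profiles are hypothetical.

from collections import Counter

def plurality_scores(ballots):
    """ballots: list of the single alternative each voter casts a vote for."""
    return Counter(ballots)

truthful = ["a", "a", "a", "b", "b", "c", "c"]      # each voter's true top choice
equilibrium = ["b", "b", "b", "b", "b", "c", "c"]   # some strategic (equilibrium) profile

true_scores = plurality_scores(truthful)
optimal_score = max(true_scores.values())            # best alternative's true score
eq_winner = plurality_scores(equilibrium).most_common(1)[0][0]
ratio = optimal_score / true_scores[eq_winner]       # quality loss from strategic play

print(eq_winner, ratio)  # b 1.5
```

Here strategic voting elects b, whose true plurality score is 2 against the optimum of 3, giving a ratio of 1.5; the paper's results bound how large this ratio can grow under each voting rule.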
Empirical Analysis of Plurality Election Equilibria
, 2013
Abstract

Cited by 6 (1 self)
Voting is widely used to aggregate the different preferences of agents, even though these agents are often able to manipulate the outcome through strategic voting. Most research on manipulation of voting methods studies (1) limited solution concepts, (2) limited preferences, or (3) scenarios with a few manipulators that have a common goal. In contrast, we study voting in plurality elections through the lens of Nash equilibrium, which allows for the possibility that any number of agents, with arbitrary different goals, could all be manipulators. This is possible thanks to recent advances in (Bayes-)Nash equilibrium computation for large games. Although plurality has numerous pure-strategy Nash equilibria, we demonstrate how a simple equilibrium refinement—assuming that agents only deviate from truthfulness when it will change the outcome—dramatically reduces this set. We also use symmetric Bayes-Nash equilibria to investigate the case where voters are uncertain of each other's preferences. This refinement does not completely eliminate the problem of multiple equilibria. However, it does show that even when agents manipulate, plurality still tends to lead to good outcomes (e.g., Condorcet winners, candidates that would win if voters were truthful, outcomes with high social welfare).
On the Incompatibility of Efficiency and Strategyproofness in Randomized Social Choice
Abstract

Cited by 5 (5 self)
Efficiency—no agent can be made better off without making another one worse off—and strategyproofness—no agent can obtain a more preferred outcome by misrepresenting his preferences—are two cornerstones of economics and ubiquitous in important areas such as voting, auctions, or matching markets. Within the context of random assignment, Bogomolnaia and Moulin have shown that two particular notions of efficiency and strategyproofness based on stochastic dominance are incompatible. However, there are various other ways of lifting preferences over alternatives to preferences over lotteries apart from stochastic dominance. In this paper, we give an overview of common preference extensions, propose two new ones, and show that the above-mentioned incompatibility extends to various other notions of strategyproofness and efficiency in randomized social choice.
Approximating common voting rules by sequential voting in multi-issue domains. http://people.seas.harvard.edu/~lxia/Files/approx10.pdf
, 2012
Abstract

Cited by 4 (2 self)
When agents need to make decisions on multiple issues, one solution is to vote on the issues sequentially. In this paper, we investigate how well the winner under the sequential voting process approximates the winners under some common voting rules. Some common voting rules, including Borda, k-approval, Copeland, maximin, Bucklin, and Dodgson, admit natural scoring functions that can serve as a basis for approximation results. We focus on multi-issue domains where each issue is binary and the agents' preferences are O-legal, separable, represented by LP-trees, or lexicographic. Our results show significant improvements in the approximation ratios when the preferences are represented by LP-trees, compared to the approximation ratios when the preferences are O-legal. However, assuming that the preferences are separable (respectively, lexicographic) does not significantly improve the approximation ratios compared to the case where the preferences are O-legal (respectively, are represented by LP-trees).
Incentives for Participation and Abstention in Probabilistic Social Choice
Abstract

Cited by 4 (3 self)
Voting rules are powerful tools that allow multiple agents to aggregate their preferences in order to reach joint decisions. A common flaw of some voting rules, known as the no-show paradox, is that agents may obtain a more preferred outcome by abstaining from an election. Whenever a rule does not suffer from this paradox, it is said to satisfy participation. In this paper, we initiate the study of participation in probabilistic social choice, i.e., for voting rules that yield probability distributions over alternatives. We consider three degrees of participation based on expected utility, the strongest of which even requires that an agent be strictly better off by participating in an election. While the latter condition is prohibitive in non-probabilistic social choice, we show that it can be met by reasonable probabilistic functions. More generally, we study to what extent participation and Pareto efficiency are compatible. To the best of our knowledge, this is the first work in this direction.
A Dynamic Rationalization of Distance Rationalizability
Abstract

Cited by 3 (3 self)
Distance rationalizability is an intuitive paradigm for developing and studying voting rules: given a notion of consensus and a distance function on preference profiles, a rationalizable voting rule selects an alternative that is closest to being a consensus winner. Despite its appeal, distance rationalizability faces the challenge of connecting the chosen distance measure and consensus notion to an operational measure of social desirability. We tackle this issue via the decision-theoretic framework of dynamic social choice, in which a social choice Markov decision process (MDP) models the dynamics of voter preferences in response to winner selection. We show that, for a prominent class of distance functions, one can construct a social choice MDP, with natural preference dynamics and rewards, such that a voting rule is (votewise) rationalizable with respect to the unanimity consensus for a given distance function iff it is a (deterministic) optimal policy in the MDP. This provides an alternative rationale for distance rationalizability, demonstrating the equivalence of rationalizable voting rules in a static sense and winner selection to maximize societal utility in a dynamic process.
Strategyproof Approximations of Distance Rationalizable Voting Rules
, 2012
Abstract

Cited by 3 (0 self)
This paper considers randomized strategyproof approximations to distance rationalizable voting rules. It is shown that the Random Dictator voting rule (return the top choice of a random voter) nontrivially approximates a large class of distances with respect to unanimity. Any randomized voting rule that deviates too greatly from the Random Dictator voting rule is shown to obtain only a trivial approximation (i.e., equivalent to ignoring the voters' votes and selecting an alternative uniformly at random). The outlook for consensus classes other than unanimity is bleaker: this paper shows that for a large class of distance rationalizations with respect to the majority and Condorcet consensus classes, no strategyproof randomized rule can asymptotically outperform uniform random selection of an alternative. This paper also shows that veto cannot be approximated nontrivially when approximations are measured with respect to minimizing the number of vetoes an alternative receives.
Maximal Recursive Rule: A New Social Decision Scheme
 In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence
, 2013
Abstract

Cited by 3 (2 self)
In social choice settings with strict preferences, random dictatorship rules were characterized by Gibbard [1977] as the only randomized social choice functions that satisfy strategyproofness and ex post efficiency. In the more general domain with indifferences, RSD (random serial dictatorship) rules are the well-known and perhaps only known generalization of random dictatorship. We present a new generalization of random dictatorship for indifferences, called the Maximal Recursive (MR) rule, as an alternative to RSD. We show that MR is polynomial-time computable, weakly strategyproof with respect to stochastic dominance, and, in some respects, outperforms RSD on efficiency.
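The RSD rule that this abstract contrasts MR against works by drawing a random order over agents and letting each agent in turn restrict the feasible set to their most preferred alternatives among those still feasible. A minimal sketch of RSD over weak preferences (function names and the preference representation are assumptions for illustration, not taken from the paper):

```python
# A minimal sketch of RSD (random serial dictatorship) over weak
# preferences, the rule MR is compared against above. Illustrative only.

import random

def rsd_winner(alternatives, weak_prefs, rng=random):
    """weak_prefs: one list of indifference classes per agent,
    most preferred class first, e.g. [["b", "c"], ["a"]]."""
    feasible = set(alternatives)
    order = list(range(len(weak_prefs)))
    rng.shuffle(order)                       # draw a random dictator order
    for agent in order:
        for indifference_class in weak_prefs[agent]:
            best = feasible & set(indifference_class)
            if best:                         # restrict to the agent's best feasible class
                feasible = best
                break
    return rng.choice(sorted(feasible))      # break any remaining ties uniformly

prefs = [[["a"], ["b"], ["c"]],              # agent 0: a > b > c
         [["b", "c"], ["a"]]]                # agent 1: b ~ c > a
print(rsd_winner(["a", "b", "c"], prefs, random.Random(0)))
```

In this example the outcome depends on the drawn order: if agent 0 dictates first the winner is a, and if agent 1 dictates first the winner is b, so RSD returns a lottery over {a, b}; MR is proposed as a different way of handling such indifferences.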