Results 1–10 of 32
On approximating optimal weighted lobbying, and frequency of correctness versus average-case polynomial time, 2007
Cited by 17 (7 self)
Abstract. We investigate issues regarding two hard problems related to voting, the optimal weighted lobbying problem and the winner problem for Dodgson elections. Regarding the former, Christian et al. [2] showed that optimal lobbying is intractable in the sense of parameterized complexity. We provide an efficient greedy algorithm that achieves a logarithmic approximation ratio for this problem and even for a more general variant—optimal weighted lobbying. We prove that essentially no better approximation ratio than ours can be proven for this greedy algorithm. The problem of determining Dodgson winners is known to be complete for parallel access to NP [11]. Homan and Hemaspaandra [10] proposed an efficient greedy heuristic for finding Dodgson winners with a guaranteed frequency of success, and their heuristic is a “frequently self-knowingly correct algorithm.” We prove that every distributional problem solvable in polynomial time on the average with respect to the uniform distribution has a frequently self-knowingly correct polynomial-time algorithm. Furthermore, we study some features of probability weight of correctness with respect to Procaccia and Rosenschein’s junta distributions [15].
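The logarithmic approximation ratio mentioned in this abstract is the classic guarantee of greedy covering. A minimal sketch of that style of greedy algorithm (unweighted, with illustrative names; the paper's actual algorithm handles the weighted lobbying setting):

```python
def greedy_cover(universe, subsets):
    """Repeatedly pick the subset covering the most still-uncovered elements.

    The classic analysis gives an H(n) ~ ln(n) approximation ratio on the
    number of sets chosen, the flavour of guarantee the abstract describes.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Greedy choice: the set with the largest intersection with the
        # uncovered elements.
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe is not coverable by the given subsets")
        chosen.append(best)
        uncovered -= best
    return chosen
```

For a weighted variant one would instead pick, in each round, the set minimizing weight per newly covered element; the logarithmic guarantee carries over.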
Typical-Case Challenges to Complexity Shields That Are Supposed to Protect Elections Against Manipulation and Control: A Survey, 2012
Cited by 13 (2 self)
In the context of voting, manipulation and control refer to attempts to influence the outcome of elections by either setting some of the votes strategically (i.e., by reporting untruthful preferences) or by altering the structure of elections via adding, deleting, or partitioning either candidates or voters. Since by the celebrated Gibbard–Satterthwaite theorem (and other results expanding its scope) all reasonable voting systems are manipulable in principle, and since many voting systems are in principle susceptible to many control types modeling natural control scenarios, much work has been done to use computational complexity as a shield to protect elections against manipulation and control. However, most of this work has yielded NP-hardness results, showing that certain voting systems resist certain types of manipulation or control only in the worst case. The typical case, where votes are given according to some natural distribution, poses a serious challenge to such worst-case complexity results and is frequently open to successful manipulation or control attempts, despite the NP-hardness of the corresponding problems. We survey some recent results on typical-case challenges to worst-case complexity results for manipulation and control.
Computational Tractability: The View From Mars
Bulletin of the European Association for Theoretical Computer Science
Cited by 11 (1 self)
We describe a point of view about the parameterized computational complexity framework in the broad context of one of the central issues of theoretical computer science as a field: the problem of systematically coping with computational intractability. Those already familiar with the basic ideas of parameterized complexity will nevertheless find here something new: the emerging systematic connections between fixed-parameter tractability techniques and the design of useful heuristic algorithms, and also perhaps the philosophical maturation of the parameterized complexity program.
An Efficient Local Search Method for Random 3-Satisfiability, 2003
Cited by 10 (4 self)
We report on some exceptionally good results in the solution of randomly generated 3-satisfiability instances using the "record-to-record travel (RRT)" local search method. When this simple, but less-studied, algorithm is applied to random one-million-variable instances from the problem's satisfiable phase, it seems to find satisfying truth assignments almost always in linear time, with the coefficient of linearity depending on the ratio α of clauses to variables in the generated instances. RRT has a parameter for tuning "greediness". By lessening greediness, the linear-time phase can be extended up to very close to the satisfiability threshold α_c. Such linear time complexity is typical for random-walk based local search methods for small values of α. Previously, however, it has been suspected that these methods necessarily lose their time linearity far below the satisfiability threshold. The only previously introduced algorithm reported to have nearly linear time complexity also close to the satisfiability threshold is the survey propagation (SP) algorithm. However, SP is not a local search method and is more complicated to implement than RRT. Comparative experiments with the WalkSAT local search algorithm show behavior somewhat similar to RRT, but with the linear-time phase not extending quite as close to the satisfiability threshold.
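The acceptance rule the abstract describes can be sketched directly: RRT accepts a move only if the resulting cost stays within a fixed deviation of the best ("record") cost seen so far. The following is a minimal sketch from that description only; names, the flip strategy, and parameter defaults are illustrative, not taken from the paper:

```python
import random

def rrt_sat(clauses, n_vars, deviation=2, max_flips=100_000, seed=0):
    """Record-to-record travel (RRT) local search for SAT.

    `deviation` tunes greediness: a variable flip is accepted only if the
    resulting number of unsatisfied clauses stays within `deviation` of the
    record (best value seen so far).  Clauses are lists of nonzero ints;
    literal k means variable |k|, negated if k < 0.
    """
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars)]

    def unsat(a):
        # Number of clauses with no satisfied literal.
        return sum(not any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

    record = unsat(assign)
    if record == 0:
        return assign
    for _ in range(max_flips):
        v = rng.randrange(n_vars)
        assign[v] = not assign[v]          # tentative flip
        cost = unsat(assign)
        if cost == 0:
            return assign                  # satisfying assignment found
        if cost <= record + deviation:     # accept: close enough to the record
            record = min(record, cost)
        else:
            assign[v] = not assign[v]      # reject: undo the flip
    return None                            # gave up
```

An efficient implementation would maintain per-variable unsatisfied-clause counts instead of recomputing the cost after every flip; this sketch only illustrates the acceptance criterion.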
Complete distributional problems, hard languages, and resource-bounded measure
Theoretical Computer Science, 2000
Cited by 7 (2 self)
We say that a distribution µ is reasonable if there exists a constant s ≥ 0 such that µ({x : |x| ≥ n}) = Ω(1/n^s). We prove the following result, which suggests that all DistNP-complete problems have reasonable distributions: if NP contains a DTIME(2^n)-bi-immune set, then every DistNP-complete set has a reasonable distribution. It follows from work of Mayordomo [May94] that the consequent holds if the p-measure of NP is not zero. Cai and Selman [CS96] defined a modification and extension of Levin's notion of average polynomial time to arbitrary time bounds and proved that if L is P-bi-immune, then L is distributionally hard, meaning that for every polynomial-time computable distribution µ, the distributional problem (L, µ) is not polynomial on the µ-average. We prove the following results, which suggest that distributional hardness is closely related to more traditional notions of hardness. 1. If NP contains a distributionally hard set, then NP contains a P-immune set. 2. There exists a language L that is distributionally hard but not P-bi-immune if and only if P contains a set that is immune to all P-printable sets. The following corollaries follow readily. 1. If the p-measure of NP is not zero, then there exists a language L that is distributionally hard but not P-bi-immune. 2. If the p_2-measure of NP is not zero, then there exists a language L in NP that is distributionally hard but not P-bi-immune.
Truth-table closure and Turing closure of average polynomial time have different measures in EXP
In Proceedings of the Eleventh Annual IEEE Conference on Computational Complexity, 1996
Cited by 5 (2 self)
Let P_P-comp denote the sets that are solvable in polynomial time on average under every polynomial-time computable distribution on the instances. In this paper we show that the truth-table closure of P_P-comp has measure 0 in EXP. Since, as we show, EXP is Turing reducible to P_P-comp, the Turing closure has measure 1 in EXP and thus P_P-comp is an example of a subclass of E such that the closure under truth-table reduction and the closure under Turing reduction have different measures in EXP. Furthermore, it is shown that there exists a set A in P_P-comp such that for every k, the class of sets L such that A is k-truth-table reducible to L has measure 0 in EXP. 1 Introduction. A randomized problem (or distributional problem) is a pair consisting of a decision problem and a density function. A randomized decision problem (A, µ) is solvable in average polynomial time ((A, µ) is in AP) if there exists a deterministic Turing machine M such that A = L(M) and Time_M, the running time of M ...
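For context, the "polynomial on the µ-average" notion used throughout these entries goes back to Levin. One standard formulation (a sketch; writing µ'(x) for the probability weight the density assigns to x, and t_M for the machine's running time):

```latex
% (A,\mu) \in \mathrm{AP}: some machine M decides A and, for some
% \varepsilon > 0, its running time t_M satisfies
\sum_{x} \mu'(x) \, \frac{t_M(x)^{\varepsilon}}{|x|} \;<\; \infty .
```

The Cai–Selman modification mentioned in the previous entry generalizes this condition from polynomial bounds to arbitrary time bounds.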
Using Depth to Capture Average-Case Complexity, 2003
Cited by 4 (4 self)
We give the first characterization of Turing machines that run in polynomial time on average. We show that a Turing machine M runs in average polynomial time if for all inputs x the Turing machine uses time exponential in the computational depth of x, where the computational depth is a measure of the amount of "useful" information in x.
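One common way to make "time exponential in the computational depth of x" precise uses time-bounded Kolmogorov complexity; the following is a rough sketch of that shape of statement, not the paper's exact theorem (lower-order terms and the choice of time bound t may differ):

```latex
% Computational depth: time-bounded complexity minus plain complexity.
\operatorname{depth}^{t}(x) \;=\; K^{t}(x) - K(x),
\qquad
t_M(x) \;\le\; 2^{\,O\!\left(\operatorname{depth}^{t}(x) + \log |x|\right)} .
```

Intuitively, inputs that force the machine to run long must carry a lot of "useful" (deep) information, and such inputs are rare under natural distributions, which is what ties the bound to average-case running time.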
Efficient kernels for sentence pair classification
Cited by 4 (1 self)
In this paper, we propose a novel class of graphs, the tripartite directed acyclic graphs (tDAGs), to model first-order rule feature spaces for sentence pair classification. We introduce a novel algorithm for computing the similarity in first-order rewrite rule feature spaces. Our algorithm is extremely efficient and, as it computes the similarity of instances that can be represented in explicit feature spaces, it is a valid kernel function.
Complexity Classes, 1998
"... This material was written for Chapter 27 of the CRC Handbook of Algorithms and Theory of Computation, edited by Mikhail Atallah. ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
This material was written for Chapter 27 of the CRC Handbook of Algorithms and Theory of Computation, edited by Mikhail Atallah.