Results 1 - 10 of 32
On approximating optimal weighted lobbying, and frequency of correctness versus average-case polynomial time
, 2007
"... Abstract. We investigate issues regarding two hard problems related to voting, the optimal weighted lobbying problem and the winner prob-lem for Dodgson elections. Regarding the former, Christian et al. [2] showed that optimal lobbying is intractable in the sense of parameter-ized complexity. We pro ..."
Abstract
-
Cited by 17 (7 self)
- Add to MetaCart
(Show Context)
Abstract. We investigate issues regarding two hard problems related to voting, the optimal weighted lobbying problem and the winner problem for Dodgson elections. Regarding the former, Christian et al. [2] showed that optimal lobbying is intractable in the sense of parameterized complexity. We provide an efficient greedy algorithm that achieves a logarithmic approximation ratio for this problem and even for a more general variant—optimal weighted lobbying. We prove that essentially no better approximation ratio than ours can be proven for this greedy algorithm. The problem of determining Dodgson winners is known to be complete for parallel access to NP [11]. Homan and Hemaspaandra [10] proposed an efficient greedy heuristic for finding Dodgson winners with a guaranteed frequency of success, and their heuristic is a “frequently self-knowingly correct algorithm.” We prove that every distributional problem solvable in polynomial time on the average with respect to the uniform distribution has a frequently self-knowingly correct polynomial-time algorithm. Furthermore, we study some features of probability weight of correctness with respect to Procaccia and Rosenschein’s junta distributions [15].
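To make the greedy idea concrete, here is a minimal sketch of a cost-effectiveness greedy for a lobbying-style covering problem: repeatedly bribe the voter whose flipped votes close the largest remaining deficit per unit of weight. The encoding (per-referendum deficits, a helps map from voters to referenda, per-voter weights) and all identifiers are illustrative assumptions, not the paper's exact model or guarantee.

    # Hedged sketch: greedy selection by coverage per unit weight for a
    # lobbying-style covering problem.  The encoding below is an assumption
    # made for illustration, not the paper's exact formulation.
    def greedy_weighted_lobbying(deficits, helps, weights):
        # deficits: referendum -> number of extra agreeing voters still needed
        # helps:    voter -> set of referenda on which bribing that voter gains one vote
        # weights:  voter -> positive bribery cost
        # Returns a set of chosen voters, or None if the deficits cannot be covered.
        deficits = dict(deficits)
        remaining = set(helps)
        chosen = set()
        while any(d > 0 for d in deficits.values()):
            best, best_ratio = None, 0.0
            for v in remaining:
                gain = sum(1 for r in helps[v] if deficits.get(r, 0) > 0)
                ratio = gain / weights[v]
                if ratio > best_ratio:
                    best, best_ratio = v, ratio
            if best is None:           # no remaining voter reduces any deficit
                return None
            chosen.add(best)
            remaining.discard(best)
            for r in helps[best]:
                if deficits.get(r, 0) > 0:
                    deficits[r] -= 1
        return chosen

This is the classic greedy template for weighted covering problems, whose standard analysis gives a logarithmic approximation ratio; a matching lower bound on what the greedy algorithm itself can achieve is the natural accompanying result.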
Typical-Case Challenges to Complexity Shields That Are Supposed to Protect Elections Against Manipulation and Control: A Survey
, 2012
"... In the context of voting, manipulation and control refer to attempts to influence the outcome of elections by either setting some of the votes strategically (i.e., by reporting untruthful preferences) or by altering the structure of elections via adding, deleting, or partitioning either candidates o ..."
Abstract
-
Cited by 13 (2 self)
- Add to MetaCart
In the context of voting, manipulation and control refer to attempts to influence the outcome of elections by either setting some of the votes strategically (i.e., by reporting untruthful preferences) or by altering the structure of elections via adding, deleting, or partitioning either candidates or voters. Since by the celebrated Gibbard–Satterthwaite theorem (and other results expanding its scope) all reasonable voting systems are manipulable in principle and since many voting systems are in principle susceptible to many control types modeling natural control scenarios, much work has been done to use computational complexity as a shield to protect elections against manipulation and control. However, most of this work has yielded NP-hardness results, showing that certain voting systems resist certain types of manipulation or control only in the worst case. The typical case, where votes are given according to some natural distribution, poses a serious challenge to such worst-case complexity results and is frequently open to successful manipulation or control attempts, despite the NP-hardness of the corresponding problems. We survey some recent results on typical-case challenges to worst-case complexity results for manipulation and control.
Computational Tractability: The View From Mars
- BULLETIN OF THE EUROPEAN ASSOCIATION OF THEORETICAL COMPUTER SCIENCE
"... We describe a point of view about the parameterized computational complexity framework in the broad context of one of the central issues of theoretical computer science as a field: the problem of systematically coping with computational intractability. Those already familiar with the basic ideas of ..."
Abstract
-
Cited by 11 (1 self)
- Add to MetaCart
We describe a point of view about the parameterized computational complexity framework in the broad context of one of the central issues of theoretical computer science as a field: the problem of systematically coping with computational intractability. Those already familiar with the basic ideas of parameterized complexity will nevertheless find here something new: the emerging systematic connections between fixed-parameter tractability techniques and the design of useful heuristic algorithms, and also perhaps the philosophical maturation of the parameterized complexity program.
An Efficient Local Search Method for Random 3-Satisfiability
, 2003
"... We report on some exceptionally good results in the solution of randomly generated 3-satisfiability instances using the "record-to-record travel (RRT)" local search method. When this simple, but less-studied algorithm is applied to random onemillion variable instances from the problem&apos ..."
Abstract
-
Cited by 10 (4 self)
- Add to MetaCart
We report on some exceptionally good results in the solution of randomly generated 3-satisfiability instances using the "record-to-record travel (RRT)" local search method. When this simple but less-studied algorithm is applied to random one-million-variable instances from the problem's satisfiable phase, it seems to find satisfying truth assignments almost always in linear time, with the coefficient of linearity depending on the ratio α of clauses to variables in the generated instances. RRT has a parameter for tuning "greediness". By lessening greediness, the linear time phase can be extended very close to the satisfiability threshold α_c. Such linear time complexity is typical for random-walk based local search methods for small values of α. Previously, however, it has been suspected that these methods necessarily lose their time linearity far below the satisfiability threshold. The only previously introduced algorithm reported to have nearly linear time complexity also close to the satisfiability threshold is the survey propagation (SP) algorithm. However, SP is not a local search method and is more complicated to implement than RRT. Comparative experiments with the WalkSAT local search algorithm show behavior somewhat similar to RRT, but with the linear time phase not extending quite as close to the satisfiability threshold.
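As a rough illustration of record-to-record travel on CNF formulas, the sketch below keeps the best (record) number of unsatisfied clauses seen so far and accepts any flip that leaves at most record + d clauses unsatisfied, where d is the greediness parameter. The move rule used here (flip a random variable of a random unsatisfied clause) and the bookkeeping are assumptions for illustration; the paper's exact procedure and parameter settings may differ.

    import random

    def rrt_3sat(clauses, n_vars, deviation=2, max_flips=10**6, seed=0):
        # clauses: list of tuples of non-zero ints; literal v means variable v is
        # true, -v means it is false.  'deviation' is the RRT parameter d.
        rng = random.Random(seed)
        assign = [False] + [rng.random() < 0.5 for _ in range(n_vars)]  # 1-indexed

        def sat(lit):
            return assign[lit] if lit > 0 else not assign[-lit]

        def unsat_clauses():
            # O(m) rescan per flip; a serious implementation updates incrementally.
            return [c for c in clauses if not any(sat(l) for l in c)]

        unsat = unsat_clauses()
        record = len(unsat)
        for _ in range(max_flips):
            if not unsat:
                break
            var = abs(rng.choice(rng.choice(unsat)))   # variable from a random unsatisfied clause
            assign[var] = not assign[var]              # tentative flip
            new_unsat = unsat_clauses()
            if len(new_unsat) <= record + deviation:   # RRT acceptance rule
                unsat = new_unsat
                record = min(record, len(unsat))
            else:
                assign[var] = not assign[var]          # reject: undo the flip
        return assign if not unsat else None

Lowering the deviation makes the search greedier; the abstract's point is that a less greedy setting keeps the running time linear in the number of variables for clause-to-variable ratios α up to near the threshold α_c.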
Complete distributional problems, hard languages, and resource-bounded measure
- Theoretical Computer Science
, 2000
"... We say that a distribution µ is reasonable if there exists a constant s ≥ 0 such that µ({x | |x | ≥ n}) = Ω ( 1 ns). We prove the following result, which suggests that all DistNP-complete problems have reasonable distributions. If NP contains a DTIME(2 n)-bi-immune set, then every DistNP-complete ..."
Abstract
-
Cited by 7 (2 self)
- Add to MetaCart
We say that a distribution µ is reasonable if there exists a constant s ≥ 0 such that µ({x : |x| ≥ n}) = Ω(1/n^s). We prove the following result, which suggests that all DistNP-complete problems have reasonable distributions: if NP contains a DTIME(2^n)-bi-immune set, then every DistNP-complete set has a reasonable distribution. It follows from work of Mayordomo [May94] that the consequent holds if the p-measure of NP is not zero. Cai and Selman [CS96] defined a modification and extension of Levin’s notion of average polynomial time to arbitrary time bounds and proved that if L is P-bi-immune, then L is distributionally hard, meaning that for every polynomial-time computable distribution µ, the distributional problem (L, µ) is not polynomial on the µ-average. We prove the following results, which suggest that distributional hardness is closely related to more traditional notions of hardness. 1. If NP contains a distributionally hard set, then NP contains a P-immune set. 2. There exists a language L that is distributionally hard but not P-bi-immune if and only if P contains a set that is immune to all P-printable sets. The following corollaries follow readily: 1. If the p-measure of NP is not zero, then there exists a language L that is distributionally hard but not P-bi-immune. 2. If the p_2-measure of NP is not zero, then there exists a language L in NP that is distributionally hard but not P-bi-immune.
Truth-table closure and Turing closure of average polynomial time have different measures in EXP
- In Proceedings of the Eleventh Annual IEEE Conference on Computational Complexity
, 1996
"... Let PP-comp denote the sets that are solvable in polynomial time on average under every polynomialtime computable distribution on the instances. In this paper we show that the truth-table closure of PP-comp has measure 0 in EXP. Since, as we show, EXP is Turing reducible to PP-comp , the Turing clo ..."
Abstract
-
Cited by 5 (2 self)
- Add to MetaCart
(Show Context)
Let PP-comp denote the class of sets that are solvable in polynomial time on average under every polynomial-time computable distribution on the instances. In this paper we show that the truth-table closure of PP-comp has measure 0 in EXP. Since, as we show, EXP is Turing reducible to PP-comp, the Turing closure has measure 1 in EXP, and thus PP-comp is an example of a subclass of E such that the closure under truth-table reduction and the closure under Turing reduction have different measures in EXP. Furthermore, it is shown that there exists a set A in PP-comp such that for every k, the class of sets L such that A is k-truth-table reducible to L has measure 0 in EXP. A randomized problem (or distributional problem) is a pair consisting of a decision problem and a density function. A randomized decision problem (A, µ) is solvable in average polynomial time ((A, µ) is in AP) if there exists a deterministic Turing machine M such that A = L(M) and Time_M, the running time of M ...
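For background, "solvable in polynomial time on average" is usually meant in Levin's sense; one standard formulation (stated here from general knowledge, not quoted from this paper) is that a machine M with running time t_M is polynomial on the µ-average if, for some ε > 0,

    \sum_{x \in \Sigma^{*}} \mu'(x)\,\frac{t_M(x)^{\varepsilon}}{|x|} \;<\; \infty ,

where µ'(x) denotes the probability weight that the distribution µ places on the instance x (with the empty string excluded or handled by convention).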
Using Depth to Capture Average-Case Complexity
, 2003
"... We give the rst characterization of Turing machines that run in polynomial-time on average. We show that a Turing machine M runs in average polynomial-time if for all inputs x the Turing machine uses time exponential in the computational depth of x, where the computational depth is a measure of ..."
Abstract
-
Cited by 4 (4 self)
- Add to MetaCart
We give the first characterization of Turing machines that run in polynomial time on average. We show that a Turing machine M runs in average polynomial time if for all inputs x the Turing machine uses time exponential in the computational depth of x, where the computational depth is a measure of the amount of "useful" information in x.
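Read very loosely, and using the common definition of time-bounded computational depth (our assumption, not a quotation from the paper), the statement above corresponds to a bound of the form

    \mathrm{time}_M(x) \;\le\; 2^{\,O(\mathrm{depth}^{p}(x) + \log |x|)},
    \qquad \mathrm{depth}^{p}(x) \;=\; K^{p}(x) - K(x),

where K^p is Kolmogorov complexity under a polynomial time bound p and K is plain Kolmogorov complexity; strings of low depth are exactly those on which M is allowed little more than polynomial time.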
Efficient kernels for sentence pair classification
"... In this paper, we propose a novel class of graphs, the tripartite directed acyclic graphs (tDAGs), to model first-order rule feature spaces for sentence pair classification. We introduce a novel algorithm for computing the similarity in first-order rewrite rule feature spaces. Our algorithm is extre ..."
Abstract
-
Cited by 4 (1 self)
- Add to MetaCart
(Show Context)
In this paper, we propose a novel class of graphs, the tripartite directed acyclic graphs (tDAGs), to model first-order rule feature spaces for sentence pair classification. We introduce a novel algorithm for computing the similarity in first-order rewrite rule feature spaces. Our algorithm is extremely efficient and, as it computes the similarity of instances that can be represented in explicit feature spaces, it is a valid kernel function.
Complexity Classes
, 1998
"... This material was written for Chapter 27 of the CRC Handbook of Algorithms and Theory of Computation, edited by Mikhail Atallah. ..."
Abstract
-
Cited by 3 (0 self)
- Add to MetaCart
This material was written for Chapter 27 of the CRC Handbook of Algorithms and Theory of Computation, edited by Mikhail Atallah.