Results 1–10 of 114
Which Problems Have Strongly Exponential Complexity?
 Journal of Computer and System Sciences
, 1998
Abstract

Cited by 249 (9 self)
For several NP-complete problems, there has been a progression of better but still exponential algorithms. In this paper, we address the relative likelihood of subexponential algorithms for these problems. We introduce a generalized reduction which we call Sub-Exponential Reduction Family (SERF) that preserves subexponential complexity. We show that Circuit-SAT is SERF-complete for all NP search problems, and that for any fixed k, k-SAT, k-Colorability, k-Set Cover, Independent Set, Clique, and Vertex Cover are SERF-complete for the class SNP of search problems expressible by second-order existential formulas whose first-order part is universal. In particular, subexponential complexity for any one of the above problems implies the same for all others. We also look at the issue of proving strongly exponential lower bounds for AC^0; that is, bounds of the form 2^{Ω(n)}. This problem is open even for depth-3 circuits. In fact, such a bound for depth-3 circuits with even l...
Derandomizing Polynomial Identity Tests Means Proving Circuit Lower Bounds (Extended Abstract)
, 2003
Abstract

Cited by 187 (4 self)
Since Polynomial Identity Testing is a coRP problem, we obtain the following corollary: if RP = P (or even coRP ⊆ ⋂_{ε>0} NTIME(2^{n^ε}) infinitely often), then NEXP is not computable by polynomial-size arithmetic circuits. Thus, establishing that RP = coRP or BPP = P would require proving super-polynomial lower bounds for Boolean or arithmetic circuits. We also show that any derandomization of RNC would yield new circuit lower bounds for a language in NEXP.
On the Complexity of k-SAT
, 2001
Abstract

Cited by 110 (8 self)
The k-SAT problem is to determine if a given k-CNF has a satisfying assignment. It is a celebrated open question whether it requires exponential time to solve k-SAT for k ≥ 3. Here exponential time means 2^{δn} for some δ > 0. In this paper, assuming that k-SAT requires exponential time for k ≥ 3, we show that the complexity of k-SAT increases as k increases. More precisely, for k ≥ 3, define s_k = inf{δ : there exists a 2^{δn} algorithm for solving k-SAT}. Define ETH (the Exponential-Time Hypothesis) for k-SAT as follows: for k ≥ 3, s_k > 0. In this paper, we show that s_k is increasing infinitely often assuming ETH for k-SAT. Let s_∞ be the limit of s_k. We will in fact show that s_k ≤ (1 − d/k) s_∞ for some constant d > 0. We prove this result by bringing together the ideas of critical clauses and the Sparsification Lemma to reduce the satisfiability of a k-CNF to the satisfiability of a disjunction of 2^{εn} k′-CNFs in fewer variables, for some k′ ≥ k and arbitrarily small ε > 0. We also show that such a disjunction can be computed in time 2^{εn} for arbitrarily small ε > 0.
UnitWalk: A new SAT solver that uses local search guided by unit clause elimination
, 2002
Abstract

Cited by 69 (1 self)
In this paper we present a new randomized algorithm for SAT, i.e., the satisfiability problem for Boolean formulas in conjunctive normal form. Despite its simplicity, this algorithm performs well on many common benchmarks ranging from graph coloring problems to microprocessor verification.
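As a rough illustration of the idea (not the paper's exact procedure, whose period structure, restart policy, and tie-breaking differ), here is a minimal sketch of local search in which unit clause elimination drives the assignment. The representation (clauses as lists of signed integers) and the function names are my own.

```python
import random

def simplify(clauses, lit):
    """Make literal `lit` true: drop satisfied clauses, delete -lit elsewhere."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        out.append([l for l in c if l != -lit])  # may become empty: contradiction
    return out

def unitwalk(clauses, n, max_periods=1000, seed=0):
    """Sketch of UnitWalk-style search: each 'period' walks the variables in a
    random order; whenever a unit clause exists it dictates the next value
    (the 'guidance'), otherwise the current assignment's value is reused."""
    rng = random.Random(seed)
    a = {v: rng.random() < 0.5 for v in range(1, n + 1)}  # literal v / -v
    for _ in range(max_periods):
        f = [list(c) for c in clauses]
        for v in rng.sample(range(1, n + 1), n):
            while True:  # unit clause elimination
                unit = next((c[0] for c in f if len(c) == 1), None)
                if unit is None:
                    break
                a[abs(unit)] = unit > 0
                f = simplify(f, unit)
            if any(len(c) == 0 for c in f):
                break  # contradiction: start a new period with a fresh order
            if not f:
                return a  # every clause satisfied
            if any(abs(l) == v for c in f for l in c):
                f = simplify(f, v if a[v] else -v)
        if not f:
            return a
    return None  # budget exhausted; the formula may still be satisfiable
```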
Measure and conquer: domination – a case study
 PROCEEDINGS OF THE 32ND INTERNATIONAL COLLOQUIUM ON AUTOMATA, LANGUAGES AND PROGRAMMING (ICALP 2005), SPRINGER LNCS
, 2005
Abstract

Cited by 59 (21 self)
Davis-Putnam-style exponential-time backtracking algorithms are the most common algorithms used for finding exact solutions of NP-hard problems. The analysis of such recursive algorithms is based on the bounded search tree technique: a measure of the size of the subproblems is defined; this measure is used to lower bound the progress made by the algorithm at each branching step. For the last 30 years the research on exact algorithms has been mainly focused on the design of more and more sophisticated algorithms. However, the measures used in the analysis of backtracking algorithms are usually very simple. In this paper we stress that a more careful choice of the measure can lead to a significantly better worst-case time analysis. As an example, we consider the minimum dominating set problem. The currently fastest algorithm for this problem has running time O(2^{0.850n}) on n-node graphs. By measuring the progress of the (same) algorithm in a different way, we refine the time bound to O(2^{0.598n}). A good choice of the measure can provide such a (surprisingly big) improvement; this suggests that the running time of many other exponential-time recursive algorithms is largely overestimated because of a "bad" choice of the measure.
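Bounds like O(2^{0.598n}) come from recurrences of the form T(μ) ≤ T(μ − a_1) + … + T(μ − a_r), where μ is the chosen measure and (a_1, …, a_r) is the branching vector: the search tree then has O(c^μ) leaves, where c ≥ 1 is the unique root of Σ_i c^{−a_i} = 1. A small solver for that root (the function name is mine, and bisection is just one way to find it):

```python
def branching_factor(vector, tol=1e-12):
    """Root c >= 1 of sum(c**-a for a in vector) == 1. The recurrence
    T(mu) <= T(mu - a_1) + ... + T(mu - a_r) then yields a search tree
    with O(c**mu) leaves. Bisection works because the sum is strictly
    decreasing in c."""
    lo, hi = 1.0, 2.0
    while sum(hi ** -a for a in vector) > 1:  # grow hi until the root is bracketed
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(mid ** -a for a in vector) > 1:
            lo = mid
        else:
            hi = mid
    return hi
```

For the vector (1, 1), meaning both branches shrink the measure by one unit, the factor is 2; a cleverer measure that certifies larger progress per branch changes the vector and shrinks the factor, which is exactly the effect the abstract describes.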
A measure & conquer approach for the analysis of exact algorithms
, 2007
Abstract

Cited by 51 (11 self)
For more than 40 years Branch & Reduce exponential-time backtracking algorithms have been among the most common tools used for finding exact solutions of NP-hard problems. Despite that, the way to analyze such recursive algorithms is still far from producing tight worst-case running time bounds. Motivated by this, we use an approach that we call "Measure & Conquer" as an attempt to step beyond such limitations. The approach is based on the careful design of a non-standard measure of the subproblem size; this measure is then used to lower bound the progress made by the algorithm at each branching step. The idea is that a smarter measure may capture behaviors of the algorithm that a standard measure might not be able to exploit, and hence lead to a significantly better worst-case time analysis. In order to show the potential of Measure & Conquer, we consider two well-studied NP-hard problems: minimum dominating set and maximum independent set. For the first problem, we consider the current best algorithm, and prove (thanks to a better measure) a much tighter running time bound for it. For the second problem, we describe a new, simple algorithm, and show that its running time is competitive with the current best time bounds, achieved with far more complicated algorithms (and standard analysis). Our examples ...
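For flavor, a "simple" branching algorithm for maximum independent set typically looks like the sketch below (this is my own illustrative version, not the paper's algorithm; measured analyses of exactly such branching schemes are what Measure & Conquer tightens):

```python
def mis_size(adj):
    """Size of a maximum independent set; adj maps vertex -> set of neighbours.
    Take isolated vertices for free; otherwise branch on a maximum-degree
    vertex v: either v is excluded, or v is included and N[v] is deleted."""
    if not adj:
        return 0
    v = max(adj, key=lambda u: len(adj[u]))
    def without(vs):  # delete a set of vertices from the graph
        return {u: adj[u] - vs for u in adj if u not in vs}
    if not adj[v]:  # max degree is 0: v is isolated, always take it
        return 1 + mis_size(without({v}))
    return max(mis_size(without({v})),               # v not in the set
               1 + mis_size(without({v} | adj[v])))  # v in the set
```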
Improved upper bounds for 3-SAT
 In 15th ACM-SIAM Symposium on Discrete Algorithms (SODA 2004). ACM and SIAM
Abstract

Cited by 48 (3 self)
The CNF Satisfiability problem is to determine, given a CNF formula F, whether or not there exists a satisfying assignment for F. If each clause of F contains at most k literals, then F is called a k-CNF formula and the problem is called k-SAT. For small k, especially for k = 3, there exist many algorithms that run significantly faster than the trivial 2^n bound. The following list summarizes those algorithms, where a constant c means that the algorithm runs in time O(c^n). Roughly speaking, most algorithms are based on Davis-Putnam. [Sch99] is the first local search algorithm which gives a guaranteed performance for general instances, and [DGH+02], [HSSW02], [BS03] and [Rol03] follow up on Schöning's approach.

  3-SAT  4-SAT  5-SAT  6-SAT  type  ref.
  1.782  1.835  1.867  1.888  det.  [PPZ97]
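The [Sch99] algorithm referenced above is simple enough to sketch: repeat a bounded random walk from a fresh uniformly random assignment, each step flipping a variable chosen uniformly from some unsatisfied clause. The parameter choices below (number of tries, 3n walk steps) are illustrative rather than tuned; for 3-SAT the expected number of restarts is roughly (4/3)^n.

```python
import random

def schoening(clauses, n, tries=500, seed=1):
    """Schöning-style random walk for k-SAT (clauses are lists of signed ints,
    literal v / -v). Start from a uniform random assignment; for 3n steps,
    pick an unsatisfied clause and flip a uniformly random variable in it."""
    rng = random.Random(seed)
    for _ in range(tries):
        a = {v: rng.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(3 * n + 1):
            unsat = [c for c in clauses
                     if not any((l > 0) == a[abs(l)] for l in c)]
            if not unsat:
                return a  # satisfying assignment found
            lit = rng.choice(rng.choice(unsat))
            a[abs(lit)] = not a[abs(lit)]
    return None  # give up; the formula may still be satisfiable
```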
Improved Algorithms for 3-Coloring, 3-Edge-Coloring, and Constraint Satisfaction
, 2001
Abstract

Cited by 48 (3 self)
We consider worst-case time bounds for NP-complete problems including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a constraint satisfaction (CSP) formulation of these problems; 3-SAT is equivalent to (2,3)-CSP while the other problems above are special cases of (3,2)-CSP. We give a fast algorithm for (3,2)-CSP and use it to improve the time bounds for solving the other problems listed above. Our techniques involve a mixture of Davis-Putnam-style backtracking with more sophisticated matching and network flow based ideas.
Upper Bounds for Vertex Cover Further Improved
, 1998
Abstract

Cited by 45 (15 self)
A problem instance of Vertex Cover consists of an undirected graph G = (V, E) and a positive integer k; the question is whether there exists a subset C ⊆ V of vertices with |C| ≤ k such that each edge in E has at least one of its endpoints in C. We improve two recent worst-case upper bounds for Vertex Cover. First, Balasubramanian et al. showed that Vertex Cover can be solved in time O(kn + 1.32472^k k²), where n is the number of vertices in G. Afterwards, Downey et al. improved this to O(kn + 1.31951^k k²). Bringing the exponential base significantly below 1.3, we present the new upper bound O(kn + 1.29175^k k²).
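The structure shared by these algorithms is a bounded search tree in k. A minimal version, branching on an arbitrary uncovered edge, gives only O(2^k · m) — the 1.29175^k bound above comes from far more careful case analysis — but shows the shape:

```python
def vertex_cover(edges, k):
    """Decide whether the graph given as a list of edges (u, v) has a vertex
    cover of size <= k. Any remaining edge forces one of its endpoints into
    the cover, so branch on both choices: a search tree of depth <= k."""
    if not edges:
        return True   # nothing left to cover
    if k == 0:
        return False  # edges remain but no budget left
    u, v = edges[0]
    without_u = [(a, b) for a, b in edges if u not in (a, b)]  # put u in C
    without_v = [(a, b) for a, b in edges if v not in (a, b)]  # put v in C
    return vertex_cover(without_u, k - 1) or vertex_cover(without_v, k - 1)
```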
A new algorithm for optimal 2-constraint satisfaction and its implications
 Theoretical Computer Science
, 2005
Abstract

Cited by 40 (6 self)
We present a novel method for exactly solving (in fact, counting solutions to) general constraint satisfaction optimization with at most two variables per constraint (e.g. MAX-2-CSP and MIN-2-CSP), which gives the first exponential improvement over the trivial algorithm. More precisely, the runtime bound is a constant-factor improvement in the base of the exponent: the algorithm can count the number of optima in MAX-2-SAT and MAX-CUT instances in O(m³ · 2^{ωn/3}) time, where ω < 2.376 is the matrix product exponent over a ring. When constraints have arbitrary weights, there is a (1 + ε)-approximation with roughly the same runtime, modulo polynomial factors. Our construction shows that improvement in the runtime exponent of either k-clique solution (even when k = 3) or matrix multiplication over GF(2) would improve the runtime exponent for solving 2-CSP optimization. Our approach also yields connections between the complexity of some (polynomial-time) high-dimensional search problems and some NP-hard problems. For example, if there are sufficiently faster algorithms for computing the diameter of n points in ℓ₁, then there is a (2 − ε)^n algorithm for MAX-LIN. These results may be construed as either lower bounds on the high-dimensional problems, or hope that better algorithms exist for the corresponding hard problems.