Results 1–10 of 28
Backdoors to typical case complexity
, 2003
Abstract

Cited by 122 (14 self)
There has been significant recent progress in reasoning and constraint processing methods. In areas such as planning and finite model checking, current solution techniques can handle combinatorial problems with up to a million variables and five million constraints. The good scaling behavior of these methods appears to defy what one would expect based on a worst-case complexity analysis. In order to bridge this gap between theory and practice, we propose a new framework for studying the complexity of these techniques on practical problem instances. In particular, our approach incorporates general structural properties observed in practical problem instances into the formal complexity
Satisfiability Solvers
, 2008
Abstract

Cited by 48 (0 self)
The past few years have seen enormous progress in the performance of Boolean satisfiability (SAT) solvers. Despite the worst-case exponential run time of all known algorithms, satisfiability solvers are increasingly leaving their mark as a general-purpose tool in areas as diverse as software and hardware verification [29–31, 228], automatic test pattern generation [138, 221], planning [129, 197], scheduling [103], and even challenging problems from algebra [238]. Annual SAT competitions have led to the development of dozens of clever implementations of such solvers [e.g. 13,
Restart Policies with Dependence among Runs: A Dynamic Programming Approach
, 2002
Abstract

Cited by 32 (5 self)
The time required for a backtracking search procedure to solve a problem can be reduced by employing randomized restart procedures. To date,
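For context, the best-known fixed restart schedule in the independent-runs setting is the Luby universal sequence (1, 1, 2, 1, 1, 2, 4, …), which is within a logarithmic factor of the optimal fixed cutoff. A minimal sketch in Python — our own illustrative code, not the dependent-runs policy developed in this paper, and the name `luby` is our choice:

```python
def luby(i):
    """Return the i-th term (1-indexed) of the Luby restart sequence:
    1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8, ...
    A solver would cap run i at luby(i) * base_cutoff steps before restarting."""
    k = 1
    while (1 << k) - 1 < i:              # find smallest k with 2^k - 1 >= i
        k += 1
    if (1 << k) - 1 == i:                # i ends a block: the term is 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)  # otherwise recurse into the repeated prefix
```

Under a heavy-tailed run-time distribution, this schedule cuts off unlucky runs cheaply while its occasional long cutoffs still let hard-but-solvable runs finish.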
Tradeoffs in the complexity of backdoor detection
 In Principles and Practice of Constraint Programming – CP 2007
, 2007
Abstract

Cited by 23 (5 self)
Abstract. There has been considerable interest in the identification of structural properties of combinatorial problems that lead to efficient algorithms for solving them. Some of these properties are “easily” identifiable, while others are of interest because they capture key aspects of state-of-the-art constraint solvers. In particular, it was recently shown that the problem of identifying a strong Horn- or 2-CNF-backdoor can be solved by exploiting equivalence with deletion backdoors, and is NP-complete. We prove that strong backdoor identification becomes harder than NP (unless NP=coNP) as soon as the inconsequential-sounding feature of empty clause detection (present in all modern SAT solvers) is added. More interestingly, in practice such a feature as well as polynomial-time constraint propagation mechanisms often lead to much smaller backdoor sets. In fact, despite the worst-case complexity results for strong backdoor detection, we show that SatzRand is remarkably good at finding small strong backdoors on a range of experimental domains. Our results suggest that structural notions explored for designing efficient algorithms for combinatorial problems should capture both statically and dynamically identifiable properties.
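To make the notion concrete: a set B of variables is a *strong* backdoor with respect to a subsolver (here, unit propagation with empty clause detection) if every assignment to B leaves a residual formula the subsolver decides outright. A brute-force checker, sketched under our own conventions (signed-integer DIMACS-style literals; the names `unit_propagate` and `is_strong_backdoor` are ours, not from the paper):

```python
from itertools import product

def unit_propagate(clauses, assignment):
    """Unit propagation with empty-clause detection.
    Returns (assignment, verdict) with verdict in {"SAT", "UNSAT", "UNKNOWN"}."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        all_satisfied = True
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                        # clause already satisfied
            unassigned = [lit for lit in clause if -lit not in assignment]
            if not unassigned:
                return assignment, "UNSAT"      # empty clause detected
            all_satisfied = False
            if len(unassigned) == 1:            # unit clause: forced literal
                assignment.add(unassigned[0])
                changed = True
        if all_satisfied:
            return assignment, "SAT"
    return assignment, "UNKNOWN"                # propagation alone cannot decide

def is_strong_backdoor(clauses, candidate):
    """Check by brute force whether `candidate` (a list of variables) is a
    strong backdoor w.r.t. unit propagation: 2^|candidate| subsolver calls."""
    for signs in product((1, -1), repeat=len(candidate)):
        seed = {s * v for s, v in zip(signs, candidate)}
        _, verdict = unit_propagate(clauses, seed)
        if verdict == "UNKNOWN":
            return False
    return True
```

The exponential enumeration is why *detecting* a small backdoor, rather than merely verifying one, is the interesting complexity question.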
Backtracking Search Algorithms
, 2006
Abstract

Cited by 19 (2 self)
There are three main algorithmic techniques for solving constraint satisfaction problems: backtracking search, local search, and dynamic programming. In this chapter, I survey backtracking search algorithms. Algorithms based on dynamic programming [15]—sometimes referred to in the literature as variable elimination, synthesis, or inference algorithms—are the topic of Chapter 7. Local or stochastic search algorithms are the topic of Chapter 5. An algorithm for solving a constraint satisfaction problem (CSP) can be either complete or incomplete. Complete, or systematic, algorithms come with a guarantee that a solution will be found if one exists; they can also be used to show that a CSP does not have a solution and to find a provably optimal solution. Backtracking search algorithms and dynamic programming algorithms are, in general, examples of complete algorithms. Incomplete, or non-systematic, algorithms cannot be used to show that a CSP does not have a solution or to find a provably optimal solution. However, such algorithms are often effective at finding a solution if one exists and can be used to find an approximation to an optimal solution. Local or stochastic search algorithms are examples of incomplete algorithms. Of the two
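The completeness property described above can be seen in a minimal backtracking solver: because the search is systematic, exhausting every value proves unsatisfiability. The sketch below is our own illustrative code, not from the chapter; it treats constraints as predicates over partial assignments:

```python
def backtrack(variables, domains, constraints, assignment=None):
    """Systematic backtracking search for a CSP.
    Returns a complete assignment, or None, which is a *proof* of unsatisfiability."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):   # prune violated branches
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]                                   # undo and try next value
    return None

def not_equal(x, y):
    """Constraint factory: x != y, trivially true while either is unassigned."""
    return lambda a: x not in a or y not in a or a[x] != a[y]
```

Coloring a triangle graph illustrates both outcomes: with two colors the solver returns None (a refutation), with three it returns a valid coloring.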
The State of SAT
, 2005
Abstract

Cited by 19 (2 self)
The papers in this special issue originated at SAT 2001, the Fourth International Symposium on the Theory and Applications of Satisfiability Testing. This foreword reviews the current state of satisfiability testing and places the papers in this issue in context. Key words: Boolean satisfiability, complexity, challenge problems.
Ten challenges redux: Recent progress in propositional reasoning and search
 In Proceedings of CP ’03
, 2003
Abstract

Cited by 18 (1 self)
Abstract. In 1997 we presented ten challenges for research on satisfiability testing [1]. In this paper we review recent progress towards each of these challenges, including our own work on the power of clause learning and randomized restart policies.
On the Connections Between Backdoors, Restarts, and Heavy-Tailedness in Combinatorial Search
, 2003
Abstract

Cited by 15 (4 self)
Recent state-of-the-art SAT solvers can handle handcrafted instances with hundreds of thousands of variables and several million clauses. Only a few years ago, the ability to handle such instances appeared completely out of reach. The most effective complete solvers are generally based on Davis-Putnam-Logemann-Loveland (DPLL) style search procedures augmented with a number of special techniques, such as clause learning, non-chronological backtracking, look-ahead, fast unit propagation, randomization, and restart strategies. The progress in this area has largely been driven by experimental work on diverse sets of benchmark problems, including regular SAT competitions. One key open area of research is to obtain a better understanding as to why these methods work so well. In this paper, we hope to advance our understanding of the effectiveness of current techniques and analyze what features of practical instances make them so amenable to these solution methods. Of the many enhancements of DPLL, we will focus our attention on the interplay between certain special features of problem instances, poly-time propagation methods, and restart techniques. This analysis is clearly only part of the full story, since other enhancements, such as clause learning and non-chronological backtracking, provide additional power to these solvers.
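The DPLL core the abstract refers to is small: interleave unit propagation with branching on a literal and backtracking on conflict. A bare-bones sketch of our own (clause learning, look-ahead, and restarts omitted; signed-integer literals are a convention we chose for the example):

```python
def propagate(clauses, assignment):
    """Unit propagation: extend `assignment` (a set of literals) with forced
    literals; return None on an empty-clause conflict."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                    # clause already satisfied
            open_lits = [lit for lit in clause if -lit not in assignment]
            if not open_lits:
                return None                 # conflict: clause falsified
            if len(open_lits) == 1:
                assignment.add(open_lits[0])
                changed = True
    return assignment

def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if the CNF is unsatisfiable."""
    assignment = propagate(clauses, assignment)
    if assignment is None:
        return None
    free = sorted(v for v in {abs(l) for c in clauses for l in c}
                  if v not in assignment and -v not in assignment)
    if not free:
        return assignment                   # every variable decided: model found
    var = free[0]                           # naive branching heuristic
    for decision in (var, -var):
        model = dpll(clauses, assignment | {decision})
        if model is not None:
            return model
    return None                             # both branches failed: backtrack
```

The solvers surveyed here add clause learning, non-chronological backtracking, and restarts on top of exactly this skeleton.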
Boosting Stochastic Problem Solvers through Online Self-Analysis of Performance
, 2003
Abstract

Cited by 9 (3 self)
In many combinatorial domains, simple stochastic algorithms often exhibit superior performance when compared to highly customized approaches. Many of these simple algorithms outperform more sophisticated approaches on difficult benchmark problems, and often lead to better solutions as the algorithms are taken out of the world of benchmarks and into the real world. Simple stochastic algorithms are often robust, scalable problem solvers.
How much backtracking does it take to color random graphs? Rigorous results on heavy tails
 In Proceedings of the 10th International Conference on Principles and Practice of Constraint Programming (CP 2004)
, 2004
Abstract

Cited by 8 (0 self)
Abstract. For many backtracking search algorithms, the running time has been found experimentally to have a heavy-tailed distribution, in which it is often much greater than its median. We analyze two natural variants of the Davis-Putnam-Logemann-Loveland (DPLL) algorithm for Graph 3-Coloring on sparse random graphs of average degree c. Let Pc(b) be the probability that DPLL backtracks b times. First, we calculate analytically the probability Pc(0) that these algorithms find a 3-coloring with no backtracking at all, and show that it goes to zero faster than any analytic function as c → c∗ = 3.847... Then we show that even in the “easy” regime 1 < c < c∗ where Pc(0) > 0 — including just above the degree c = 1 where the giant component first appears — the expected number of backtracks is exponentially large with positive probability. To our knowledge this is the first rigorous proof that the running time of a natural backtracking algorithm has a heavy tail for graph coloring.
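The quantity Pc(b) can be explored empirically with a toy version of the experiment: a backtracking 3-coloring procedure that counts how often it retracts an assignment, run over samples of G(n, p) with p = c/(n−1). The code below is our own illustrative sketch, not the authors' DPLL variants:

```python
import random

def color_count_backtracks(adj, colors=3):
    """Backtracking graph coloring on adjacency lists.
    Returns (coloring dict or None, number of backtracks)."""
    n = len(adj)
    assignment, backtracks = {}, 0

    def extend(v):
        nonlocal backtracks
        if v == n:
            return True
        for c in range(colors):
            if all(assignment.get(u) != c for u in adj[v]):
                assignment[v] = c
                if extend(v + 1):
                    return True
                del assignment[v]
                backtracks += 1          # retracting an assignment = one backtrack
        return False

    return (dict(assignment), backtracks) if extend(0) else (None, backtracks)

def sparse_random_graph(n, c, rng):
    """Sample G(n, p) with expected average degree c (p = c/(n-1))."""
    adj = [set() for _ in range(n)]
    p = c / (n - 1)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj
```

Plotting the empirical distribution of the backtrack count over many samples as c approaches c∗ is what reveals the heavy tail the abstract describes.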