Results 1–10 of 15
Satisfiability Solvers
, 2008
Cited by 48 (0 self)
The past few years have seen enormous progress in the performance of Boolean satisfiability (SAT) solvers. Despite the worst-case exponential run time of all known algorithms, satisfiability solvers are increasingly leaving their mark as a general-purpose tool in areas as diverse as software and hardware verification [29–31, 228], automatic test pattern generation [138, 221], planning [129, 197], scheduling [103], and even challenging problems from algebra [238]. Annual SAT competitions have led to the development of dozens of clever implementations of such solvers [e.g. 13, …]
Detecting backdoor sets with respect to Horn and binary clauses
 In SAT’04
, 2004
Cited by 41 (14 self)
We study the parameterized complexity of detecting backdoor sets for instances of the propositional satisfiability problem (SAT) with respect to the polynomially solvable classes Horn and 2CNF. A backdoor set is a subset of variables: for a strong backdoor set, the simplified formulas resulting from every setting of these variables are in a polynomially solvable class, and for a weak backdoor set, there exists one setting which puts the satisfiable simplified formula in the class. We show that with respect to both the Horn and 2CNF classes, the detection of a strong backdoor set is fixed-parameter tractable (the existence of a set of size k for a formula of length N can be decided in time f(k)·N^O(1)), but the detection of a weak backdoor set is W[2]-hard, implying that this problem is not fixed-parameter tractable.
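To make the strong-backdoor definition concrete, here is a minimal Python sketch (illustrative, not from the paper): a candidate set B is a strong backdoor with respect to Horn if every one of the 2^|B| settings of B leaves a Horn formula, i.e. every remaining clause has at most one positive literal.

```python
from itertools import product

def simplify(clauses, assignment):
    """Apply a partial assignment {var: bool}: drop satisfied clauses,
    delete falsified literals from the remaining ones."""
    result = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            var = abs(lit)
            if var in assignment:
                if (lit > 0) == assignment[var]:
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if not satisfied:
            result.append(kept)
    return result

def is_horn(clauses):
    """Horn: every clause has at most one positive literal."""
    return all(sum(1 for lit in c if lit > 0) <= 1 for c in clauses)

def is_strong_horn_backdoor(clauses, backdoor):
    """Strong backdoor w.r.t. Horn: EVERY setting of the backdoor
    variables must leave a Horn formula (brute force over 2^|B|)."""
    backdoor = list(backdoor)
    return all(
        is_horn(simplify(clauses, dict(zip(backdoor, bits))))
        for bits in product([False, True], repeat=len(backdoor))
    )
```

Note this only *verifies* a candidate set; the paper's contribution is that *finding* a small strong backdoor is fixed-parameter tractable, while the analogous weak-backdoor question is W[2]-hard.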
The backdoor key: A path to understanding problem hardness
 In AAAI
, 2004
Cited by 21 (0 self)
We introduce our work on the backdoor key, a concept that shows promise for characterizing problem hardness in backtracking search algorithms. The general notion of backdoors was recently introduced to explain the source of heavy-tailed behaviors in backtracking algorithms (Williams, Gomes, & Selman 2003a; 2003b). We describe empirical studies showing that the key fraction, i.e., the ratio of the key size to the corresponding backdoor size, is a good predictor of problem hardness, both for ensembles and for individual instances within an ensemble, in structured domains with a large key fraction.
Solving #SAT Using Vertex Covers
, 2006
Cited by 21 (10 self)
We propose an exact algorithm for counting the models of propositional formulas in conjunctive normal form (CNF). Our algorithm is based on the detection of strong backdoor sets of bounded size; each instantiation of the variables of a strong backdoor set puts the given formula into a class of formulas for which models can be counted in polynomial time. For the backdoor set detection we utilize an efficient vertex cover algorithm applied to a certain “obstruction graph” that we associate with the given formula. This approach gives rise to a new hardness index for formulas, the clustering-width. Our algorithm runs in uniform polynomial time on formulas with bounded clustering-width. It is known that the number of models of formulas with bounded clique-width, bounded treewidth, or bounded branchwidth can be computed in polynomial time; these graph parameters are applied to formulas via certain (hyper)graphs associated with them. We show that clustering-width and the other parameters mentioned are incomparable: there are formulas with bounded clustering-width and arbitrarily large clique-width, treewidth, and branchwidth. Conversely, there are formulas with arbitrarily large clustering-width and bounded clique-width, treewidth, and branchwidth.
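The obstruction graph is specific to the paper's construction, but the vertex cover subroutine it relies on can be sketched generically. The following illustrative Python shows the textbook branching argument that decides whether a vertex cover of size ≤ k exists in O(2^k · m) time, the step that makes bounded-size detection fixed-parameter tractable:

```python
def has_vertex_cover(edges, k):
    """Decide whether the graph (a list of edge tuples) has a vertex
    cover of size <= k. Branching rule: one endpoint of any uncovered
    edge must join the cover, giving a depth-k binary search tree."""
    if not edges:
        return True     # all edges covered
    if k == 0:
        return False    # an edge remains but the budget is spent
    u, v = edges[0]
    without_u = [e for e in edges if u not in e]  # take u into the cover
    without_v = [e for e in edges if v not in e]  # or take v instead
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)
```

For example, a triangle has a vertex cover of size 2 but none of size 1, so `has_vertex_cover([(1, 2), (2, 3), (1, 3)], 1)` is `False`.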
Backdoor sets of quantified Boolean formulas
 In Proc. 10th Int. Conf. on Theory and Applications of Satisfiability Testing (SAT’07), volume 4501 of LNCS
, 2007
Cited by 19 (5 self)
We generalize the notion of backdoor sets from propositional formulas to quantified Boolean formulas (QBF). This allows us to obtain hierarchies of tractable classes of quantified Boolean formulas with the classes of quantified Horn and quantified 2CNF formulas, respectively, at their first level, thus gradually generalizing these two important tractable classes. In contrast to known tractable classes based on bounded treewidth, the number of quantifier alternations of our classes is unbounded. As a by-product of our considerations, we develop a theory of variable dependency which is of independent interest.
Matched Formulas and Backdoor Sets
, 2008
Cited by 9 (2 self)
We demonstrate hardness results for the detection of small backdoor sets with respect to base classes M_r of CNF formulas with maximum deficiency ≤ r (M_0 is the class of matched formulas). One of the results also applies to a wide range of base classes with added “empty clause detection” as recently considered by Dilkina, Gomes, and Sabharwal. We obtain the hardness results in the framework of parameterized complexity, taking the upper bound on the size of the smallest backdoor set as the parameter. Furthermore, we compare the generality of two parameters: maximum deficiency and the size of a smallest M_r-backdoor set.
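Maximum deficiency itself is easy to compute: it equals the number of clauses minus the size of a maximum matching in the clause-variable incidence graph, so a matched formula (every clause matched to a private variable) has value 0. A hedged sketch, with illustrative names, using Kuhn's augmenting-path algorithm:

```python
def max_matching(clause_vars):
    """Kuhn's augmenting-path maximum matching on the clause-variable
    incidence graph; clause_vars[i] lists the variables of clause i."""
    match = {}  # variable -> index of the clause matched to it

    def augment(i, seen):
        for v in clause_vars[i]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = i
                    return True
        return False

    return sum(augment(i, set()) for i in range(len(clause_vars)))

def maximum_deficiency(clause_vars):
    """Maximum deficiency = #clauses - size of a maximum matching;
    0 exactly when the formula is matched."""
    return len(clause_vars) - max_matching(clause_vars)
```

For instance, two clauses over the single variable 1 cannot both be matched, so their maximum deficiency is 1; the paper's hardness results concern detecting backdoors *into* these classes, not computing the parameter.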
Randomness and Structure
Cited by 8 (2 self)
This chapter covers research in constraint programming (CP) and related areas involving random problems. Such research has played a significant role in the development of more efficient and effective algorithms, as well as in understanding the source of hardness in solving combinatorially challenging problems. Random problems have proved useful in a number of different ways. Firstly, they provide a relatively “unbiased” sample for benchmarking algorithms. In the early days of CP, many algorithms were compared using only a limited sample of problem instances; in some cases, this may have led to premature conclusions. Random problems, by comparison, permit algorithms to be tested on statistically significant samples of hard problems. However, as we outline in the rest of this chapter, there remain pitfalls awaiting the unwary in their use. For example, random problems may not contain structures found in many real-world problems, and these structures can make problems much easier or much harder to solve. As a second example, the process of generating random problems may itself be “flawed”, giving problem instances which are not, at least asymptotically, combinatorially hard.

Random problems have also provided insight into problem hardness. For example, the influential paper by Cheeseman, Kanefsky and Taylor [12] highlighted the computational difficulty of problems which are on the “knife-edge” between satisfiability and unsatisfiability [84]. There is even hope within certain quarters that random problems may be one of the links in resolving the P=NP question. Finally, insight into problem hardness provided by random problems has helped inform the design of better algorithms and heuristics. For example, the design of a number of branching heuristics for the Davis-Logemann-Loveland (DPLL) satisfiability procedure has been heavily influenced by the hardness of random problems.
As a second example, the rapid randomization and restart (RRR) strategy [45, 44] was motivated by the discovery of heavy-tailed runtime distributions of backtracking-style search procedures on random quasigroup completion problems.
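The restart idea can be sketched as a thin wrapper: run a randomized search for at most a fixed number of steps, and on failure restart with a fresh seed, so no single unlucky randomization can dominate the total runtime. The sketch below is illustrative (not from the cited papers) and wraps a toy random-probing SAT attempt:

```python
import random

def solve_with_restarts(attempt, cutoff, max_restarts=100):
    """Rapid randomized restarts: run a randomized search procedure for
    at most `cutoff` steps; on failure, restart with a fresh seed."""
    for r in range(max_restarts):
        rng = random.Random(r)      # fresh randomization each restart
        result = attempt(rng, cutoff)
        if result is not None:
            return result, r        # solution plus restarts used
    return None, max_restarts

def probe(clauses, n_vars):
    """Toy randomized 'search': guess full assignments until the CNF
    (a list of integer-literal clauses) is satisfied or the cutoff hits."""
    def attempt(rng, cutoff):
        for _ in range(cutoff):
            a = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
            if all(any((lit > 0) == a[abs(lit)] for lit in c) for c in clauses):
                return a
        return None
    return attempt
```

Real RRR implementations pair this with a carefully chosen (often growing or Luby-style) cutoff schedule; the fixed cutoff here only illustrates the mechanism.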
Randomization in Constraint Programming for Airline Planning
 In Principles and Practice of Constraint Programming (CP 2006)
, 2006
Cited by 5 (0 self)
We extend the common depth-first backtrack search for constraint satisfaction problems with randomized variable and value selection. The resulting methods are applied to real-world instances of the tail assignment problem, a certain kind of airline planning problem. We analyze the performance impact of these extensions and, in order to exploit the improvements, add restarts to the search procedure. Finally, computational results of the complete approach are discussed.
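A minimal sketch of the randomized extension (illustrative Python, not the authors' implementation): plain depth-first backtracking in which the next variable and the order of its values are chosen at random, so repeated runs explore different parts of the search tree:

```python
import random

def backtrack(assignment, variables, domains, conflict, rng):
    """Depth-first backtracking with randomized variable and value
    selection. `conflict(v, a, w, b)` says whether v=a clashes with w=b."""
    if len(assignment) == len(variables):
        return dict(assignment)
    # Randomized variable selection among the unassigned variables.
    var = rng.choice([v for v in variables if v not in assignment])
    # Randomized value ordering.
    values = list(domains[var])
    rng.shuffle(values)
    for val in values:
        if all(not conflict(var, val, w, b) for w, b in assignment.items()):
            assignment[var] = val
            solution = backtrack(assignment, variables, domains, conflict, rng)
            if solution is not None:
                return solution
            del assignment[var]
    return None
```

With different seeds the runtime varies from run to run; that variance is exactly what makes adding restarts (as the paper does) worthwhile.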
People efficiently explore the solution space of the computationally intractable traveling salesman problem to find near-optimal tours
 PLoS ONE 5, e11685 (doi:10.1371/journal.pone.0011685)
, 2010
Cited by 3 (0 self)
Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal. Computational complexity advances an explanation of this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be suboptimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials, and that solutions are significantly different from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current best solutions in such a way that edges belonging to the optimal solution (“good” edges) were significantly more likely to stay than other edges (“bad” edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants’ ability to tell good from bad edges, suggesting that after too many trials the participants “ran out of ideas.” In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback.
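The “keep good edges, replace bad edges” behaviour resembles local-search moves such as 2-opt, which reverses a tour segment whenever doing so shortens the tour (equivalently, removes a crossing). A small illustrative sketch, not the study's model of human behaviour:

```python
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse a segment whenever that shortens the tour.
    Edges already on short routes tend to survive; crossing ('bad')
    edges get replaced, mirroring the edge-keeping behaviour above."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = candidate, True
    return tour
```

Starting from the self-crossing tour `[0, 2, 1, 3]` over the unit square, one segment reversal recovers the optimal perimeter tour of length 4.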
Limits of preprocessing
 In Proceedings of the Twenty-Fifth Conference on Artificial Intelligence, AAAI 2011
, 2011
Cited by 3 (2 self)
We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning. We show that, subject to a complexity-theoretic assumption, none of the considered problems can be reduced by polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, such as induced width or backdoor size. Our results provide a firm theoretical boundary for the performance of polynomial-time preprocessing algorithms for the considered problems.