Results 1–10 of 16
Heavy-Tailed Phenomena in Satisfiability and Constraint Satisfaction Problems
J. of Autom. Reasoning, 2000
"... Abstract. We study the runtime distributions of backtrack procedures for propositional satisfiability and constraint satisfaction. Such procedures often exhibit a large variability in performance. Our study reveals some intriguing properties of such distributions: They are often characterized by ver ..."
Abstract

Cited by 165 (27 self)
 Add to MetaCart
(Show Context)
Abstract. We study the runtime distributions of backtrack procedures for propositional satisfiability and constraint satisfaction. Such procedures often exhibit a large variability in performance. Our study reveals some intriguing properties of such distributions: they are often characterized by very long tails or “heavy tails”. We will show that these distributions are best characterized by a general class of distributions that can have infinite moments (i.e., an infinite mean, variance, etc.). Such nonstandard distributions have recently been observed in areas as diverse as economics, statistical physics, and geophysics. They are closely related to fractal phenomena, whose study was introduced by Mandelbrot. We also show how random restarts can effectively eliminate heavy-tailed behavior. Furthermore, for harder problem instances, we observe long tails on the left-hand side of the distribution, which is indicative of a non-negligible fraction of relatively short, successful runs. A rapid restart strategy eliminates heavy-tailed behavior and takes advantage of short runs, significantly reducing expected solution time. We demonstrate speedups of up to two orders of magnitude on SAT and CSP encodings of hard problems in planning, scheduling, and circuit synthesis. Key words: satisfiability, constraint satisfaction, heavy tails, backtracking
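The restart effect described in this abstract can be illustrated with a minimal simulation (not from the paper; the Pareto distribution, its tail index, and the cutoff of 5 are illustrative assumptions standing in for a heavy-tailed backtrack-search runtime):

```python
import random

random.seed(0)

def pareto_runtime(alpha=0.9, xm=1.0):
    # Pareto with tail index alpha < 1 has an infinite mean: a stand-in
    # for the heavy-tailed runtime distributions the paper observes.
    return xm / (random.random() ** (1.0 / alpha))

def time_to_solve_with_restarts(cutoff, alpha=0.9):
    # Restart policy: abort any run that exceeds `cutoff` and try again;
    # total cost is the wasted cutoffs plus the final successful run.
    total = 0.0
    while True:
        t = pareto_runtime(alpha)
        if t <= cutoff:
            return total + t
        total += cutoff

trials = 20000
no_restart = sum(pareto_runtime() for _ in range(trials)) / trials
with_restart = sum(time_to_solve_with_restarts(5.0) for _ in range(trials)) / trials
print(no_restart, with_restart)
```

Because the uncapped runtime has an infinite mean while the restarted process has a finite one, the empirical average with restarts comes out far smaller, mirroring the order-of-magnitude speedups the abstract reports.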
Planning as Satisfiability: Parallel Plans and Algorithms for Plan Search
Artif. Intell., 2005
"... ..."
A framework for structured quantum search
Physica D, 1998
"... A quantum algorithm for general combinatorial search that uses the underlying structure of the search space to increase the probability of finding a solution is presented. This algorithm shows how coherent quantum systems can be matched to the underlying structure of abstract search spaces. The algo ..."
Abstract

Cited by 15 (3 self)
 Add to MetaCart
(Show Context)
A quantum algorithm for general combinatorial search that uses the underlying structure of the search space to increase the probability of finding a solution is presented. This algorithm shows how coherent quantum systems can be matched to the underlying structure of abstract search spaces. The algorithm is evaluated empirically with a variety of search problems, and shown to be particularly effective for searches with many constraints. Furthermore, the algorithm provides a simple framework for utilizing search heuristics. It also exhibits the same phase transition in search difficulty as found for sophisticated classical search methods, indicating that it is effectively using the problem structure.
Multiagent Cooperative Search for Portfolio Selection
2001
"... this paper because we assume throughout that the total initial wealth of all systems of agents is $1 ..."
Abstract

Cited by 11 (1 self)
 Add to MetaCart
(Show Context)
… this paper because we assume throughout that the total initial wealth of all systems of agents is $1 …
Local search methods for quantum computers
quant-ph/9802043
"... Local search algorithms use the neighborhood relations among search states and often perform well for a variety of NPhard combinatorial search problems. This paper shows how quantum computers can also use these neighborhood relations. An example of such a local quantum search is evaluated empirical ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
(Show Context)
Local search algorithms use the neighborhood relations among search states and often perform well for a variety of NP-hard combinatorial search problems. This paper shows how quantum computers can also use these neighborhood relations. An example of such a local quantum search is evaluated empirically for the satisfiability (SAT) problem and shown to be particularly effective for highly constrained instances. For problems with an intermediate number of constraints, it is somewhat less effective at exploiting problem structure than incremental quantum methods, in spite of the much smaller search space used by the local method.
Partitioning SAT Instances for Distributed Solving
"... Abstract. In this paper we study the problem of solving hard propositional satisfiability problem (SAT) instances in a computing grid or cloud, where run times and communication between parallel running computations are limited. We study analytically an approach where the instance is partitioned ite ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
(Show Context)
Abstract. In this paper we study the problem of solving hard propositional satisfiability (SAT) instances in a computing grid or cloud, where run times and communication between parallel running computations are limited. We study analytically an approach where the instance is partitioned iteratively into a tree of subproblems and each node in the tree is solved in parallel. We present new methods which combine clause learning and lookahead to construct partitions, evaluate their efficiency experimentally, and finally demonstrate the power of the approach in a real grid environment by solving several instances that were not solved in a SAT solver competition.
Optimal Schedules for Parallelizing Anytime Algorithms: The Case of Independent Processes
In Proceedings of the Eighteenth National Conference on Artificial Intelligence, 2002
"... The performance of anytime algorithms having a nondeterministic nature can be improved by solving simultaneously several instances of the algorithmproblem pairs. These pairs may include different instances of a problem (like starting from a different initial state), different algorithms (if several ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
The performance of anytime algorithms having a nondeterministic nature can be improved by solving several instances of the algorithm–problem pairs simultaneously. These pairs may include different instances of a problem (such as starting from a different initial state), different algorithms (if several alternatives exist), or several instances of the same algorithm (for nondeterministic algorithms).
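Why running independent copies of a nondeterministic algorithm helps can be sketched as follows (illustrative only; the log-normal runtime model is an assumption of this sketch, not a claim of the paper):

```python
import random

random.seed(1)

def run_time():
    # Stand-in for one nondeterministic run's completion time
    # (hypothetical log-normal variability).
    return random.lognormvariate(0.0, 2.0)

def parallel_min(k):
    # k independent processes run the same problem; the first to
    # finish determines the completion time.
    return min(run_time() for _ in range(k))

trials = 5000
avgs = {}
for k in (1, 2, 8):
    avgs[k] = sum(parallel_min(k) for _ in range(trials)) / trials
    print(k, round(avgs[k], 3))
```

The expected completion time is the mean of the minimum of k draws, which shrinks as k grows whenever the runtime distribution has substantial variance; scheduling work across such independent processes is the setting the paper optimizes.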
Strategies for solving SAT in grids by randomized search
In: 9th International Conference on Artificial Intelligence and Symbolic Computation (AISC), 2008
"... Grid computing offers a promising approach to solving challenging computational problems in an environment consisting of a large number of easily accessible resources. In this paper we develop strategies for solving collections of hard instances of the propositional satisfiability problem (SAT) wit ..."
Abstract

Cited by 6 (4 self)
 Add to MetaCart
(Show Context)
Grid computing offers a promising approach to solving challenging computational problems in an environment consisting of a large number of easily accessible resources. In this paper we develop strategies for solving collections of hard instances of the propositional satisfiability problem (SAT) with a randomized SAT solver run in a grid. We study alternative strategies by using a simulation framework which is composed of (i) a grid model capturing the communication and management delays, and (ii) runtime distributions of a randomized solver, obtained by running a state-of-the-art SAT solver on a collection of hard instances. The results are experimentally validated in a production-level grid. When solving a single hard SAT instance, the results show that in practice only a relatively small amount of parallelism can be efficiently used; the speedup obtained by increasing parallelism thereafter is negligible. This observation leads to a novel strategy of using the grid to solve collections of hard instances. Instead of solving instances one by one, the strategy aims at decreasing the overall solution time by applying an alternating distribution schedule.
Randomization and Heavy-Tailed Behavior in Proof Planning
2000
"... Proof planning is the application of Artificial Intelligence planning techniques to prove mathematical theorems. While exploring the domain of the residue classes over the integers with the multistrategy proof planner Multi we found a class of hard problems on which proof planning showed a remarkab ..."
Abstract

Cited by 5 (4 self)
 Add to MetaCart
Proof planning is the application of Artificial Intelligence planning techniques to prove mathematical theorems. While exploring the domain of the residue classes over the integers with the multi-strategy proof planner Multi we found a class of hard problems on which proof planning showed a remarkably high degree of variance. On problems of the same complexity we either succeeded very quickly with short proofs, or the proof planning process took significantly longer and resulted in a large proof. Recent work in Artificial Intelligence points out that the unpredictability in the running time of heuristic search procedures can often be explained by the phenomenon of heavy-tailed cost distributions. Because of the nonstandard nature of these heavy-tailed cost distributions, the controlled introduction of randomization into the search procedures and quick restarts of the randomized procedure can eliminate heavy-tailed behavior and take advantage of short runs. In this report,...
Ensemble-based prediction of SAT search behaviour
2001
"... Before attempting to solve an instance of the satisfiability problem, what can we ascertain about the instance at hand and how can we put that information to use when selecting and tuning a SAT algorithm to solve the instance? We argue for an ensemblebased approach and describe an illustrative exam ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Before attempting to solve an instance of the satisfiability problem, what can we ascertain about the instance at hand, and how can we put that information to use when selecting and tuning a SAT algorithm to solve the instance? We argue for an ensemble-based approach and describe an illustrative example of how such a methodology can be applied to determine optimal restart cutoff points for systematic, backtracking search procedures for SAT. We discuss the methodology and indicate how it can be applied to evaluate such strategies as restarts, algorithm comparison, randomization, and portfolios of algorithms.