Algorithms for the Satisfiability (SAT) Problem: A Survey
- DIMACS Series in Discrete Mathematics and Theoretical Computer Science
, 1996
Abstract - Cited by 145 (3 self)
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, databases, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give a performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the sat...
Non-Failure Analysis for Logic Programs
- ACM Transactions on Programming Languages and Systems
, 1997
Abstract - Cited by 130 (14 self)
We provide a method whereby, given mode and (upper approximation) type information, we can detect procedures and goals that can be guaranteed to not fail (i.e., to produce at least one solution or not terminate). The technique is based on an intuitively very simple notion, that of a (set of) tests "covering" the type of a set of variables. We show that the problem of determining a covering is undecidable in general, and give decidability and complexity results for the Herbrand and linear arithmetic constraint systems. We give sound algorithms for determining covering that are precise and efficient in practice. Based on this information, we show how to identify goals and procedures that can be guaranteed to not fail at runtime. Applications of such non-failure information include programming error detection, program transformations and parallel execution optimization, avoiding speculative parallelism and estimating lower bounds on the computational costs of goals, which can be used for ...
Branch and Bound Algorithm Selection by Performance Prediction
- In AAAI
, 1998
Abstract - Cited by 37 (1 self)
We propose a method called Selection by Performance Prediction (SPP) which allows one, when faced with a particular problem instance, to select a Branch and Bound algorithm from among several promising ones. This method is based on Knuth's sampling method, which estimates the efficiency of a backtrack program on a particular instance by iteratively generating random paths in the search tree. We present a simple adaptation of this estimator in the field of combinatorial optimization problems, more precisely for an extension of the maximal constraint satisfaction framework. Experiments both on random and strongly structured instances show that, in most cases, the proposed method is able to select, from a candidate list, the best algorithm for solving a given instance.

Introduction

The Branch and Bound search is a well-known algorithmic scheme, widely used for solving combinatorial optimization problems. Many specific algorithms can be derived from this general scheme. ...
Estimating Search Tree Size
- In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI ’06
, 2006
Abstract - Cited by 26 (2 self)
We propose two new online methods for estimating the size of a backtracking search tree. The first method is based on a weighted sample of the branches visited by chronological backtracking. The second is a recursive method based on assuming that the unexplored part of the search tree will be similar to the part we have so far explored. We compare these methods against an old method due to Knuth based on random probing. We show that these methods can reliably estimate the size of search trees explored by both optimization and decision procedures. We also demonstrate that these methods for estimating search tree size can be used to select the algorithm likely to perform best on a particular problem instance.
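Knuth's random-probing estimator, which several of these papers build on, is simple enough to sketch. The following is a minimal illustration, not any paper's implementation; the function names and the tree encoding (a node as a path tuple, a `children` callback) are hypothetical choices for the sketch. A probe walks one random root-to-leaf path and accumulates the product of branching factors, which is an unbiased estimate of the total node count.

```python
import random

def knuth_probe(children, root=(), rng=random):
    """One random root-to-leaf probe (Knuth, 1975). Returns an
    unbiased estimate of the number of nodes in the tree."""
    estimate, weight, node = 1, 1, root
    while True:
        kids = children(node)
        if not kids:                 # reached a leaf: probe ends
            return estimate
        weight *= len(kids)          # this path's inverse probability grows
        estimate += weight           # expected node count at this depth
        node = rng.choice(kids)      # descend along a uniform random child

def estimate_tree_size(children, root=(), probes=1000, rng=random):
    """Average many probes; the mean converges to the true node count."""
    return sum(knuth_probe(children, root, rng) for _ in range(probes)) / probes
```

On a perfectly uniform tree every probe returns the exact size; the variance (and hence the difficulty these papers address) comes from irregular trees, where rare deep branches are seldom sampled.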
Early estimates of the size of branch-and-bound trees
- INFORMS Journal on Computing
, 2006
Abstract - Cited by 15 (0 self)
This paper aims to show that the time needed to solve mixed integer programming problems by branch and bound can be roughly predicted early in the solution process. We construct a procedure that can be implemented as part of an MIP solver. It is based on analyzing the partial tree resulting from running the algorithm for a short period of time, and predicting the shape of the whole tree. The procedure is tested on instances from the literature. This work was inspired by the practical applicability of such a result.
Satometer: How Much Have We Searched?
- In Design Automation Conf., 737–742. IEEE
, 2002
Abstract - Cited by 12 (0 self)
We introduce Satometer, a tool that can be used to estimate the percentage of the search space actually explored by a backtrack SAT solver. Satometer calculates a normalized minterm count for those portions of the search space identified by conflicts. The computation is carried out using a zero-suppressed BDD data structure and has adjustable accuracy. The data provided by Satometer can help diagnose the performance of SAT solvers and can shed light on the nature of a SAT instance.
Expected Solution Quality
- In Proceedings of the 14th International Joint Conference on Artificial Intelligence
, 1995
Abstract - Cited by 9 (2 self)
This paper presents the Expected Solution Quality (esq) method for statistically characterizing scheduling problems and the performance of schedulers. The esq method is demonstrated by applying it to a practical telescope scheduling problem. The method addresses the important and difficult issue of how to meaningfully evaluate the performance of a scheduler on a constrained optimization problem for which an optimal solution is not known. At the heart of esq is a Monte Carlo algorithm that estimates a problem's probability density function with respect to solution quality. This "quality density function" provides a useful characterization of a scheduling problem, and it also provides a background against which scheduler performance can be meaningfully evaluated. esq provides a unitless measure that combines both schedule quality and the amount of time to generate a schedule.
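The Monte Carlo core of the esq idea can be sketched in a few lines. This is a generic illustration under stated assumptions, not the paper's algorithm: `sample_solution` (draws one random feasible solution) and `quality` (scores it) are hypothetical callbacks. Sampling many random solutions yields an empirical quality distribution, against which a scheduler's output can be ranked.

```python
import bisect
import random

def quality_density(sample_solution, quality, n=5000, seed=0):
    """Monte Carlo estimate of the solution-quality distribution:
    draw n random feasible solutions and record their qualities, sorted."""
    rng = random.Random(seed)
    return sorted(quality(sample_solution(rng)) for _ in range(n))

def percentile_of(q, qualities):
    """Fraction of sampled solutions with quality <= q, i.e. how a
    scheduler's result q ranks against random solutions."""
    return bisect.bisect_right(qualities, q) / len(qualities)
```

A scheduler whose solution sits at, say, the 0.999 percentile of this empirical distribution is doing far better than random sampling would, even when the true optimum is unknown.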
Domain Independent Heuristics in Hybrid Algorithms for CSPs
, 1994
Abstract - Cited by 4 (0 self)
Over the years a large number of algorithms have been discovered to solve instances of CSP problems. In a recent paper Prosser [9] proposed a new approach to these algorithms by splitting them into groups with identical forward (Backtracking, Backmarking, Forward Checking) and backward (Backtracking, Backjumping, Conflict-Directed Backjumping) moves. By combining the forward move of an algorithm from the first group with the backward move of an algorithm from the second group he was able to develop four new hybrid algorithms: Backmarking with Backjumping (BMJ), Backmarking with Conflict-Directed Backjumping (BMCBJ), Forward Checking with Backjumping (FC-BJ) and Forward Checking with Conflict-Directed Backjumping (FC-CBJ). Variable reordering heuristics have been suggested by, among others, Haralick [6] and Purdom [11, 14] to improve the standard CSP algorithms. They obtained both analytical and empirical results about the performance of these heuristics in their research. In this thes...
Predicting the Size of Depth-First Branch and Bound Search Trees
, 2013
Abstract - Cited by 3 (2 self)
This paper provides algorithms for predicting the size of the Expanded Search Tree (EST) of Depth-first Branch and Bound algorithms (DFBnB) for optimization tasks. The prediction algorithm is implemented and evaluated in the context of solving combinatorial optimization problems over graphical models such as Bayesian and Markov networks. Our methods extend to DFBnB the Knuth-Chen schemes originally designed and applied for predicting the EST size of backtracking search algorithms. Our empirical results demonstrate good predictions that are superior to competing schemes.
Learning Good Variable Orderings
, 2003
Abstract - Cited by 2 (1 self)
Variable ordering heuristics are used to reduce the cost of searching for a solution to a constraint satisfaction problem. On real problems that have non-binary and non-uniform constraints it is harder to make the optimal choice of variable ordering because surprisingly little is known about when and why variable ordering heuristics perform well. In an ...