Results 1–8 of 8
Satisfiability Solvers, 2008
"... The past few years have seen an enormous progress in the performance of Boolean satisfiability (SAT) solvers. Despite the worstcase exponential run time of all known algorithms, satisfiability solvers are increasingly leaving their mark as a generalpurpose tool in areas as diverse as software and h ..."
Abstract

Cited by 48 (0 self)
The past few years have seen enormous progress in the performance of Boolean satisfiability (SAT) solvers. Despite the worst-case exponential run time of all known algorithms, satisfiability solvers are increasingly leaving their mark as a general-purpose tool in areas as diverse as software and hardware verification [29–31, 228], automatic test pattern generation [138, 221], planning [129, 197], scheduling [103], and even challenging problems from algebra [238]. Annual SAT competitions have led to the development of dozens of clever implementations of such solvers [e.g. 13,
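The core of many of the solvers this survey covers is the DPLL procedure: simplify the formula under the current partial assignment, propagate forced (unit) literals, and branch on a remaining variable. A minimal recursive sketch, assuming the usual DIMACS-style encoding (a clause is a list of nonzero integers, a positive literal `v` meaning variable `v` is true):

```python
def dpll(clauses, assignment=None):
    """Return a satisfying {var: bool} assignment, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied, drop it
        remaining = [l for l in clause if abs(l) not in assignment]
        if not remaining:
            return None  # conflict: every literal in this clause is false
        simplified.append(remaining)
    if not simplified:
        return assignment  # every clause satisfied
    # unit propagation: a unit clause forces its literal's value
    unit = next((c for c in simplified if len(c) == 1), None)
    lit = unit[0] if unit else simplified[0][0]
    branches = [lit > 0] if unit else [True, False]
    for value in branches:
        result = dpll(clauses, {**assignment, abs(lit): value})
        if result is not None:
            return result
    return None
```

Production solvers add much more on top of this skeleton (clause learning, watched literals, branching heuristics, restarts), but the recursive structure is the same.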
Randomness and Structure
"... This chapter covers research in constraint programming (CP) and related areas involving random problems. Such research has played a significant role in the development of more efficient and effective algorithms, as well as in understanding the source of hardness in solving combinatorially challengin ..."
Abstract

Cited by 8 (2 self)
This chapter covers research in constraint programming (CP) and related areas involving random problems. Such research has played a significant role in the development of more efficient and effective algorithms, as well as in understanding the source of hardness in solving combinatorially challenging problems. Random problems have proved useful in a number of different ways. Firstly, they provide a relatively "unbiased" sample for benchmarking algorithms. In the early days of CP, many algorithms were compared using only a limited sample of problem instances. In some cases, this may have led to premature conclusions. Random problems, by comparison, permit algorithms to be tested on statistically significant samples of hard problems. However, as we outline in the rest of this chapter, there remain pitfalls awaiting the unwary in their use. For example, random problems may not contain structures found in many real-world problems, and these structures can make problems much easier or much harder to solve. As a second example, the process of generating random problems may itself be "flawed", giving problem instances which are not, at least asymptotically, combinatorially hard. Random problems have also provided insight into problem hardness. For example, the influential paper by Cheeseman, Kanefsky and Taylor [12] highlighted the computational difficulty of problems which are on the "knife-edge" between satisfiability and unsatisfiability [84]. There is even hope within certain quarters that random problems may be one of the links in resolving the P=NP question. Finally, insight into problem hardness provided by random problems has helped inform the design of better algorithms and heuristics. For example, the design of a number of branching heuristics for the Davis–Putnam–Logemann–Loveland (DPLL) satisfiability procedure has been heavily influenced by the hardness of random problems.
As a second example, the rapid randomized restart (RRR) strategy [45, 44] was motivated by the discovery of heavy-tailed runtime distributions in backtracking-style search procedures on random quasigroup completion problems.
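Generating the random instances this chapter discusses is straightforward. A sketch of a uniform random 3-SAT generator, where the default clause-to-variable ratio of 4.26 (near the empirically observed satisfiability threshold, where the hardest instances cluster) is the only assumed parameter:

```python
import random

def random_3sat(n_vars, ratio=4.26, seed=None):
    """Generate a uniform random 3-SAT instance with about ratio * n_vars
    clauses; ratio ~ 4.26 puts the instance near the sat/unsat knife-edge."""
    rng = random.Random(seed)
    n_clauses = round(n_vars * ratio)
    clauses = []
    for _ in range(n_clauses):
        picked = rng.sample(range(1, n_vars + 1), 3)  # three distinct variables
        # negate each literal with probability 1/2
        clauses.append([v if rng.random() < 0.5 else -v for v in picked])
    return clauses
```

Varying `ratio` sweeps an instance from the underconstrained (almost surely satisfiable, easy) regime to the overconstrained (almost surely unsatisfiable) regime, with the hardness peak in between.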
Quantum walk speedup of backtracking algorithms, 2015
"... We describe a general method to obtain quantum speedups of classical algorithms which are based on the technique of backtracking, a standard approach for solving constraint satisfaction problems (CSPs). Backtracking algorithms explore a tree whose vertices are partial solutions to a CSP in an attemp ..."
Abstract
We describe a general method to obtain quantum speedups of classical algorithms which are based on the technique of backtracking, a standard approach for solving constraint satisfaction problems (CSPs). Backtracking algorithms explore a tree whose vertices are partial solutions to a CSP in an attempt to find a complete solution. Assume there is a classical backtracking algorithm which finds a solution to a CSP on n variables, or outputs that none exists, and whose corresponding tree contains T vertices, each vertex corresponding to a test of a partial solution. Then we show that there is a bounded-error quantum algorithm which completes the same task using O(√T n^{3/2} log n) tests. In particular, this quantum algorithm can be used to speed up the DPLL algorithm, which is the basis of many of the most efficient SAT solvers used in practice. The quantum algorithm is based on the use of a quantum walk algorithm of Belovs to search in the backtracking tree. We also discuss how, for certain distributions on the inputs, the algorithm can lead to an average-case exponential speedup.
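The quantity T in the bound above is the number of partial solutions the classical search tests. A classical sketch that makes T explicit, using 4-queens as a stand-in CSP (the `queens_ok` consistency check and the instance are illustrative choices, not from the paper):

```python
def backtrack(n, domain, consistent, partial=()):
    """Depth-first backtracking; returns (solution_or_None, T) where T is the
    number of tree vertices visited, i.e. partial-solution tests performed."""
    visited = 1  # this partial solution is one vertex of the tree
    if not consistent(partial):
        return None, visited  # prune: no extension can succeed
    if len(partial) == n:
        return partial, visited
    for value in domain:
        solution, sub = backtrack(n, domain, consistent, partial + (value,))
        visited += sub
        if solution is not None:
            return solution, visited
    return None, visited

def queens_ok(rows):
    # no two queens share a row or a diagonal (the column is the tuple index)
    return all(rows[i] != rows[j] and abs(rows[i] - rows[j]) != j - i
               for i in range(len(rows)) for j in range(i + 1, len(rows)))

solution, T = backtrack(4, range(4), queens_ok)
```

The quantum walk replaces this depth-first traversal of the T-vertex tree with a search using only about √T (times poly(n)) tests.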
Complexity of different ILP models of the frequency assignment problem, 2012
"... The frequency assignment problem (FAP) arises in wireless communication networks, such as cellular phone communication systems, television broadcasting, WLANs, and military communication systems. In all these applications, the task is to assign frequencies to a set of transmitters, subject to interf ..."
Abstract
The frequency assignment problem (FAP) arises in wireless communication networks, such as cellular phone communication systems, television broadcasting, WLANs, and military communication systems. In all these applications, the task is to assign frequencies to a set of transmitters, subject to interference constraints. The exact form of the constraints and the objective function vary according to the specific application. Integer linear programming (ILP) is widely used to solve the different flavors of the FAP. For most FAP versions, there is more than one natural ILP formulation, e.g. using a large number of binary variables or a smaller number of integer variables. A common experience with these solution techniques, as well as with NP-hard optimization problems in general, is a high variance in problem complexity. Some problem instances are tremendously hard to solve optimally. There are also examples of relatively large problem instances that are nevertheless quite easy to solve. In general, it is hard to predict how long it will take to solve a given problem instance.
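The integer-variable formulation mentioned above has one frequency variable per transmitter and pairwise separation constraints. A toy brute-force version of that model, on an entirely hypothetical 3-transmitter instance (real instances are solved with an ILP solver, not by enumeration):

```python
from itertools import product

def solve_fap(n_tx, n_freq, separation):
    """Brute-force the integer-variable FAP model: one frequency variable per
    transmitter, constraints |f[i] - f[j]| >= separation[(i, j)], and the
    objective of minimising the largest frequency used (the span)."""
    best = None
    for f in product(range(n_freq), repeat=n_tx):
        feasible = all(abs(f[i] - f[j]) >= d for (i, j), d in separation.items())
        if feasible and (best is None or max(f) < max(best)):
            best = f
    return best

# Hypothetical instance: close transmitter pairs need separation 2,
# the distant pair only 1.
separation = {(0, 1): 2, (1, 2): 2, (0, 2): 1}
```

The binary-variable alternative would instead use indicators x[t][f] with one-assignment constraints per transmitter and a forbidden-pair constraint per interfering (frequency, frequency) combination; which formulation the solver handles better is exactly the kind of complexity question the paper studies.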
Accelerating SAT solving with best-first search, 2014
"... Solvers for Boolean satisfiability (SAT), like other algorithms for NPcomplete problems, tend to have a heavytailed runtime distribution. Successful SAT solvers make use of frequent restarts to mitigate this problem by abandoning unfruitful parts of the search space after some time. Although frequ ..."
Abstract
Solvers for Boolean satisfiability (SAT), like other algorithms for NP-complete problems, tend to have a heavy-tailed runtime distribution. Successful SAT solvers make use of frequent restarts to mitigate this problem by abandoning unfruitful parts of the search space after some time. Although frequent restarting works fairly well, it is a quite simplistic technique that does nothing explicitly to make the next try better than the previous one. In this paper, we suggest a more sophisticated method: using a best-first search approach to quickly move between different parts of the search space. This way, the search can always focus on the most promising region. We investigate empirically how the performance of frequent restarts, best-first search, and a combination of the two compare to each other. Our findings indicate that the combined method works best, improving 36–43% on the performance of frequent restarts on the benchmark set used.
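The core idea, keeping all open branch points in a priority queue and always expanding the one a heuristic ranks best instead of backtracking depth-first, can be sketched as follows. The clause-counting heuristic and the fixed variable order here are illustrative assumptions, not the paper's actual scoring function:

```python
import heapq

def conflict(clauses, assign):
    # a clause is falsified when every one of its literals is assigned false
    return any(all(abs(l) in assign and assign[abs(l)] != (l > 0) for l in c)
               for c in clauses)

def n_satisfied(assign, clauses):
    # hypothetical heuristic: number of clauses the partial assignment satisfies
    return sum(any(assign.get(abs(l)) == (l > 0) for l in c) for c in clauses)

def best_first_sat(clauses, n_vars):
    """Best-first search over partial assignments of a CNF formula."""
    counter = 0  # insertion tie-breaker so the heap never compares dicts
    frontier = [(0, counter, {})]
    while frontier:
        _, _, assign = heapq.heappop(frontier)  # most promising open node
        if conflict(clauses, assign):
            continue  # abandon this region; the next-best node is tried instead
        if len(assign) == n_vars:
            return assign  # full assignment with no falsified clause
        var = next(v for v in range(1, n_vars + 1) if v not in assign)
        for value in (True, False):
            child = {**assign, var: value}
            counter += 1
            heapq.heappush(frontier, (-n_satisfied(child, clauses), counter, child))
    return None
```

Unlike a restart, which throws the search state away, the frontier lets the solver jump to a different promising region while keeping every open branch available for later.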