Results 1–10 of 33
AND/OR branch-and-bound search for combinatorial optimization in graphical models
, 2008
Abstract
Cited by 39 (19 self)
We introduce a new generation of depth-first Branch-and-Bound algorithms that explore the AND/OR search tree using static and dynamic variable orderings for solving general constraint optimization problems. The virtue of the AND/OR representation of the search space is that its size may be far smaller than that of a traditional OR representation, which can translate into significant time savings for search algorithms. The focus of this paper is on linear-space search, which explores the AND/OR search tree rather than the search graph and therefore makes no attempt to cache information. We investigate the power of the mini-bucket heuristics within the AND/OR search space, in both static and dynamic setups. We focus on the two most common optimization problems in graphical models: finding the Most Probable Explanation (MPE) in Bayesian networks and solving Weighted CSPs (WCSP). In extensive empirical evaluations we demonstrate that the new AND/OR Branch-and-Bound approach improves considerably over the traditional OR search strategy, and we show how various variable ordering schemes impact the performance of the AND/OR search scheme.
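The pruning idea in this abstract, exploring assignments depth-first and cutting off any branch whose partial cost already meets the best known bound, can be sketched for a plain OR search space (the AND/OR decomposition and the mini-bucket heuristics are beyond a short example). This is a hypothetical illustration, not the authors' implementation; the names `branch_and_bound` and `cost_fns` are invented:

```python
def branch_and_bound(variables, domains, cost_fns):
    """Depth-first branch-and-bound minimizing total cost over a weighted CSP.

    cost_fns: list of (scope, fn) pairs; fn maps a tuple of values
    (in scope order) to a non-negative cost.
    """
    best_cost = float("inf")
    best_assignment = None

    def cost_so_far(assignment):
        # Sum the cost functions whose scope is fully instantiated.
        total = 0.0
        for scope, fn in cost_fns:
            if all(v in assignment for v in scope):
                total += fn(tuple(assignment[v] for v in scope))
        return total

    def dfs(assignment, depth):
        nonlocal best_cost, best_assignment
        # Prune: with non-negative costs, the cost of the current partial
        # assignment is a lower bound on the cost of any completion.
        if cost_so_far(assignment) >= best_cost:
            return
        if depth == len(variables):
            best_cost, best_assignment = cost_so_far(assignment), dict(assignment)
            return
        var = variables[depth]  # static variable ordering
        for value in domains[var]:
            assignment[var] = value
            dfs(assignment, depth + 1)
            del assignment[var]

    dfs({}, 0)
    return best_cost, best_assignment
```

Here the lower bound is just the accumulated cost; heuristics such as mini-buckets supply much tighter bounds and therefore much stronger pruning.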
Random constraint satisfaction: easy generation of hard (satisfiable) instances
 Artificial Intelligence
Partition Search for Non-binary Constraint Satisfaction
 Information Sciences
, 2007
Abstract
Cited by 19 (0 self)
Previous algorithms for unrestricted constraint satisfaction use reduction search, which inferentially removes values from domains in order to prune the backtrack search tree. This paper introduces partition search, which uses an efficient join mechanism instead of removing values from domains. Analytical prediction of quantitative performance of partition search appears to be intractable and evaluation therefore has to be by experimental comparison with reduction search algorithms that represent the state of the art. Instead of working only with available reduction search algorithms, this paper introduces enhancements such as semijoin reduction preprocessing using Bloom filtering.
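The semijoin-reduction preprocessing mentioned above can be sketched with a small Bloom filter: hash the join column of one relation into a bit set, then keep only the tuples of the other relation that might match. This is a generic illustration under assumed names (`BloomFilter`, `semijoin_reduce`), not the paper's algorithm:

```python
import hashlib

class BloomFilter:
    """A tiny Bloom filter over an integer bitmask."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _positions(self, item):
        # Derive num_hashes independent bit positions from SHA-256.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # May return a false positive, never a false negative.
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

def semijoin_reduce(relation, other, column, other_column):
    """Keep only tuples of `relation` whose value in `column` might join
    with some tuple of `other` on `other_column` (no false dismissals)."""
    bf = BloomFilter()
    for row in other:
        bf.add(row[other_column])
    return [row for row in relation if bf.might_contain(row[column])]
```

Because Bloom filters admit false positives but never false negatives, the reduction is safe: a surviving tuple may still fail the real join, but no joining tuple is lost.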
Backtracking Search Algorithms
, 2006
Abstract
Cited by 19 (2 self)
There are three main algorithmic techniques for solving constraint satisfaction problems: backtracking search, local search, and dynamic programming. In this chapter, I survey backtracking search algorithms. Algorithms based on dynamic programming [15]—sometimes referred to in the literature as variable elimination, synthesis, or inference algorithms—are the topic of Chapter 7. Local or stochastic search algorithms are the topic of Chapter 5. An algorithm for solving a constraint satisfaction problem (CSP) can be either complete or incomplete. Complete, or systematic, algorithms come with a guarantee that a solution will be found if one exists, and they can be used to show that a CSP does not have a solution and to find a provably optimal solution. Backtracking search algorithms and dynamic programming algorithms are, in general, examples of complete algorithms. Incomplete, or non-systematic, algorithms cannot be used to show that a CSP does not have a solution or to find a provably optimal solution. However, such algorithms are often effective at finding a solution if one exists and can be used to find an approximation to an optimal solution. Local or stochastic search algorithms are examples of incomplete algorithms. Of the two …
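A minimal instance of the chapter's subject, chronological backtracking for a binary CSP, might look as follows; the interface (`variables`, `domains`, and `constraints` as predicates on pairs) is an assumption for illustration:

```python
def backtracking_search(variables, domains, constraints):
    """Chronological backtracking for a binary CSP.

    constraints: dict mapping an ordered pair (x, y) to a predicate
    on the values of x and y.
    """
    def consistent(var, value, assignment):
        for other, other_value in assignment.items():
            pred = constraints.get((var, other)) or constraints.get((other, var))
            if pred is None:
                continue  # no constraint between these two variables
            args = (value, other_value) if (var, other) in constraints else (other_value, value)
            if not pred(*args):
                return False
        return True

    def backtrack(assignment):
        if len(assignment) == len(variables):
            return dict(assignment)  # every variable assigned: a solution
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value, assignment):
                assignment[var] = value
                result = backtrack(assignment)
                if result is not None:
                    return result
                del assignment[var]  # undo and try the next value
        return None  # domain exhausted: backtrack in the caller

    return backtrack({})
```

On an unsatisfiable instance the function returns `None`, which is exactly the completeness guarantee the survey attributes to systematic algorithms.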
Nogood recording from restarts
 In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI’2007)
, 2007
Abstract
Cited by 18 (3 self)
In this paper, nogood recording is investigated within the randomization and restart framework. Our goal is to prevent the same situations from recurring from one run to the next. More precisely, nogoods are recorded when the current cutoff value is reached, i.e., before restarting the search algorithm. Such a set of nogoods is extracted from the last branch of the current search tree. Interestingly, the number of nogoods recorded before each new run is bounded by the length of the last branch of the search tree. As a consequence, the total number of recorded nogoods is polynomial in the number of restarts. Experiments over a wide range of CSP instances demonstrate the effectiveness of our approach.
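The extraction step described in the abstract, recording one nogood per negative decision along the last branch before a restart, can be sketched as follows. This is a simplification of the paper's nld-nogoods, and the decision encoding is an assumption:

```python
def nogoods_from_last_branch(branch):
    """Extract nogoods from the rightmost branch of a search tree at restart.

    branch: list of decisions (var, value, positive), ordered from the root
    to the leaf; positive=True encodes an assignment x = v, positive=False
    encodes a refutation x != v made after the subtree under x = v failed.
    Each returned nogood is a tuple of assignments that must not all hold.
    """
    nogoods = []
    positives = []
    for var, value, positive in branch:
        if positive:
            positives.append((var, value))
        else:
            # The positive decisions so far, extended with (var = value),
            # were exhaustively refuted before the cutoff: record a nogood.
            nogoods.append(tuple(positives) + ((var, value),))
    return nogoods
```

The returned list has at most one entry per decision on the branch, matching the bound on the number of recorded nogoods stated in the abstract.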
Description and representation of the problems selected for the first international constraint satisfaction solver competition
 In Proceedings of CPAI’05 workshop held with CP’05
, 2005
Abstract
Cited by 13 (2 self)
Abstract. In this paper, we present the problems that have been selected for the first international competition of CSP solvers. We first give a succinct description of each problem, and then present the two formats that have been used to represent the CSP instances.
Last conflict based reasoning
 In Proceedings of ECAI-2006
, 2006
Abstract
Cited by 9 (1 self)
Abstract. In this paper, we propose an approach to guide search toward the sources of conflicts. The principle is the following: the last variable involved in the last conflict is selected with priority, as long as the constraint network cannot be made consistent, in order to find the (most recent) culprit variable, following the current partial instantiation from the leaf to the root of the search tree. In other words, the variable ordering heuristic is violated until a backtrack to the culprit variable occurs and a singleton-consistent value is found. Consequently, this way of reasoning can easily be grafted onto many search algorithms and represents an original way to avoid thrashing. Experiments over a wide range of benchmarks demonstrate the effectiveness of this approach.
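The mechanism is naturally expressed as a wrapper around any base variable-ordering heuristic: keep returning the last conflicting variable until it is successfully assigned again. A hypothetical sketch follows; the solver callbacks `on_conflict` and `on_assigned` are assumed, not part of the paper:

```python
class LastConflictOrdering:
    """Wraps a base variable-ordering heuristic: after a conflict, keep
    selecting the variable involved in that conflict until it receives a
    consistent value again, then fall back to the base heuristic."""

    def __init__(self, base_heuristic):
        self.base_heuristic = base_heuristic
        self.last_conflict_var = None

    def on_conflict(self, var):
        # Called by the solver when assigning `var` led to a dead end.
        self.last_conflict_var = var

    def on_assigned(self, var):
        # Called when `var` receives a consistent value.
        if var == self.last_conflict_var:
            self.last_conflict_var = None

    def select(self, unassigned):
        if self.last_conflict_var in unassigned:
            return self.last_conflict_var  # priority to the conflicting variable
        return self.base_heuristic(unassigned)
```

Because the wrapper only intercepts variable selection, it can be grafted onto essentially any backtracking solver, which is the portability claim the abstract makes.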
Random Subsets Support Learning a Mixture of Heuristics
 20th International FLAIRS Conference (FLAIRS-07), Key
, 2007
Abstract
Cited by 8 (4 self)
Problem solvers, both human and machine, have at their disposal many heuristics that may support effective search. The efficacy of these heuristics, however, varies with the problem class, and their mutual interactions may not be well understood. The long-term goal of our work is to learn how to select appropriately from among a large body of heuristics, and how to combine them into a mixture that works well on a specific class of problems. The principal result reported here is that randomly chosen subsets of heuristics can improve the identification of an appropriate mixture of heuristics. A self-supervised learner uses this method here to learn to solve constraint satisfaction problems quickly and effectively.
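The core loop, sampling random subsets of heuristics and keeping the best-scoring mixture, might be sketched like this; `evaluate` stands in for the paper's self-supervised training procedure and, like the function name itself, is an assumption:

```python
import random

def learn_heuristic_mixture(heuristics, evaluate, num_subsets=20, subset_size=3, seed=0):
    """Sample random subsets of heuristics and keep the subset whose
    mixture performs best under `evaluate` (lower score is better, e.g.
    mean search cost over a training set of problems)."""
    rng = random.Random(seed)
    best_subset, best_score = None, float("inf")
    for _ in range(num_subsets):
        subset = rng.sample(heuristics, min(subset_size, len(heuristics)))
        score = evaluate(subset)
        if score < best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score
```

Restricting attention to small random subsets keeps each evaluation cheap while still exposing interactions between heuristics that a one-at-a-time assessment would miss.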
Reasoning from Last Conflict(s) in Constraint Programming
, 2009
Abstract
Cited by 7 (1 self)
Constraint programming is a popular paradigm for dealing with combinatorial problems in artificial intelligence. Backtracking algorithms, applied to constraint networks, are commonly used but suffer from thrashing, i.e., repeatedly exploring similar subtrees during search. An extensive literature has been devoted to preventing thrashing, often classified into lookahead (constraint propagation and search heuristics) and lookback (intelligent backtracking and learning) approaches. In this paper, we present an original lookahead approach that guides backtrack search toward sources of conflicts and, as a side effect, obtains a behavior similar to a backjumping technique. The principle is the following: after each conflict, the last assigned variable is selected with priority, so long as the constraint network cannot be made consistent. This allows us to find, following the current partial instantiation from the leaf to the root of the search tree, the culprit decision that prevents the last variable from being assigned. This way of reasoning can easily be grafted onto many variations of backtracking algorithms and represents an original mechanism to reduce thrashing. Moreover, we show that this approach can be generalized so as to collect a (small) set of incompatible variables that are together responsible for the last conflict. Experiments over a wide range of benchmarks demonstrate the effectiveness of this approach in both constraint satisfaction and automated artificial intelligence planning.
Learning from failure in constraint satisfaction search
, 2006
Abstract
Cited by 7 (4 self)
Much work has been done on learning from failure in search to boost the solving of combinatorial problems, such as clause learning in Boolean satisfiability (SAT), nogood and explanation-based learning, and constraint weighting in constraint satisfaction problems (CSPs). Many of the top solvers in SAT use clause learning to good effect. A similar approach (nogood learning) has not had as large an impact in CSPs. Constraint weighting is a less fine-grained approach where the information learnt gives an approximation as to which variables may be the sources of greatest contention. In this paper we present a method for learning from search using restarts, in order to identify these critical variables in a given constraint satisfaction problem prior to solving. Our method is based on the conflict-directed heuristic (weighted-degree heuristic) introduced by Boussemart et al. and is aimed at producing a better-informed version of the heuristic by gathering information through restarting and probing of the search space prior to solving, while minimising the overhead of these restarts/probes. We show that random probing of the search space can boost the heuristic's power by improving early decisions in search. We also provide an in-depth analysis of the effects of constraint weighting.
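The weighted-degree idea, incrementing a constraint's weight whenever it causes a domain wipe-out and preferring variables with small domains attached to heavily weighted constraints, can be sketched as follows. This is a dom/wdeg-style score under assumed solver hooks, and the probing phase that warms the weights before solving is omitted:

```python
class WeightedDegree:
    """Conflict-directed weights in the style of the dom/wdeg heuristic:
    each constraint starts with weight 1 and is incremented whenever it
    causes a domain wipe-out during propagation."""

    def __init__(self, constraints):
        self.weight = {c: 1 for c in constraints}

    def on_wipeout(self, constraint):
        # Called by the propagation engine when `constraint` empties a domain.
        self.weight[constraint] += 1

    def score(self, var, domain_size, constraints_of):
        # Smaller is better: small domain, heavily weighted constraints.
        wdeg = sum(self.weight[c] for c in constraints_of[var])
        return domain_size / wdeg
```

Probing amounts to running several short, randomized restarts with this bookkeeping enabled before the real search, so the first decisions of the final run already benefit from the accumulated weights.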