Results 11–20 of 74
Partition Search for Nonbinary Constraint Satisfaction
Information Sciences, 2007
Abstract

Cited by 19 (0 self)
Previous algorithms for unrestricted constraint satisfaction use reduction search, which inferentially removes values from domains in order to prune the backtrack search tree. This paper introduces partition search, which uses an efficient join mechanism instead of removing values from domains. Analytical prediction of quantitative performance of partition search appears to be intractable and evaluation therefore has to be by experimental comparison with reduction search algorithms that represent the state of the art. Instead of working only with available reduction search algorithms, this paper introduces enhancements such as semijoin reduction preprocessing using Bloom filtering.
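The semijoin reduction idea mentioned above can be sketched as follows: build a Bloom filter over the join keys of one relation, then discard tuples of the other relation whose keys cannot possibly match. This is a hypothetical illustration, not the paper's implementation; the filter size and hash scheme are arbitrary choices.

```python
import hashlib

class BloomFilter:
    """A tiny Bloom filter over hashable keys (illustrative sizes only)."""
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = bytearray(size)

    def _positions(self, key):
        # derive `hashes` bit positions from salted SHA-256 digests
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def might_contain(self, key):
        return all(self.bits[p] for p in self._positions(key))

def semijoin_reduce(r, s, key):
    """Keep only tuples of r whose join key might appear in s.
    False positives are possible; false negatives are not, so no
    joinable tuple is ever lost."""
    bf = BloomFilter()
    for t in s:
        bf.add(t[key])
    return [t for t in r if bf.might_contain(t[key])]
```

The filter guarantees that every tuple with a matching key survives, so the reduction is safe: at worst, a few non-matching tuples slip through and are eliminated by the join itself.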
Conservative Dual Consistency
In Proceedings of AAAI’07, 2007
Abstract

Cited by 13 (9 self)
Consistencies are properties of Constraint Networks (CNs) that can be exploited in order to make inferences. When a significant amount of such inferences can be performed, CNs are much easier to solve. In this paper, we focus on relation filtering consistencies for binary constraints, i.e. consistencies that make it possible to identify inconsistent pairs of values. We propose a new consistency called Dual Consistency (DC) and relate it to Path Consistency (PC). We show that Conservative DC (CDC, i.e. DC restricted to the relations associated with the constraints of the network) is more powerful, in terms of filtering, than Conservative PC (CPC). Following the approach of McGregor, we introduce an algorithm to establish (strong) CDC with a very low worst-case space complexity. Even though the relative efficiency of this algorithm partly depends on the density of the constraint graph, the experiments we have conducted show that, on many series of CSP instances, CDC is substantially faster than CPC (up to more than one order of magnitude). Besides, we have observed that enforcing CDC in a preprocessing stage can significantly speed up the resolution of hard structured instances.
Heuristics for dynamically adapting propagation
In ECAI 2008
Abstract

Cited by 9 (2 self)
Building adaptive constraint solvers is a major challenge in constraint programming. An important line of research towards this goal is concerned with ways to dynamically adapt the level of local consistency applied during search. A related problem that is receiving a lot of attention is the design of adaptive branching heuristics. The recently proposed adaptive variable ordering heuristics of Boussemart et al. use information derived from domain wipeouts to identify highly active constraints and focus search on hard parts of the problem, resulting in significant savings in search effort. In this paper we show how information about domain wipeouts and value deletions gathered during search can be exploited, not only to perform variable selection, but also to dynamically adapt the level of constraint propagation achieved on the constraints of the problem. First we demonstrate that when an adaptive heuristic is used, value deletions and domain wipeouts caused by individual constraints largely occur in clusters of consecutive or nearby constraint revisions. Based on this observation, we develop a number of simple heuristics that allow us to dynamically switch between enforcing a weak and cheap local consistency and a strong but more expensive one, depending on the activity of individual constraints. As a case study we experiment with binary problems using AC as the weak consistency and maxRPC as the strong one. Results from various domains demonstrate the usefulness of the proposed heuristics.
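One way to realize the switching idea described above is to keep, per constraint, a sliding window of recent revisions that caused deletions or wipeouts, and apply the strong consistency only while that activity is recent. The following sketch is a hypothetical rendering of the clustering observation; the window and threshold values, and the names `record` and `use_strong`, are illustrative, not taken from the paper.

```python
from collections import defaultdict, deque

class PropagationSwitch:
    """Per-constraint heuristic: apply strong propagation (e.g. maxRPC)
    only while the constraint has recently caused deletions or wipeouts;
    otherwise fall back to a cheap consistency (e.g. AC)."""
    def __init__(self, window=10, threshold=2):
        self.window = window            # how far back "recent" reaches
        self.threshold = threshold      # activity needed to go strong
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.revision = 0

    def record(self, constraint_id, caused_deletion):
        """Call after each revision of a constraint."""
        self.revision += 1
        if caused_deletion:
            self.history[constraint_id].append(self.revision)

    def use_strong(self, constraint_id):
        """True if the constraint was active enough in the recent window."""
        recent = [r for r in self.history[constraint_id]
                  if self.revision - r < self.window]
        return len(recent) >= self.threshold
```

Because activity arrives in clusters, a constraint that just pruned values is likely to prune again soon, which is exactly when the stronger (and more expensive) consistency pays off.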
Connecting ABT with arc consistency
In CP
Abstract

Cited by 8 (6 self)
ABT is the reference algorithm for asynchronous distributed constraint satisfaction. When searching, ABT produces nogoods as justifications of deleted values. When such a nogood has an empty left-hand side, the considered value is eliminated unconditionally, once and for all. This value deletion can be propagated using standard arc consistency techniques, producing new deletions in the domains of other variables. This causes substantial reductions in the search effort required to solve a class of problems. We also extend this idea to the propagation of conditional deletions, something already proposed in the past. We provide experimental results that show the benefits of the proposed approach, especially when communication cost is taken into account.
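The propagation of unconditional deletions can be illustrated with a standard AC-3 style loop: once a value is permanently removed, neighboring domains are revised until a fixpoint or a wipeout. A minimal centralized sketch, assuming binary constraints given as sets of allowed value pairs; the distributed ABT machinery itself is abstracted away.

```python
def propagate_deletion(domains, neighbors, allowed, start):
    """AC-3 style propagation after values were removed from `start`'s
    domain. `allowed[x, y]` is the set of permitted (x-value, y-value)
    pairs. Returns False if some domain is wiped out."""
    queue = [(y, start) for y in neighbors[start]]
    while queue:
        x, y = queue.pop()
        # remove x-values that lost all supports in y's domain
        removed = {a for a in domains[x]
                   if not any((a, b) in allowed[x, y] for b in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False            # domain wipeout
            # x changed: re-revise x's other neighbors against x
            queue.extend((z, x) for z in neighbors[x] if z != y)
    return True
```

In the distributed setting, each such removal would be announced to the relevant agents instead of being applied to a shared structure, but the fixpoint reasoning is the same.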
Reasoning from Last Conflict(s) in Constraint Programming
2009
Abstract

Cited by 7 (1 self)
Constraint programming is a popular paradigm for dealing with combinatorial problems in artificial intelligence. Backtracking algorithms, applied to constraint networks, are commonly used but suffer from thrashing, i.e. repeatedly exploring similar subtrees during search. An extensive literature has been devoted to preventing thrashing, often classified into look-ahead (constraint propagation and search heuristics) and look-back (intelligent backtracking and learning) approaches. In this paper, we present an original look-ahead approach that guides backtrack search toward sources of conflicts and, as a side effect, obtains a behavior similar to a backjumping technique. The principle is the following: after each conflict, the last assigned variable is selected in priority, as long as the constraint network cannot be made consistent. This allows us to find, following the current partial instantiation from the leaf to the root of the search tree, the culprit decision that prevents the last variable from being assigned. This way of reasoning can easily be grafted onto many variants of backtracking algorithms and represents an original mechanism for reducing thrashing. Moreover, we show that this approach can be generalized so as to collect a (small) set of incompatible variables that are together responsible for the last conflict. Experiments over a wide range of benchmarks demonstrate the effectiveness of this approach in both constraint satisfaction and automated artificial intelligence planning.
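The last-conflict principle can be sketched as a thin wrapper around any base variable ordering heuristic: after a conflict, the conflicting variable is selected with priority until it can finally be assigned consistently. The names `on_conflict`, `on_success` and the `base_heuristic` callback are assumptions for illustration, not the paper's interface.

```python
class LastConflictOrdering:
    """Variable ordering that, after a conflict, keeps selecting the
    variable involved in the last conflict until it is successfully
    assigned, then resumes the base heuristic (e.g. dom/wdeg)."""
    def __init__(self, base_heuristic):
        self.base = base_heuristic      # fallback ordering
        self.priority = None            # last-conflict variable, if any

    def on_conflict(self, var):
        self.priority = var             # focus search on this variable

    def on_success(self, var):
        if var == self.priority:
            self.priority = None        # culprit passed, back to normal

    def select(self, unassigned):
        if self.priority in unassigned:
            return self.priority
        return self.base(unassigned)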
Abscon 109: A generic CSP solver
Abstract

Cited by 5 (3 self)
Abstract. This paper describes the algorithms, heuristics and general strategies used by the two solvers that have been built from the Abscon platform and submitted to the second CSP solver competition. Both solvers maintain generalized arc consistency during search, explore the search space using a conflict-directed variable ordering heuristic, integrate nogood recording from restarts and exploit a transposition table approach to prune the search space. During preprocessing, the first solver enforces generalized arc consistency whereas the second one enforces existential SGAC, a partial form of singleton generalized arc consistency.
Path Consistency by Dual Consistency
Abstract

Cited by 5 (4 self)
Abstract. Dual Consistency (DC) is a property of Constraint Networks (CNs) which is equivalent, in its unrestricted form, to Path Consistency (PC). The principle is to perform successive singleton checks (i.e. enforcing arc consistency after the assignment of a value to a variable) in order to identify inconsistent pairs of values, until a fixpoint is reached. In this paper, we propose two new algorithms, denoted by sDC2 and sDC3, to enforce (strong) PC following the DC approach. These algorithms can be seen as refinements of McGregor’s algorithm, as they partially and totally exploit the incrementality of the underlying Arc Consistency algorithm. While sDC3 admits the same interesting worst-case complexities as PC8, sDC2 appears to be the most robust algorithm in practice. Indeed, compared to PC8 and the optimal PC2001, sDC2 is usually around one order of magnitude faster on large instances.
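The singleton-check principle behind DC can be sketched as follows: assigning var = a and enforcing arc consistency reveals, for every value b pruned from another variable y, that the pair ((var, a), (y, b)) is inconsistent. This is an illustrative single pass, not sDC2 or sDC3; `enforce_ac` is a placeholder for an AC algorithm that reduces the given domains in place, and the fixpoint loop over all variables is omitted.

```python
def dc_pass(domains, enforce_ac, var):
    """One Dual Consistency pass on `var`: for each value a, perform a
    singleton check (assign var = a, enforce arc consistency) and
    record ((var, a), (y, b)) as inconsistent for every value b that
    AC removed from another variable y."""
    pairs = []
    for a in set(domains[var]):
        # work on a copy so the singleton check leaves `domains` intact
        trial = {v: set(d) for v, d in domains.items()}
        trial[var] = {a}
        enforce_ac(trial)
        for y in domains:
            if y != var:
                pairs.extend(((var, a), (y, b))
                             for b in domains[y] - trial[y])
    return pairs
```

Repeating such passes until no new inconsistent pair is found yields the fixpoint that makes DC equivalent to PC in its unrestricted form.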
Constraint-level Advice for Shaving
Abstract

Cited by 5 (2 self)
Abstract. This work concentrates on improving the robustness of constraint solvers by increasing the propagation strength of constraint models in a declarative and automatic manner. Our objective is to efficiently identify and remove shavable values during search. A value is shavable if, as soon as it is assigned to its associated variable, an inconsistency can be detected, making it possible to refute it. We extend previous work on shaving by using different techniques to decide whether a given value is an interesting candidate for the shaving process. More precisely, we exploit the semantics of (global) constraints to suggest values, and reuse both the successes and failures of shaving later in search to tune shaving further. We illustrate our approach with two important global constraints, namely alldifferent and sum, and present experimental results for three problem classes. These results are quite encouraging: we are able to significantly reduce the number of search nodes (even by more than two orders of magnitude) and improve the average execution time by one order of magnitude.
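The basic shaving test described above (tentatively assign a value, propagate, refute it on failure) can be sketched generically. Candidate selection, where the paper's constraint-level advice from alldifferent and sum comes in, is abstracted into an explicit candidate list; `propagate` is a placeholder that returns False when an inconsistency is detected.

```python
def shave(domains, propagate, candidates):
    """Remove values that immediately fail under propagation.
    `domains` maps variables to sets of values; `candidates` is a list
    of (var, val) pairs suggested for the shaving test."""
    for var, val in candidates:
        # tentative assignment on a copy of the domains
        trial = {v: set(d) for v, d in domains.items()}
        trial[var] = {val}
        if not propagate(trial):        # inconsistency => val is shavable
            domains[var].discard(val)
    return domains
```

The cost model is visible in the sketch: each candidate pays one full propagation, which is why good advice on which (var, val) pairs to test matters so much.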
Portfolios with Deadlines for Backtracking Search
International Journal on Artificial Intelligence Tools, 2008
Abstract

Cited by 4 (0 self)
Backtracking search is often the method of choice for solving constraint satisfaction and propositional satisfiability problems. Previous studies have shown that portfolios of backtracking algorithms—a selection of one or more algorithms plus a schedule for executing the algorithms—can dramatically improve performance on some instances. In this paper, we consider a setting that often arises in practice, where the instances to be solved arrive over time, all belong to some class of problem instances, and a limit or deadline is placed on the computational resources that can be consumed in solving any instance. For such a scenario, we present a simple scheme for learning a good portfolio of backtracking algorithms from a small sample of instances. We demonstrate the effectiveness of our approach through an extensive empirical evaluation using two testbeds: real-world instruction scheduling problems and the widely used quasigroup completion problems.
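A simple way to learn a deadline-bounded portfolio from a sample, in the spirit of the scheme described above, is a greedy set-cover loop over sample runtimes: repeatedly add the algorithm that solves the most not-yet-solved sample instances within a fixed time slice, until the deadline budget is spent. The fixed slice length and the greedy rule are simplifying assumptions for illustration, not the paper's exact method.

```python
def learn_portfolio(runtimes, deadline, slice_len):
    """Greedy portfolio learning sketch. `runtimes[a][i]` is algorithm
    a's time on sample instance i (use float('inf') for unsolved).
    Returns a schedule as a list of (algorithm, time_slice) pairs."""
    algos = list(runtimes)
    n = len(next(iter(runtimes.values())))
    unsolved = set(range(n))
    schedule, budget = [], deadline
    while budget >= slice_len and unsolved:
        # pick the algorithm covering the most remaining instances
        best_a, best_cover = None, set()
        for a in algos:
            cover = {i for i in unsolved if runtimes[a][i] <= slice_len}
            if len(cover) > len(best_cover):
                best_a, best_cover = a, cover
        if not best_cover:
            break                       # nothing more can be solved
        schedule.append((best_a, slice_len))
        unsolved -= best_cover
        budget -= slice_len
    return schedule
```

The greedy choice exploits complementarity: an algorithm that is mediocre on average still earns a slice if it is the only one that cracks some subset of the sample within the deadline.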