Domain filtering consistencies for non-binary constraints
ARTIFICIAL INTELLIGENCE, 2008
Cited by 26 (9 self)
In non-binary constraint satisfaction problems, the study of local consistencies that only prune values from domains has so far been largely limited to generalized arc consistency or weaker local consistency properties. This is in contrast with binary constraints, where numerous such domain filtering consistencies have been proposed. In this paper we present a detailed theoretical, algorithmic and empirical study of domain filtering consistencies for non-binary problems. We study three domain filtering consistencies that are inspired by corresponding variable-based domain filtering consistencies for binary problems. These consistencies are stronger than generalized arc consistency, but weaker than pairwise consistency, which is a strong consistency that removes tuples from constraint relations. Among other theoretical results, and contrary to expectations, we prove that these new consistencies do not reduce to the variable-based definitions of their counterparts on binary constraints. We propose a number of algorithms to achieve the three consistencies. One of these algorithms has a time complexity comparable to that for generalized arc consistency despite performing more pruning. Experiments demonstrate that our new consistencies are promising as they can be more efficient than generalized arc consistency on certain non-binary problems.
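As a point of reference for the consistencies discussed in this abstract, generalized arc consistency itself can be sketched in a few lines (an illustrative toy, not code from the paper): a value survives only if some allowed tuple supports it using values still present in the other variables' domains.

```python
def enforce_gac(domains, scope, tuples):
    """Prune values of the variables in `scope` that have no support
    in the constraint's allowed `tuples`. Returns False on a wipeout."""
    changed = True
    while changed:
        changed = False
        for i, var in enumerate(scope):
            # values of var appearing in some tuple that is fully valid
            supported = {t[i] for t in tuples
                         if all(t[j] in domains[scope[j]] for j in range(len(scope)))}
            pruned = domains[var] - supported
            if pruned:
                domains[var] -= pruned
                changed = True
            if not domains[var]:
                return False
    return True

# Ternary constraint x + y = z over small domains:
doms = {'x': {0, 1, 2}, 'y': {0, 1}, 'z': {2}}
allowed = [(a, b, c) for a in range(3) for b in range(2)
           for c in range(3) if a + b == c]
enforce_gac(doms, ('x', 'y', 'z'), allowed)
print(doms)  # x loses the unsupported value 0
```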
Optimization of simple tabular reduction for table constraints
In Proceedings of CP’08, 2008
Cited by 25 (12 self)
Table constraints play an important role within constraint programming. Recently, many schemes or algorithms have been proposed to propagate table constraints and/or to compress their representation. We show that simple tabular reduction (STR), a technique proposed by J. Ullmann to dynamically maintain the tables of supports, is very often the most efficient practical approach to enforce generalized arc consistency within MAC. We also describe an optimization of STR which limits the number of operations related to validity checking and the search for supports. Interestingly enough, this optimization makes STR potentially r times faster, where r is the arity of the constraint(s). The results of an extensive experimentation that we have conducted on random and structured instances indicate that the optimized algorithm we propose is usually around twice as fast as the original STR and can be up to one order of magnitude faster than previous state-of-the-art algorithms on some series of instances.
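The core sweep of simple tabular reduction can be sketched as follows (a minimal illustration of the idea, not the authors' optimized algorithm): invalid tuples are removed from the table in place, and the surviving tuples directly yield each variable's supported values.

```python
def str_propagate(domains, scope, table):
    """Mutates `table`: tuples containing an out-of-domain value are
    dropped. Each domain is then restricted to the values occurring in
    surviving tuples. Returns False if some domain is wiped out."""
    table[:] = [t for t in table
                if all(v in domains[x] for x, v in zip(scope, t))]
    for i, x in enumerate(scope):
        domains[x] &= {t[i] for t in table}
    return all(domains[x] for x in scope)

doms = {'x': {0, 1}, 'y': {1, 2}}
table = [(0, 1), (1, 0)]
str_propagate(doms, ('x', 'y'), table)
# (1, 0) is dropped since 0 is not in y's domain;
# x then loses 1 and y loses 2, as neither appears in a surviving tuple
```

Maintaining the reduced table across search, rather than rescanning the full relation, is what makes the approach competitive in practice.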
Transforming Attribute and Clone-enabled Feature Models into Constraint Programs over Finite Domains
, 2011
Cited by 10 (7 self)
Product line models are important artefacts in product line engineering. One of the most popular languages to model the variability of a product line is the feature notation. Since the initial proposal of feature models in 1990, the notation has evolved in different aspects. One of the most important improvements allows specifying the number of instances that a feature can have in a particular product. This improvement implies an important increase in the number of variables needed to represent a feature model. Another improvement consists in allowing features to have attributes, which can take values over domains other than the boolean one. These two extensions have increased the complexity of feature models and have therefore made manual or even automated reasoning on feature models more difficult. To the best of our knowledge, very few works exist in the literature to address this problem. In this paper we show that reasoning on extended feature models is easy and scalable by using constraint programming over integer domains. The aim of the paper is twofold: (a) to show the rules for transforming extended feature models into constraint programs, and (b) to demonstrate, by means of 11 reasoning operations over feature models, the usefulness and benefits of our approach. We evaluated our approach by transforming 60 feature models of sizes up to 2000 features and by comparing it with 2 other approaches available in the literature. The evaluation showed that our approach is correct, useful and scalable to industry-size models.
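To illustrate the flavor of such a transformation (the feature names, bounds, and rules below are hypothetical examples, not the paper's exact rule set), each feature can become an integer variable counting its instances, with parent-child relations rendered as arithmetic constraints; a brute-force check stands in for a real finite-domain solver:

```python
from itertools import product

# 'wheel' is a clone-enabled child with cardinality [2..4]
features = ['root', 'engine', 'wheel']
bounds = {'root': (1, 1), 'engine': (0, 4), 'wheel': (0, 4)}

def valid(a):
    return (
        (a['root'] == 0 or a['engine'] == 1) and      # mandatory child: one instance
        (a['root'] != 0 or a['engine'] == 0) and      # no parent => no child
        (a['root'] == 0 or 2 <= a['wheel'] <= 4) and  # clone cardinality [2..4]
        (a['root'] != 0 or a['wheel'] == 0)
    )

solutions = [a for a in
             (dict(zip(features, v)) for v in
              product(*(range(lo, hi + 1) for lo, hi in bounds.values())))
             if valid(a)]
print(len(solutions))  # -> 3 (wheel may take 2, 3, or 4 instances)
```

Counting solutions like this corresponds to one of the reasoning operations (number of products) that the paper performs over the constraint program.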
Qualitative CSP, Finite CSP, and SAT: Comparing Methods for Qualitative Constraint-based Reasoning
Cited by 10 (3 self)
Qualitative Spatial and Temporal Reasoning (QSR) is concerned with constraint-based formalisms for representing, and reasoning with, spatial and temporal information over infinite domains. Within the QSR community it has been a widely accepted assumption that genuine qualitative reasoning methods outperform other reasoning methods that are applicable to encodings of qualitative CSP instances. Recently this assumption has been challenged by several authors, who proposed to encode qualitative CSP instances as finite CSP or SAT instances. In this paper we report on the results of a broad empirical study in which we compared the performance of several reasoners on instances from different qualitative formalisms. Our results show that for small-sized qualitative calculi (e.g., Allen’s interval algebra and RCC8) a state-of-the-art implementation of QSR methods currently gives the most efficient performance. However, on recently suggested large-sized calculi, e.g., OPRA4, finite CSP encodings provide a considerable performance gain. These results confirm a conjecture by Bessière stating that support-based constraint propagation algorithms provide better performance for large-sized qualitative calculi.
Kernels for Global Constraints
PROCEEDINGS OF THE TWENTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2011
Cited by 9 (7 self)
Bessière et al. (AAAI’08) showed that several intractable global constraints can be efficiently propagated when certain natural problem parameters are small. In particular, the complete propagation of a global constraint is fixed-parameter tractable in k – the number of holes in domains – whenever bound consistency can be enforced in polynomial time; this applies to the global constraints ATMOSTNVALUE and EXTENDED GLOBAL CARDINALITY (EGC). In this paper we extend this line of research and introduce the concept of reduction to a problem kernel, a key concept of parameterized complexity, to the field of global constraints. In particular, we show that the consistency problem for ATMOSTNVALUE constraints admits a linear-time reduction to an equivalent instance on O(k²) variables and domain values. This small kernel can be used to speed up the complete propagation of NVALUE constraints. We contrast this result by showing that the consistency problem for EGC constraints does not admit a reduction to a polynomial problem kernel unless the polynomial hierarchy collapses.
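For intuition about the tractable case underlying bound consistency: when every domain is a hole-free interval, the minimum number of distinct values needed by ATMOSTNVALUE is a classical greedy interval-stabbing computation. The sketch below illustrates only that baseline, not the paper's kernelization for domains with holes.

```python
def min_nvalue(intervals):
    """intervals: list of (lo, hi) interval domains. Returns the least
    number of distinct values with which all variables can be assigned,
    via a left-to-right sweep over intervals sorted by right endpoint."""
    points = 0
    last = None
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if last is None or lo > last:
            last = hi       # place a shared value at this right endpoint
            points += 1
    return points

print(min_nvalue([(1, 3), (2, 5), (4, 6)]))  # -> 2
```

An ATMOSTNVALUE(X, N) constraint is then consistent on such interval domains precisely when this minimum is at most N.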
An Application of Constraint Programming to Superblock Instruction Scheduling
Cited by 8 (3 self)
Modern computer architectures have complex features that can only be fully taken advantage of if the compiler schedules the compiled code. A standard region of code for scheduling in an optimizing compiler is called a superblock. Scheduling superblocks optimally is known to be NP-complete, and production compilers use non-optimal heuristic algorithms. In this paper, we present an application of constraint programming to the superblock instruction scheduling problem. The resulting system is both optimal and fast enough to be incorporated into production compilers, and is the first optimal superblock scheduler for realistic architectures. In developing our optimal scheduler, the keys to scaling up to large, real problems were in applying and adapting several techniques from the literature including: implied and dominance constraints, impact-based variable ordering heuristics, singleton bounds consistency, portfolios, and structure-based decomposition techniques. We experimentally evaluated our optimal scheduler on the SPEC 2000
Reasoning from Last Conflict(s) in Constraint Programming
, 2009
Cited by 7 (1 self)
Constraint programming is a popular paradigm to deal with combinatorial problems in artificial intelligence. Backtracking algorithms, applied to constraint networks, are commonly used but suffer from thrashing, i.e. the fact of repeatedly exploring similar subtrees during search. An extensive literature has been devoted to preventing thrashing, often classified into look-ahead (constraint propagation and search heuristics) and look-back (intelligent backtracking and learning) approaches. In this paper, we present an original look-ahead approach that allows us to guide backtrack search toward sources of conflicts and, as a side effect, to obtain a behavior similar to a backjumping technique. The principle is the following: after each conflict, the last assigned variable is selected in priority, so long as the constraint network cannot be made consistent. This allows us to find, following the current partial instantiation from the leaf to the root of the search tree, the culprit decision that prevents the last variable from being assigned. This way of reasoning can easily be grafted onto many variations of backtracking algorithms and represents an original mechanism to reduce thrashing. Moreover, we show that this approach can be generalized so as to collect a (small) set of incompatible variables that are together responsible for the last conflict. Experiments over a wide range of benchmarks demonstrate the effectiveness of this approach in both constraint satisfaction and automated artificial intelligence planning.
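The principle can be grafted onto a plain chronological backtracker as sketched below (an illustrative toy, not the authors' implementation; the variables and constraints are made up): the variable from the most recent dead end is selected first until it can again be consistently assigned.

```python
def backtrack(assign, domains, constraints, state):
    """state['lc'] holds the last-conflict variable, tried in priority."""
    if len(assign) == len(domains):
        return dict(assign)
    free = [v for v in domains if v not in assign]
    # last-conflict reasoning: prefer the culprit of the previous dead end
    var = state['lc'] if state['lc'] in free else free[0]
    for val in sorted(domains[var]):
        assign[var] = val
        if all(c(assign) for c in constraints):
            if var == state['lc']:
                state['lc'] = None      # culprit resolved, resume heuristic
            sol = backtrack(assign, domains, constraints, state)
            if sol is not None:
                return sol
        del assign[var]
    state['lc'] = var                   # remember the culprit of this conflict
    return None

# Constraints see partial assignments and only test assigned variables:
doms = {'a': {0, 1}, 'b': {0, 1}, 'c': {0, 1}}
neq = lambda x, y: (lambda a: x not in a or y not in a or a[x] != a[y])
sol = backtrack({}, doms, [neq('a', 'b'), neq('b', 'c')], {'lc': None})
print(sol)
```

Because the culprit variable is re-selected immediately after backtracking, the search climbs back toward the decision responsible for the conflict, mimicking backjumping without any extra bookkeeping.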
Codognet, P.: Prediction of Parallel Speedups for Las Vegas Algorithms
Proceedings of ICPP-2013, 42nd International Conference on Parallel Processing, IEEE, 2013
Cited by 6 (1 self)
We propose a probabilistic model for the parallel execution of Las Vegas algorithms, i.e. randomized algorithms whose runtime might vary from one execution to another, even with the same input. This model aims at predicting the parallel performance (i.e. speedups) by analyzing the runtime distribution of the sequential runs of the algorithm. Then, we study in practice the case of a particular Las Vegas algorithm for combinatorial optimization on three classical problems, and compare the model with an actual parallel implementation up to 256 cores. We show that the prediction can be accurate, matching the actual speedups very well up to 100 parallel cores and then with a deviation of about 20% up to 256 cores.
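The heart of such a model can be sketched as follows, under an independence assumption and with made-up sample data: k parallel copies of a Las Vegas algorithm finish when the fastest one does, so the predicted k-core runtime is the expected minimum of k draws from the sequential runtime distribution, estimated here by Monte Carlo resampling.

```python
import random

def predicted_speedup(runtimes, k, trials=10000, seed=0):
    """Estimate the speedup of k independent parallel runs from an
    empirical sample of sequential runtimes."""
    rng = random.Random(seed)
    seq_mean = sum(runtimes) / len(runtimes)
    # runtime of k parallel copies = min of k draws from the distribution
    par_mean = sum(min(rng.choice(runtimes) for _ in range(k))
                   for _ in range(trials)) / trials
    return seq_mean / par_mean

# A heavy-tailed runtime sample benefits strongly from parallel restarts:
sample = [1, 1, 2, 3, 100]
print(predicted_speedup(sample, k=4))  # well above 4: superlinear on this tail
```

The heavier the tail of the sequential distribution, the larger the predicted gain, which is consistent with the qualitative behavior the paper reports.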
A Generalized Arc-Consistency Algorithm for a Class of Counting Constraints
PROCEEDINGS OF THE TWENTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2011
Cited by 6 (3 self)
This paper introduces the SEQ BIN meta-constraint with a polytime algorithm achieving generalized arc-consistency. SEQ BIN can be used for encoding counting constraints such as CHANGE, SMOOTH, or INCREASING NVALUE. For all of them the time and space complexity is linear in the sum of domain sizes, which matches or improves on the best known results in the literature.
On Minimal Constraint Networks
, 2011
Cited by 6 (1 self)
In a minimal binary constraint network, every tuple of a constraint relation can be extended to a solution. It was conjectured that computing a solution to such a network is NP-hard. We prove this conjecture. We also prove a conjecture by Dechter and Pearl stating that for k ≥ 2 it is NP-hard to decide whether a constraint network can be decomposed into an equivalent k-ary constraint network, and study related questions.