Results 1 - 10 of 76
Domain filtering consistencies for non-binary constraints
- ARTIFICIAL INTELLIGENCE, 2008
"... In non-binary constraint satisfaction problems, the study of local consistencies that only prune values from domains has so far been largely limited to generalized arc consistency or weaker local consistency properties. This is in contrast with binary constraints where numerous such domain filtering ..."
Abstract
-
Cited by 27 (11 self)
- Add to MetaCart
(Show Context)
In non-binary constraint satisfaction problems, the study of local consistencies that only prune values from domains has so far been largely limited to generalized arc consistency or weaker local consistency properties. This is in contrast with binary constraints where numerous such domain filtering consistencies have been proposed. In this paper we present a detailed theoretical, algorithmic and empirical study of domain filtering consistencies for non-binary problems. We study three domain filtering consistencies that are inspired by corresponding variable-based domain filtering consistencies for binary problems. These consistencies are stronger than generalized arc consistency, but weaker than pairwise consistency, which is a strong consistency that removes tuples from constraint relations. Among other theoretical results, and contrary to expectations, we prove that these new consistencies do not reduce to the variable-based definitions of their counterparts on binary constraints. We propose a number of algorithms to achieve the three consistencies. One of these algorithms has a time complexity comparable to that for generalized arc consistency despite performing more pruning. Experiments demonstrate that our new consistencies are promising as they can be more efficient than generalized arc consistency on certain non-binary problems.
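The consistencies studied in this paper all prune values from variable domains, with generalized arc consistency (GAC) as the baseline they strengthen. For readers unfamiliar with that baseline, the following minimal sketch (Python; the variables, domains and constraint data are invented, and this is not one of the paper's algorithms) shows what enforcing GAC on extensionally defined constraints amounts to: every value with no supporting tuple is removed, until a fixpoint is reached.

```python
# Minimal GAC sketch on table constraints (illustrative only; the variable and
# constraint data are invented, and this is not one of the paper's algorithms).

def enforce_gac(domains, constraints):
    """domains: dict var -> set of values.
    constraints: list of (scope_tuple, set_of_allowed_tuples).
    Removes values without a supporting tuple until a fixpoint is reached;
    returns False if some domain is wiped out."""
    changed = True
    while changed:
        changed = False
        for scope, tuples in constraints:
            for i, var in enumerate(scope):
                supported = {t[i] for t in tuples
                             if all(t[j] in domains[scope[j]] for j in range(len(scope)))}
                if not domains[var] <= supported:
                    domains[var] &= supported
                    changed = True
                    if not domains[var]:
                        return False
    return True

# Usage: a ternary constraint x + y = z given in extension, after y was
# already restricted elsewhere.
doms = {'x': {0, 1, 2}, 'y': {2}, 'z': {0, 1, 2}}
table = {(a, b, a + b) for a in range(3) for b in range(3) if a + b <= 2}
print(enforce_gac(doms, [(('x', 'y', 'z'), table)]), doms)
# -> True {'x': {0}, 'y': {2}, 'z': {2}}
```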
Optimization of simple tabular reduction for table constraints
- In Proceedings of CP’08, 2008
"... Abstract. Table constraints play an important role within constraint programming. Recently, many schemes or algorithms have been proposed to propagate table constraints or/and to compress their representation. We show that simple tabular reduction (STR), a technique proposed by J. Ullmann to dynamic ..."
Abstract
-
Cited by 25 (11 self)
- Add to MetaCart
(Show Context)
Table constraints play an important role within constraint programming. Recently, many schemes or algorithms have been proposed to propagate table constraints and/or to compress their representation. We show that simple tabular reduction (STR), a technique proposed by J. Ullmann to dynamically maintain the tables of supports, is very often the most efficient practical approach to enforce generalized arc consistency within MAC. We also describe an optimization of STR which limits the number of operations related to validity checking and the search for supports. Interestingly, this optimization makes STR potentially r times faster, where r is the arity of the constraint(s). The results of an extensive experimentation conducted on random and structured instances indicate that the optimized algorithm we propose is usually around twice as fast as the original STR and can be up to one order of magnitude faster than previous state-of-the-art algorithms on some series of instances.
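As a rough illustration of the idea behind simple tabular reduction (a naive sketch assumed from the general description above, not the paper's optimized variant): each propagation pass first deletes tuples invalidated by earlier domain reductions and then shrinks domains to the values that still appear in some remaining tuple.

```python
# Naive STR-style propagation sketch for one table constraint (assumed from
# the general description of simple tabular reduction, not the optimized STR).

def str_propagate(scope, table, domains):
    """scope: tuple of variable names; table: list of allowed tuples (mutated);
    domains: dict var -> set of values (mutated). Returns False on a wipe-out."""
    # 1. Remove tuples that are no longer valid w.r.t. the current domains.
    table[:] = [t for t in table
                if all(t[i] in domains[scope[i]] for i in range(len(scope)))]
    # 2. Collect, per variable, the values still supported by the reduced table.
    supported = {var: set() for var in scope}
    for t in table:
        for i, var in enumerate(scope):
            supported[var].add(t[i])
    # 3. Shrink domains; an empty domain means the constraint is unsatisfiable.
    for var in scope:
        domains[var] &= supported[var]
        if not domains[var]:
            return False
    return True

# Usage: a ternary table constraint after an external pruning of y.
doms = {'x': {1, 2, 3}, 'y': {2}, 'z': {1, 2, 3}}
tab = [(1, 2, 3), (2, 3, 1), (3, 1, 2)]
print(str_propagate(('x', 'y', 'z'), tab, doms), doms, tab)
# -> True {'x': {1}, 'y': {2}, 'z': {3}} [(1, 2, 3)]
```

The optimization discussed in the abstract avoids re-examining values and tuple positions that are already known to be valid; the sketch above performs every check naively.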
Qualitative CSP, Finite CSP, and SAT: Comparing Methods for Qualitative Constraint-based Reasoning
"... Qualitative Spatial and Temporal Reasoning (QSR) is concerned with constraint-based formalisms for representing, and reasoning with, spatial and temporal information over infinite domains. Within the QSR community it has been a widely accepted assumption that genuine qualitative reasoning methods ou ..."
Abstract
-
Cited by 11 (3 self)
- Add to MetaCart
Qualitative Spatial and Temporal Reasoning (QSR) is concerned with constraint-based formalisms for representing, and reasoning with, spatial and temporal information over infinite domains. Within the QSR community it has been a widely accepted assumption that genuine qualitative reasoning methods outperform other reasoning methods that are applicable to encodings of qualitative CSP instances. Recently this assumption has been challenged by several authors, who proposed to encode qualitative CSP instances as finite CSP or SAT instances. In this paper we report on the results of a broad empirical study in which we compared the performance of several reasoners on instances from different qualitative formalisms. Our results show that for small-sized qualitative calculi (e.g., Allen’s interval algebra and RCC-8) a state-of-the-art implementation of QSR methods currently gives the most efficient performance. However, on recently suggested large-sized calculi, e.g., OPRA4, finite CSP encodings provide a considerable performance gain. These results confirm a conjecture by Bessière stating that support-based constraint propagation algorithms provide better performance for large-sized qualitative calculi.
Transforming Attribute and Clone-Enabled Feature Models into Constraint Programs over Finite Domains
2011
"... Abstract: Product line models are important artefacts in product line engineering. One of the most popular languages to model the variability of a product line is the feature notation. Since the initial proposal of feature models in 1990, the notation has evolved in different aspects. One of the mos ..."
Abstract
-
Cited by 10 (7 self)
- Add to MetaCart
(Show Context)
Product line models are important artefacts in product line engineering. One of the most popular languages to model the variability of a product line is the feature notation. Since the initial proposal of feature models in 1990, the notation has evolved in different aspects. One of the most important improvements allows specifying the number of instances that a feature can have in a particular product. This improvement implies an important increase in the number of variables needed to represent a feature model. Another improvement consists in allowing features to have attributes, which can take values in domains other than the Boolean one. These two extensions have increased the complexity of feature models and have therefore made manual, or even automated, reasoning on feature models more difficult. To the best of our knowledge, very few works in the literature address this problem. In this paper we show that reasoning on extended feature models is easy and scalable using constraint programming over integer domains. The aim of the paper is twofold: (a) to show the rules for transforming extended feature models into constraint programs, and (b) to demonstrate, by means of 11 reasoning operations over feature models, the usefulness and benefits of our approach. We evaluated our approach by transforming 60 feature models of sizes up to 2000 features and by comparing it with two other approaches available in the literature. The evaluation showed that our approach is correct, useful and scalable to industry-size models.
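As a toy illustration of the kind of encoding the paper advocates (the feature names, domains and constraints below are invented, and a brute-force enumeration stands in for a real finite-domain solver): features become integer variables, a clone-enabled feature becomes a variable counting its instances, and attributes become variables over their own integer domains.

```python
# Hypothetical toy encoding of an extended feature model as integer variables
# (not the paper's transformation rules). Products are enumerated by brute
# force purely for illustration.

from itertools import product

domains = {
    'Phone':      [1],           # root feature is always selected
    'Camera':     [0, 1, 2],     # clone-enabled feature: number of instances
    'GPS':        [0, 1],        # Boolean feature encoded as 0/1
    'resolution': [0, 5, 8, 12]  # attribute (megapixels), 0 = no camera
}

def constraints(v):
    return all([
        v['Camera'] <= 2 * v['Phone'],                     # clones require the parent
        (v['resolution'] > 0) == (v['Camera'] > 0),        # attribute tied to its feature
        v['GPS'] <= v['Phone'],                            # optional child of the root
        not (v['Camera'] == 2 and v['resolution'] < 8),    # cross-tree constraint
    ])

names = list(domains)
solutions = [dict(zip(names, vals))
             for vals in product(*domains.values())
             if constraints(dict(zip(names, vals)))]
print(len(solutions), "valid products, e.g.", solutions[0])
```

Enumerating or counting these solutions corresponds to reasoning operations such as "list all products" or "number of products"; a real encoding would hand the same variables and constraints to a finite-domain solver instead of brute-forcing them.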
Kernels for Global Constraints
- PROCEEDINGS OF THE TWENTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2011
"... Bessière et al. (AAAI’08) showed that several intractable global constraints can be efficiently propagated when certain natural problem parameters are small. In particular, the complete propagation of a global constraint is fixed-parameter tractable in k – the number of holes in domains – whenever b ..."
Abstract
-
Cited by 8 (6 self)
- Add to MetaCart
Bessière et al. (AAAI’08) showed that several intractable global constraints can be efficiently propagated when certain natural problem parameters are small. In particular, the complete propagation of a global constraint is fixed-parameter tractable in k, the number of holes in domains, whenever bound consistency can be enforced in polynomial time; this applies to the global constraints ATMOST-NVALUE and EXTENDED GLOBAL CARDINALITY (EGC). In this paper we extend this line of research and introduce the concept of reduction to a problem kernel, a key concept of parameterized complexity, to the field of global constraints. In particular, we show that the consistency problem for ATMOST-NVALUE constraints admits a linear time reduction to an equivalent instance on O(k²) variables and domain values. This small kernel can be used to speed up the complete propagation of NVALUE constraints. We contrast this result by showing that the consistency problem for EGC constraints does not admit a reduction to a polynomial problem kernel unless the polynomial hierarchy collapses.
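To make the parameter k concrete, here is a small sketch with invented domains: k counts the values missing strictly between a domain's minimum and maximum. The naive ATMOST-NVALUE check included below is exponential and is only there to spell out what the constraint requires, unlike the kernel-based propagation in the paper.

```python
# Sketch of the parameter k (holes in domains) plus a brute-force consistency
# check for ATMOST-NVALUE, for illustration only.

from itertools import product

def count_holes(domains):
    """domains: iterable of sets of integers; a hole is a missing value
    strictly between a domain's min and max."""
    return sum(len(set(range(min(d), max(d) + 1)) - d) for d in domains)

def atmost_nvalue_consistent(domains, n):
    """True iff some assignment uses at most n distinct values (brute force)."""
    return any(len(set(assignment)) <= n
               for assignment in product(*[sorted(d) for d in domains]))

doms = [{1, 2, 5}, {2, 3}, {1, 4, 5}]        # holes {3,4}, {}, {2,3} -> k = 4
print(count_holes(doms))                      # 4
print(atmost_nvalue_consistent(doms, 2))      # True, e.g. x1=2, x2=2, x3=1
print(atmost_nvalue_consistent(doms, 1))      # False: no value common to all domains
```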
An Application of Constraint Programming to Superblock Instruction Scheduling
"... Abstract. Modern computer architectures have complex features that can only be fully taken advantage of if the compiler schedules the compiled code. A standard region of code for scheduling in an optimizing compiler is called a superblock. Scheduling superblocks optimally is known to be NP-complete, ..."
Abstract
-
Cited by 8 (3 self)
- Add to MetaCart
(Show Context)
Modern computer architectures have complex features that can only be fully taken advantage of if the compiler schedules the compiled code. A standard region of code for scheduling in an optimizing compiler is called a superblock. Scheduling superblocks optimally is known to be NP-complete, and production compilers use non-optimal heuristic algorithms. In this paper, we present an application of constraint programming to the superblock instruction scheduling problem. The resulting system is both optimal and fast enough to be incorporated into production compilers, and is the first optimal superblock scheduler for realistic architectures. In developing our optimal scheduler, the keys to scaling up to large, real problems were in applying and adapting several techniques from the literature including: implied and dominance constraints, impact-based variable ordering heuristics, singleton bounds consistency, portfolios, and structure-based decomposition techniques. We experimentally evaluated our optimal scheduler on the SPEC 2000 benchmark suite.
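As a toy illustration of the underlying optimization problem (the instructions, latencies and exit probabilities are invented, and a brute-force search stands in for the paper's constraint programming model): on a single-issue machine, pick the dependence-respecting issue order that minimizes the weighted completion times of the superblock's exit instructions.

```python
# Brute-force sketch of the superblock scheduling objective; the instance data
# are invented and this is not the paper's CP model.

from itertools import permutations

deps = {'b': {('a', 2)}, 'c': {('a', 1)},
        'exit1': {('b', 1)}, 'exit2': {('b', 1), ('c', 1)}}
exit_weights = {'exit1': 0.3, 'exit2': 0.7}   # probability of leaving at each exit
instructions = ['a', 'b', 'c', 'exit1', 'exit2']

def schedule_cost(order):
    """Weighted sum of the exits' issue cycles on a single-issue machine."""
    issue, cycle = {}, 0
    for ins in order:
        earliest = max([issue[p] + lat for p, lat in deps.get(ins, set())] + [cycle + 1])
        issue[ins] = earliest
        cycle = earliest
    return sum(exit_weights[e] * issue[e] for e in exit_weights)

def valid(order):
    pos = {ins: i for i, ins in enumerate(order)}
    return all(pos[p] < pos[i] for i in deps for p, _ in deps[i])

best = min((o for o in permutations(instructions) if valid(o)), key=schedule_cost)
print(best, round(schedule_cost(best), 2))
```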
On Minimal Constraint Networks
2011
"... In a minimal binary constraint network, every tuple of a constraint relation can be extended to a solution. It was conjectured that computing a solution to such a network is NP hard. We prove this conjecture. We also prove a conjecture by Dechter and Pearl stating that for k ≥ 2 it is NP-hard to de ..."
Abstract
-
Cited by 8 (1 self)
- Add to MetaCart
(Show Context)
In a minimal binary constraint network, every tuple of a constraint relation can be extended to a solution. It was conjectured that computing a solution to such a network is NP-hard. We prove this conjecture. We also prove a conjecture by Dechter and Pearl stating that for k ≥ 2 it is NP-hard to decide whether a constraint network can be decomposed into an equivalent k-ary constraint network, and study related questions.
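A brute-force sketch of the definition (the toy network is invented, and the exponential enumeration is consistent with the hardness result the paper proves): a binary network is minimal iff every allowed pair of every constraint relation appears in some solution.

```python
# Brute-force minimality check for a binary constraint network (illustration
# only; the example network is invented).

from itertools import product

def solutions(variables, domains, constraints):
    for assignment in product(*[domains[v] for v in variables]):
        a = dict(zip(variables, assignment))
        if all((a[x], a[y]) in rel for (x, y), rel in constraints.items()):
            yield a

def is_minimal(variables, domains, constraints):
    sols = list(solutions(variables, domains, constraints))
    return all(any(s[x] == vx and s[y] == vy for s in sols)
               for (x, y), rel in constraints.items()
               for (vx, vy) in rel)

# Not minimal: the pair (1, 1) allowed by constraint (x, y) extends to no solution.
vars_ = ['x', 'y', 'z']
doms = {v: {0, 1} for v in vars_}
cons = {('x', 'y'): {(0, 0), (1, 1)},
        ('y', 'z'): {(0, 1)},
        ('x', 'z'): {(0, 1), (1, 0)}}
print(is_minimal(vars_, doms, cons))   # False
```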
Reasoning from Last Conflict(s) in Constraint Programming
2009
"... Constraint programming is a popular paradigm to deal with combinatorial problems in artificial intelligence. Backtracking algorithms, applied to constraint networks, are commonly used but suffer from thrashing, i.e. the fact of repeatedly exploring similar subtrees during search. An extensive litera ..."
Abstract
-
Cited by 7 (1 self)
- Add to MetaCart
(Show Context)
Constraint programming is a popular paradigm to deal with combinatorial problems in artificial intelligence. Backtracking algorithms, applied to constraint networks, are commonly used but suffer from thrashing, i.e., repeatedly exploring similar subtrees during search. An extensive literature has been devoted to preventing thrashing, often classified into look-ahead (constraint propagation and search heuristics) and look-back (intelligent backtracking and learning) approaches. In this paper, we present an original look-ahead approach that guides backtrack search toward the sources of conflicts and, as a side effect, obtains a behavior similar to a backjumping technique. The principle is the following: after each conflict, the last assigned variable is selected in priority, so long as the constraint network cannot be made consistent. This allows us to find, following the current partial instantiation from the leaf to the root of the search tree, the culprit decision that prevents the last variable from being assigned. This way of reasoning can easily be grafted onto many variations of backtracking algorithms and represents an original mechanism to reduce thrashing. Moreover, we show that this approach can be generalized so as to collect a (small) set of incompatible variables that are together responsible for the last conflict. Experiments over a wide range of benchmarks demonstrate the effectiveness of this approach in both constraint satisfaction and automated artificial intelligence planning.
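A minimal sketch of the stated principle grafted onto plain chronological backtracking (an assumed reconstruction, not the authors' algorithm; the constraint representation and the tiny example are invented): after a dead end, the variable that just failed is selected in priority until it can be consistently assigned again.

```python
# Last-conflict variable selection grafted onto chronological backtracking
# (sketch assumed from the principle stated in the abstract).

def consistent(assignment, constraints):
    return all(check(assignment) for check in constraints)

def solve(variables, domains, constraints, assignment=None, state=None):
    assignment = {} if assignment is None else assignment
    state = state if state is not None else {'last_conflict': None}
    if len(assignment) == len(variables):
        return dict(assignment)
    unassigned = [v for v in variables if v not in assignment]
    # Last-conflict heuristic: keep selecting the culprit variable first.
    lc = state['last_conflict']
    var = lc if lc in unassigned else unassigned[0]
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment, constraints):
            if state['last_conflict'] == var:
                state['last_conflict'] = None    # the culprit can be assigned again
            result = solve(variables, domains, constraints, assignment, state)
            if result is not None:
                return result
        del assignment[var]
    state['last_conflict'] = var                 # dead end: remember the failing variable
    return None

# Usage: constraints are predicates over (possibly partial) assignments that
# return True while they cannot yet be falsified, e.g. an x < y < z chain.
doms = {'x': [1, 2, 3], 'y': [1, 2, 3], 'z': [1, 2, 3]}
cons = [lambda a: 'x' not in a or 'y' not in a or a['x'] < a['y'],
        lambda a: 'y' not in a or 'z' not in a or a['y'] < a['z']]
print(solve(list(doms), doms, cons))   # -> {'x': 1, 'y': 2, 'z': 3}
```

On this tiny instance the heuristic barely fires; its benefit shows on thrashing-prone problems, where repeatedly re-selecting the failing variable climbs back to the culprit decision much as backjumping would.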
SiGMa: Simple Greedy Matching for Aligning Large Knowledge Bases
"... The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and answer complex querie ..."
Abstract
-
Cited by 5 (0 self)
- Add to MetaCart
(Show Context)
The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and answer complex queries. However, the efficient alignment of large-scale knowledge bases still poses a considerable challenge. Here, we present Simple Greedy Matching (SiGMa), a simple algorithm for aligning knowledge bases with millions of entities and facts. SiGMa is an iterative propagation algorithm which leverages both the structural information from the relationship graph and flexible similarity measures between entity properties in a greedy local search, thus making it scalable. Despite its greedy nature, our experiments indicate that SiGMa can efficiently match some of the world’s largest knowledge bases with high accuracy. We provide additional experiments on benchmark datasets which demonstrate that SiGMa can outperform state-of-the-art approaches both in accuracy and efficiency.
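A small sketch of the greedy, neighbourhood-driven matching idea (the scoring weights, similarity measure and toy knowledge bases are assumptions, not SiGMa's actual implementation): candidate pairs are scored by a mix of name similarity and the number of already-matched neighbours, and the best pair is committed at each step.

```python
# Greedy knowledge-base alignment sketch in the spirit of the description
# above; scoring, data and update strategy are invented for illustration.

import heapq
from difflib import SequenceMatcher

def name_sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def align(entities1, entities2, edges1, edges2, alpha=0.5):
    """entities*: {id: name}; edges*: {id: set of neighbour ids}."""
    matched, matched_rev, heap = {}, {}, []

    def push_all():
        for e1 in entities1:
            if e1 in matched:
                continue
            for e2 in entities2:
                if e2 in matched_rev:
                    continue
                structure = sum(1 for n in edges1.get(e1, set())
                                if matched.get(n) in edges2.get(e2, set()))
                score = alpha * name_sim(entities1[e1], entities2[e2]) + (1 - alpha) * structure
                heapq.heappush(heap, (-score, e1, e2))

    push_all()
    while heap:
        _, e1, e2 = heapq.heappop(heap)
        if e1 in matched or e2 in matched_rev:
            continue                  # stale entry
        matched[e1], matched_rev[e2] = e2, e1
        push_all()                    # neighbours' structural scores have changed
    return matched

# Two Springfields: the structural term ties each one to its matched state.
kb1 = {1: 'Springfield', 2: 'Illinois', 3: 'Springfield', 4: 'Massachusetts'}
kb2 = {'a': 'Springfield', 'b': 'Illinois', 'c': 'Springfield', 'd': 'Massachusetts'}
g1 = {1: {2}, 2: {1}, 3: {4}, 4: {3}}
g2 = {'a': {'b'}, 'b': {'a'}, 'c': {'d'}, 'd': {'c'}}
print(align(kb1, kb2, g1, g2))   # -> {1: 'a', 2: 'b', 3: 'c', 4: 'd'}
```

The sketch rescoring every remaining pair after each commitment is quadratic per step; a scalable variant would only update the candidates whose neighbourhoods changed.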
Solving a Telecommunications Feature Subscription Configuration Problem
- In CP, 2008
"... Abstract. Call control features (e.g., call-divert, voice-mail) are primitive options to which users can subscribe off-line to personalise their service. The configuration of a feature subscription involves choosing and sequencing features from a catalogue and is subject to constraints that prevent ..."
Abstract
-
Cited by 5 (3 self)
- Add to MetaCart
(Show Context)
Call control features (e.g., call-divert, voice-mail) are primitive options to which users can subscribe off-line to personalise their service. The configuration of a feature subscription involves choosing and sequencing features from a catalogue and is subject to constraints that prevent undesirable feature interactions at run-time. When the subscription requested by a user is inconsistent, one problem is to find an optimal relaxation. In this paper, we show that this problem is NP-hard and we present a constraint programming formulation using the variable weighted constraint satisfaction problem framework. We also present simple formulations using partial weighted maximum satisfiability and integer linear programming. We experimentally compare our formulations of the different approaches; the results suggest that our constraint programming approach is the best of the three overall.
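A brute-force sketch of the relaxation problem (the feature names, weights and precedence rules are invented; the paper instead uses weighted CSP, MaxSAT and ILP formulations): among the requested features, find the subset of maximum total weight whose catalogue precedence rules still admit a consistent sequencing.

```python
# Brute-force optimal relaxation of an inconsistent feature subscription
# (illustrative sketch; the catalogue data are invented).

from itertools import combinations

def sequenceable(features, precedences):
    """True iff the precedence rules restricted to `features` are acyclic,
    i.e. the chosen features can be put in some consistent order."""
    edges = {(a, b) for (a, b) in precedences if a in features and b in features}
    remaining = set(features)
    while remaining:
        free = {f for f in remaining
                if not any(b == f and a in remaining for (a, b) in edges)}
        if not free:
            return False          # a cycle: no feature can be placed next
        remaining -= free
    return True

def best_relaxation(requested, weights, precedences):
    best, best_w = frozenset(), 0
    for k in range(len(requested), 0, -1):
        for subset in combinations(sorted(requested), k):
            w = sum(weights[f] for f in subset)
            if w > best_w and sequenceable(set(subset), precedences):
                best, best_w = frozenset(subset), w
    return best, best_w

# The three catalogue rules below are cyclic when all features are kept.
prec = {('call_divert', 'voice_mail'), ('voice_mail', 'screening'),
        ('screening', 'call_divert')}
weights = {'call_divert': 3, 'voice_mail': 2, 'screening': 1}
print(best_relaxation({'call_divert', 'voice_mail', 'screening'}, weights, prec))
# -> (frozenset({'call_divert', 'voice_mail'}), 5): dropping 'screening' is optimal
```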