Results 1–10 of 22
Optimization of simple tabular reduction for table constraints
In Proceedings of CP’08, 2008
Cited by 25 (12 self)
Abstract. Table constraints play an important role within constraint programming. Recently, many schemes or algorithms have been proposed to propagate table constraints and/or to compress their representation. We show that simple tabular reduction (STR), a technique proposed by J. Ullmann to dynamically maintain the tables of supports, is very often the most efficient practical approach to enforce generalized arc consistency within MAC. We also describe an optimization of STR which limits the number of operations related to validity checking and the search for supports. Interestingly enough, this optimization makes STR potentially r times faster, where r is the arity of the constraint(s). The results of an extensive experimentation conducted on random and structured instances indicate that the optimized algorithm we propose is usually around twice as fast as the original STR and can be up to one order of magnitude faster than previous state-of-the-art algorithms on some series of instances.
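The STR idea described above can be illustrated with a minimal Python sketch (a hypothetical simplification, not the optimized variant the paper proposes): each propagation pass first drops tuples invalidated by earlier domain reductions, then prunes any value no longer supported by a remaining tuple.

```python
def simple_tabular_reduction(domains, table):
    """One pass of simple tabular reduction (STR) on a positive table
    constraint over variables 0..r-1 (simplified sketch).

    domains: list of sets of allowed values, one per variable.
    table:   list of tuples currently considered valid.
    Returns the pruned domains and the reduced table.
    """
    # Step 1: drop tuples that contain a value removed from some domain.
    valid = [t for t in table
             if all(t[i] in domains[i] for i in range(len(domains)))]
    # Step 2: a value survives only if some remaining tuple supports it.
    supported = [set() for _ in domains]
    for t in valid:
        for i, v in enumerate(t):
            supported[i].add(v)
    new_domains = [d & s for d, s in zip(domains, supported)]
    return new_domains, valid
```

For example, with domains {1,2,3} and {1,2} and table {(1,1), (2,2), (3,3)}, the pass removes the tuple (3,3) (its second value is gone) and then prunes 3 from the first domain, since no surviving tuple supports it.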
Reasoning from Last Conflict(s) in Constraint Programming
2009
Cited by 7 (1 self)
Constraint programming is a popular paradigm to deal with combinatorial problems in artificial intelligence. Backtracking algorithms, applied to constraint networks, are commonly used but suffer from thrashing, i.e. the fact of repeatedly exploring similar subtrees during search. An extensive literature has been devoted to preventing thrashing, often classified into look-ahead (constraint propagation and search heuristics) and look-back (intelligent backtracking and learning) approaches. In this paper, we present an original look-ahead approach that makes it possible to guide backtrack search toward sources of conflicts and, as a side effect, to obtain a behavior similar to a backjumping technique. The principle is the following: after each conflict, the last assigned variable is selected in priority, so long as the constraint network cannot be made consistent. This allows us to find, following the current partial instantiation from the leaf to the root of the search tree, the culprit decision that prevents the last variable from being assigned. This way of reasoning can easily be grafted onto many variations of backtracking algorithms and represents an original mechanism to reduce thrashing. Moreover, we show that this approach can be generalized so as to collect a (small) set of incompatible variables that are together responsible for the last conflict. Experiments over a wide range of benchmarks demonstrate the effectiveness of this approach in both constraint satisfaction and automated artificial intelligence planning.
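The last-conflict principle described above amounts to a small wrapper around any base variable-ordering heuristic. The following Python sketch is a hypothetical illustration (the names `base_heuristic`, `select`, and `on_conflict` are ours, not the paper's):

```python
class LastConflict:
    """Last-conflict variable ordering (sketch): after a failure, keep
    selecting the variable whose assignment led to the conflict until it
    can be assigned without wiping out the network."""

    def __init__(self, base_heuristic):
        self.base = base_heuristic   # fallback heuristic, e.g. dom/wdeg
        self.candidate = None        # variable registered at the last conflict

    def select(self, unassigned):
        # Priority goes to the last conflicting variable while unassigned.
        if self.candidate in unassigned:
            return self.candidate
        self.candidate = None
        return self.base(unassigned)

    def on_conflict(self, var):
        # Register the most recently assigned variable when propagation fails.
        if self.candidate is None:
            self.candidate = var
```

Because the wrapper only intercepts variable selection, it can be grafted onto an existing backtracking solver without touching its propagation or backtracking machinery.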
NuMVC: An efficient local search algorithm for minimum vertex cover
J. Artif. Intell. Res. (JAIR), 2013
Cited by 6 (1 self)
The Minimum Vertex Cover (MVC) problem is a prominent NP-hard combinatorial optimization problem of great importance in both theory and application. Local search has proved successful for this problem. However, there are two main drawbacks in state-of-the-art MVC local search algorithms. First, they select a pair of vertices to exchange simultaneously, which is time-consuming. Second, although they use edge weighting techniques to diversify the search, these algorithms lack mechanisms for decreasing the weights. To address these issues, we propose two new strategies: two-stage exchange and edge weighting with forgetting. The two-stage exchange strategy selects the two vertices to exchange separately and performs the exchange in two stages. The strategy of edge weighting with forgetting not only increases the weights of uncovered edges, but also periodically decreases the weight of every edge. These two strategies are used in designing a new MVC local search algorithm, referred to as NuMVC. We conduct extensive experimental studies on the standard benchmarks, namely DIMACS and BHOSLIB. The experiments comparing NuMVC with state-of-the-art heuristic algorithms show that NuMVC is at least competitive with its nearest competitor, namely PLS, on the DIMACS benchmark, and clearly dominates all competitors on the BHOSLIB benchmark. Also, experimental results indicate that NuMVC finds an optimal solution much faster than the current best exact algorithm for Maximum Clique on random instances as well as some structured ones. Moreover, we study the effectiveness of the two strategies and the runtime behaviour through experimental analysis.
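Of the two strategies, edge weighting with forgetting is the easier to illustrate. The sketch below is a simplified, hypothetical rendering of the idea (NuMVC's actual trigger and scaling details differ): uncovered edges gain weight every step, and once the average weight reaches a threshold, every weight is scaled down by a factor rho < 1 so that old penalties are gradually forgotten.

```python
def weight_update(edge_weight, uncovered, rho=0.3, threshold=50.0):
    """Edge weighting with forgetting (simplified sketch of the NuMVC idea).

    edge_weight: dict mapping each edge to its current penalty weight.
    uncovered:   edges not covered by the current candidate vertex cover.
    """
    # Diversification: penalize edges the current cover leaves uncovered.
    for e in uncovered:
        edge_weight[e] += 1
    # Forgetting: once the average weight is large, scale everything down.
    if sum(edge_weight.values()) / len(edge_weight) >= threshold:
        for e in edge_weight:
            edge_weight[e] = int(rho * edge_weight[e])
    return edge_weight
```

The scaling step is what distinguishes this scheme from plain edge weighting: without it, weights accumulated early in the search dominate forever, which is exactly the drawback the abstract points out.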
Evaluating Hybrid Constraint Tightening for Scheduling Agents
Cited by 5 (2 self)
Hybrid Scheduling Problems (HSPs) combine temporal and finite-domain variables via hybrid constraints, which dictate that specific bounds on temporal constraints depend on assignments to finite-domain variables. Hybrid constraint tightening (HCT) reformulates hybrid constraints to apply the tightest consistent temporal bound possible, assisting in search-space pruning. The contribution of this paper is to empirically evaluate the HCT approach using a state-of-the-art Satisfiability Modulo Theories solver on realistic, interesting problems related to developing scheduling agents that assist people with cognitive impairments. We demonstrate that HCT leads to orders-of-magnitude reductions in search complexity. The benefit of HCT grows as we apply it to hybrid constraints involving increasing numbers of finite-domain variables, finite domains of increasing size, and hybrid constraints expressing increasing temporal precision. We show that while HCT reduces search complexity for all but the simplest problems, its relative effectiveness is dampened on problems with partially conditional temporal constraints and hybrid constraints with increasing temporal disjunctions. Finally, we present preliminary investigations indicating that HCT can also help increase communication efficacy in a multi-agent setting.
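The core HCT reformulation can be illustrated with a toy sketch (our own hypothetical simplification, not the paper's formulation): when a temporal bound depends on a finite-domain choice that is still open, the tightest bound consistent with every remaining option is the maximum of their individual bounds, which may still be far tighter than the unconditional default.

```python
def tighten_hybrid(bounds_by_option, remaining_options, default_bound):
    """Hybrid constraint tightening (toy sketch). A hybrid constraint makes
    a temporal bound depend on a finite-domain choice; before that choice
    is fixed, the tightest bound consistent with every remaining option is
    the loosest (maximum) of their individual bounds."""
    candidate = max(bounds_by_option[o] for o in remaining_options)
    # Never loosen beyond the bound the original constraint already imposed.
    return min(candidate, default_bound)
```

As the finite domain shrinks during search, the tightened bound can be recomputed and only gets tighter, which is where the pruning benefit comes from.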
An Optimal Filtering Algorithm for Table Constraints
Cited by 3 (1 self)
Abstract. Filtering algorithms for table constraints are constraint-based, which means that the propagation queue only contains information on the constraints that must be reconsidered. This paper proposes four efficient value-based algorithms for table constraints, meaning that the propagation queue also contains information on the removed values. One of these algorithms (AC5TC-Tr) is proved to have an optimal time complexity of O(r·t + r·d) per table constraint. Experimental results show that, on structured instances, all our algorithms are two or three times faster than the state-of-the-art STR2+ and MDDc algorithms.
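A value-based propagation loop of the kind the abstract contrasts with constraint-based filtering can be sketched as follows (a hypothetical simplification; the constraint interface is ours, not the paper's AC5TC API). The queue holds (variable, value) removal events, and each constraint reports which further values lose their support.

```python
from collections import deque

def value_based_propagate(domains, constraints, removed):
    """Value-based propagation loop (sketch of the AC5-style scheme).

    domains:     list of sets, one per variable (mutated in place).
    constraints: callables (x, v, domains) -> iterable of (y, w) pairs
                 that become unsupported once value v leaves x's domain.
    removed:     initial (variable, value) removal events.
    """
    queue = deque(removed)
    while queue:
        x, v = queue.popleft()
        for c in constraints:
            for y, w in c(x, v, domains):
                if w in domains[y]:
                    domains[y].discard(w)
                    queue.append((y, w))  # cascade the new removal
    return domains
```

Because each event names the removed value, a constraint can update its supports incrementally instead of rescanning its whole table, which is the source of the speedups the paper reports.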
Data reductions, fixed parameter tractability, and random weighted d-CNF satisfiability
 Artificial Intelligence
Cited by 2 (0 self)
Data reduction is a key technique in the study of fixed parameter algorithms. In the AI literature, pruning techniques based on simple and efficient-to-implement reduction rules have also played a crucial role in the success of many industrial-strength solvers. Examples include unit propagation in satisfiability testing, constraint propagation in constraint programming, and heuristic functions in state-space search. Understanding the effectiveness and the applicability of data reduction as a technique for designing heuristics for intractable problems has attracted much interest in the fields of AI and algorithmics, and is one of the main motivations behind the general interest in the phase transition behavior of randomly generated NP-complete problems. In this paper, we take the initiative to study the power of data reductions in the context of random instances of a generic intractable parameterized problem, the weighted d-CNF satisfiability problem. We propose a non-trivial random model for the problem, design and analyze an algorithm that solves the random instances with high probability and in fixed parameter time, establish the exact threshold of the phase transition, and give some analyses of the parametric resolution complexity of unsatisfiable instances. Also discussed are a more general random model and the generalization of our results to this model. To the best of the author's knowledge, our algorithm based on simple data reduction rules for the problem, together with its analysis, provides the first sound theoretical evidence of the effectiveness of simple reduction rules applied to an intractable parameterized problem.
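Unit propagation, cited above as a canonical data reduction rule, is easy to make concrete. The following sketch applies the rule to DIMACS-style clauses (lists of nonzero integer literals) until a fixed point or a conflict is reached:

```python
def unit_propagate(clauses):
    """Unit propagation: repeatedly fix the literal of any one-literal
    clause and simplify the formula accordingly.

    Returns (simplified clauses, assignment), or (None, assignment) if an
    empty clause is derived (conflict).
    """
    assignment = {}
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            return clauses, assignment        # fixed point reached
        assignment[abs(unit)] = unit > 0
        new_clauses = []
        for c in clauses:
            if unit in c:
                continue                       # clause satisfied: drop it
            reduced = [l for l in c if l != -unit]
            if not reduced:
                return None, assignment        # empty clause: conflict
            new_clauses.append(reduced)
        clauses = new_clauses
```

On the formula (x1) ∧ (¬x1 ∨ x2) ∧ (¬x2 ∨ x3 ∨ x4), two rounds of the rule fix x1 and x2 and leave only the clause (x3 ∨ x4).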
A feasibility-preserving local search operator for constrained discrete optimization problems
In Proc. of the CEC ’08, 2008
Cited by 2 (2 self)
Abstract — Metaheuristic optimization approaches are commonly applied to many discrete optimization problems. Many of these approaches are based on a local search operator, such as the mutate and neighbor operators used in Evolution Strategies and Simulated Annealing, respectively. However, straightforward implementations of these operators tend to deliver infeasible solutions on constrained optimization problems, leading to poor convergence. In this paper, a novel scheme for a local search operator for discrete constrained optimization problems is presented. By using a sophisticated methodology incorporating a backtracking-based ILP solver, the local search operator preserves feasibility even on hard constrained problems. In detail, an implementation of the local search operator as a feasibility-preserving mutate and neighbor operator is presented. To validate the usability of this approach, scalable discrete constrained test cases are introduced that allow calculating the expected number of feasible solutions. Thus, the hardness of the test cases can be quantified, and a sound comparison of different optimization methodologies is presented.
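A drastically simplified sketch of a feasibility-preserving mutate operator is shown below; unlike the paper's approach, which repairs candidates with a backtracking-based ILP solver, this hypothetical version merely resamples until the mutated solution satisfies every constraint:

```python
import random

def feasible_mutate(solution, constraints, domains, rng, max_tries=100):
    """Feasibility-preserving mutate operator (simplified sketch).

    solution:    current feasible assignment (list of values).
    constraints: callables returning True when a candidate is feasible.
    domains:     list of value lists, one per position.
    """
    for _ in range(max_tries):
        neighbor = list(solution)
        i = rng.randrange(len(neighbor))
        neighbor[i] = rng.choice(domains[i])   # flip one position
        if all(c(neighbor) for c in constraints):
            return neighbor                    # only feasible moves escape
    return list(solution)  # fall back to the current feasible solution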
Empirical study of relational learning algorithms in the phase transition framework, LNCS 5781
2009
Cited by 2 (0 self)
Abstract. Relational Learning (RL) has aroused interest to fill the gap between efficient attribute-value learners and the growing number of applications stored in multi-relational databases. However, current systems use general-purpose problem solvers that do not scale up well. This is in contrast with the past decade of success in combinatorics communities, where studies of random problems in the phase transition framework made it possible to evaluate and develop better specialised algorithms able to solve real-world applications with up to millions of variables. A number of studies have been proposed in RL, like the analysis of the phase transition of an NP-complete subproblem, the subsumption test, but none has directly studied the phase transition of RL itself. As RL, in general, is Σ2-hard, we propose a first random problem generator, which exhibits the phase transition of its decision version, beyond NP. We study the learning cost of several learners on inherently easy and hard instances, and conclude on the expected benefits of this new benchmarking tool for RL.
Global Inverse Consistency for Interactive Constraint Satisfaction
Cited by 2 (0 self)
Abstract. Some applications require the interactive resolution of a constraint problem by a human user. In such cases, it is highly desirable that the person who interactively solves the problem is never given the choice to select values that do not lead to solutions. We call this property global inverse consistency. Existing systems simulate it either by maintaining arc consistency after each assignment performed by the user or by compiling the problem offline as a multi-valued decision diagram. In this paper, we define several questions related to global inverse consistency and analyse their complexity. Despite their theoretical intractability, we propose several algorithms for enforcing global inverse consistency and we show that the best version is efficient enough to be used in an interactive setting on several configuration and design problems. We finally extend our contribution to the inverse consistency of tuples.
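Global inverse consistency itself can be made concrete with a brute-force sketch (exponential, and thus nothing like the efficient algorithms the paper proposes): a value survives only if some complete solution assigns it.

```python
from itertools import product

def enforce_gic(domains, constraints):
    """Brute-force global inverse consistency (sketch): keep a value only
    if at least one full solution assigns it, so every remaining value the
    user can pick is guaranteed to extend to a solution.

    domains:     list of value collections, one per variable.
    constraints: callables taking a full tuple and returning True/False.
    """
    solutions = [t for t in product(*domains)
                 if all(c(t) for c in constraints)]
    return [{t[i] for t in solutions} for i in range(len(domains))]
```

For instance, with two 0/1 variables under the constraint x0 < x1, only the assignment (0, 1) survives, so each domain is pruned to a single value. The enumeration over all tuples is exactly the cost the paper's algorithms are designed to avoid.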
Propagating Soft Table Constraints
In 18th International Conference on Principles and Practice of Constraint Programming (CP'12), Québec, Canada, 2012
Cited by 2 (2 self)
WCSP is a framework that has attracted a lot of attention during the last decade. In particular, many filtering approaches have been developed based on the concept of equivalence-preserving transformations (cost transfer operations), using the definition of soft local consistencies such as, for example, node consistency, arc consistency, full directional arc consistency, and existential directional arc consistency. Almost all algorithms related to these properties have been introduced for binary weighted constraint networks, and most of the conducted experiments typically include networks with binary and ternary constraints only. In this paper, we focus on extensional soft constraints of large arity, so-called soft table constraints. We propose an algorithm to enforce a soft version of generalized arc consistency (GAC) on such constraints by combining the techniques of cost transfer and simple tabular reduction, the latter dynamically maintaining the list of allowed tuples in constraint tables. On various series of problem instances containing soft table constraints of large arity, we show the practical interest of our approach.
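An STR-style pass on a soft table can be sketched as follows (a hypothetical simplification of the combination described above; the real algorithm also performs cost transfer): tuples are dropped when invalidated by domain reductions or when their cost reaches the forbidden cost `top`.

```python
def soft_str_pass(domains, table, top):
    """One STR-style reduction pass on a soft table constraint (sketch).

    domains: list of sets of allowed values, one per variable.
    table:   list of (tuple, cost) pairs.
    top:     the forbidden cost; tuples at or above it cannot be part of
             any acceptable solution.
    """
    # Keep a tuple only if it is still valid and its cost stays below top.
    valid = [(t, c) for t, c in table
             if c < top and all(t[i] in domains[i] for i in range(len(domains)))]
    # A value survives only if some surviving tuple supports it.
    supported = [set() for _ in domains]
    for t, _ in valid:
        for i, v in enumerate(t):
            supported[i].add(v)
    return [d & s for d, s in zip(domains, supported)], valid
```

With top = 3, a tuple of cost 5 is treated exactly like a forbidden tuple in hard STR, so values supported only by that tuple are pruned from the domains.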