Results 1 - 10 of 380
Adaptive Constraint Satisfaction
- Workshop of the UK Planning and Scheduling, 1996
"... Many different approaches have been applied to constraint satisfaction. These range from complete backtracking algorithms to sophisticated distributed configurations. However, most research effort in the field of constraint satisfaction algorithms has concentrated on the use of a single algorithm fo ..."
Abstract
-
Cited by 952 (43 self)
Many different approaches have been applied to constraint satisfaction. These range from complete backtracking algorithms to sophisticated distributed configurations. However, most research effort in the field of constraint satisfaction algorithms has concentrated on the use of a single algorithm for solving all problems. At the same time, a consensus appears to have developed to the effect that it is unlikely that any single algorithm is always the best choice for all classes of problem. In this paper we argue that an adaptive approach should play an important part in constraint satisfaction. This approach relaxes the commitment to using a single algorithm once search commences. As a result, we claim that it is possible to undertake a more focused approach to problem solving, allowing for the correction of bad algorithm choices and for capitalising on opportunities for gain by dynamically changing to more suitable candidates.
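The switching idea argued for here can be pictured, at a very high level, as a solver that abandons its current algorithm when it appears to be doing badly. The sketch below is a hypothetical illustration of that loop; the candidate solvers, the backtrack budget, and the switching rule are placeholders, not the paper's actual strategy.

```python
# Hypothetical sketch of adaptive algorithm switching: run each candidate
# under a resource budget and move on when the budget is exhausted without
# a solution. The candidates and the switching rule are illustrative only.

def adaptive_solve(problem, candidates, budget=10_000):
    """candidates: callables f(problem, limit) -> (solution or None,
    backtracks_used). Returns the first solution found, or None."""
    for solve in candidates:
        solution, used = solve(problem, limit=budget)
        if solution is not None:
            return solution      # this algorithm turned out to be a good choice
        # Budget exhausted: treat the choice as bad and switch candidates.
    return None
```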
Algorithms for Constraint-Satisfaction Problems: A Survey
1992
"... A large number of problems in AI and other areas of computer science can be viewed as special cases of the constraint-satisfaction problem. Some examples are machine vision, belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, the planning of genetic experiments, an ..."
Abstract
-
Cited by 449 (0 self)
A large number of problems in AI and other areas of computer science can be viewed as special cases of the constraint-satisfaction problem. Some examples are machine vision, belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, the planning of genetic experiments, and the satisfiability problem. A number of different approaches have been developed for solving these problems. Some of them use constraint propagation to simplify the original problem. Others use backtracking to directly search for possible solutions. Some are a combination of these two techniques. This article overviews many of these approaches in a tutorial fashion.
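As a concrete illustration of the backtracking family of approaches the survey covers, here is a minimal chronological backtracking solver for binary CSPs; this is a generic textbook-style sketch, not code from the survey, and the 3-colouring instance at the end is an arbitrary toy example.

```python
# Minimal chronological backtracking for a binary CSP (generic sketch).
# Constraints are predicates over ordered pairs of variables.

def consistent(var, value, assignment, constraints):
    for (x, y), pred in constraints.items():
        if x == var and y in assignment and not pred(value, assignment[y]):
            return False
        if y == var and x in assignment and not pred(assignment[x], value):
            return False
    return True

def backtrack(assignment, variables, domains, constraints):
    if len(assignment) == len(variables):
        return dict(assignment)                 # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
            del assignment[var]                 # undo and try the next value
    return None                                 # dead end: backtrack in caller

# Toy instance: colour a triangle A-B-C plus an edge C-D with 3 colours.
variables = ["A", "B", "C", "D"]
domains = {v: ["red", "green", "blue"] for v in variables}
ne = lambda a, b: a != b
constraints = {("A", "B"): ne, ("B", "C"): ne, ("A", "C"): ne, ("C", "D"): ne}
print(backtrack({}, variables, domains, constraints))
```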
Methods for Task Allocation Via Agent Coalition Formation
1998
"... Task execution in multi-agent environments may require cooperation among agents. Given a set of agents and a set of tasks which they have to satisfy, we consider situations where each task should be attached to a group of agents that will perform the task. Task allocation to groups of agents is nece ..."
Abstract
-
Cited by 364 (21 self)
Task execution in multi-agent environments may require cooperation among agents. Given a set of agents and a set of tasks that they have to satisfy, we consider situations where each task should be attached to a group of agents that will perform the task. Task allocation to groups of agents is necessary when tasks cannot be performed by a single agent. However, it may also be beneficial when groups perform tasks more efficiently than single agents do. In this paper we present several solutions to the problem of task allocation among autonomous agents, and suggest that the agents form coalitions in order to perform tasks or to improve the efficiency of their performance. We present efficient distributed algorithms with low ratio bounds and low computational complexities. These properties are proven theoretically and supported by simulations and an implementation in an agent system. Our methods are based on both the algorithmic aspects of combinatorics and approximat...
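To make the setting concrete, here is a toy greedy assignment of tasks to bounded-size coalitions. The centralized loop, the cost oracle, and the one-coalition-per-agent restriction are hypothetical simplifications, not the paper's distributed algorithms or their ratio-bound analysis.

```python
# Toy greedy task-to-coalition assignment (hypothetical simplification).
from itertools import combinations

def greedy_coalitions(tasks, agents, cost, max_size=2):
    """cost(task, coalition) -> numeric cost, or None if infeasible."""
    free_agents = set(agents)
    assignment = {}
    for task in tasks:
        candidates = []
        for size in range(1, max_size + 1):
            for coalition in combinations(sorted(free_agents), size):
                c = cost(task, coalition)
                if c is not None:
                    candidates.append((c, coalition))
        if candidates:
            best_cost, best = min(candidates)
            assignment[task] = best
            free_agents -= set(best)     # these agents are now committed
    return assignment

# Example: tasks need a given amount of capability; agents each supply some.
capability = {"a1": 3, "a2": 2, "a3": 4}
need = {"t1": 5, "t2": 4}
cost = lambda t, coal: (len(coal)
                        if sum(capability[a] for a in coal) >= need[t] else None)
print(greedy_coalitions(["t1", "t2"], capability, cost))
# -> {'t1': ('a1', 'a2'), 't2': ('a3',)}
```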
Using CSP look-back techniques to solve real-world SAT instances
1997
"... We report on the performance of an enhanced version of the “Davis-Putnam ” (DP) proof procedure for propositional satisfiability (SAT) on large instances derived from realworld problems in planning, scheduling, and circuit diagnosis and synthesis. Our results show that incorporating CSP lookback tec ..."
Abstract
-
Cited by 232 (1 self)
We report on the performance of an enhanced version of the “Davis-Putnam” (DP) proof procedure for propositional satisfiability (SAT) on large instances derived from real-world problems in planning, scheduling, and circuit diagnosis and synthesis. Our results show that incorporating CSP look-back techniques, especially the relatively new technique of relevance-bounded learning, renders many problems easy that are otherwise beyond DP’s reach. Frequently these techniques make DP, a systematic algorithm, perform as well as or better than stochastic SAT algorithms such as GSAT or WSAT. We recommend that such techniques be included as options in implementations of DP, just as they are in systematic algorithms for the more general constraint satisfaction problem.
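The relevance-bounded learning mentioned here, keeping a learned conflict clause only while it remains "relevant" to the current partial assignment, can be sketched roughly as below. The clause representation and the exact relevance test are simplified assumptions, not the authors' implementation.

```python
# Rough sketch of relevance-bounded retention of learned clauses.
# Assumptions: DIMACS-style integer literals, and "i-relevant" approximated
# as "at most i literals not falsified by the current partial assignment".

def falsified(lit, assignment):
    var, sign = abs(lit), lit > 0
    return var in assignment and assignment[var] != sign

def prune_learned(learned_clauses, assignment, i=3):
    """Keep only the learned clauses that are still i-relevant."""
    kept = []
    for clause in learned_clauses:
        unfalsified = sum(1 for lit in clause if not falsified(lit, assignment))
        if unfalsified <= i:
            kept.append(clause)   # still close to being a conflict: worth keeping
        # otherwise drop it, bounding memory and propagation overhead
    return kept

# Example: with 1=True and 2=False assigned, clause [-1, 2, 3, 4] has two
# unfalsified literals (3 and 4): kept for i=2, dropped for i=1.
print(prune_learned([[-1, 2, 3, 4]], {1: True, 2: False}, i=2))  # [[-1, 2, 3, 4]]
print(prune_learned([[-1, 2, 3, 4]], {1: True, 2: False}, i=1))  # []
```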
Heavy-Tailed Phenomena in Satisfiability and Constraint Satisfaction Problems
- J. of Autom. Reasoning, 2000
"... Abstract. We study the runtime distributions of backtrack procedures for propositional satisfiability and constraint satisfaction. Such procedures often exhibit a large variability in performance. Our study reveals some intriguing properties of such distributions: They are often characterized by ver ..."
Abstract
-
Cited by 165 (27 self)
We study the runtime distributions of backtrack procedures for propositional satisfiability and constraint satisfaction. Such procedures often exhibit a large variability in performance. Our study reveals some intriguing properties of such distributions: They are often characterized by very long tails or “heavy tails”. We will show that these distributions are best characterized by a general class of distributions that can have infinite moments (i.e., an infinite mean, variance, etc.). Such nonstandard distributions have recently been observed in areas as diverse as economics, statistical physics, and geophysics. They are closely related to fractal phenomena, whose study was introduced by Mandelbrot. We also show how random restarts can effectively eliminate heavy-tailed behavior. Furthermore, for harder problem instances, we observe long tails on the left-hand side of the distribution, which is indicative of a non-negligible fraction of relatively short, successful runs. A rapid restart strategy eliminates heavy-tailed behavior and takes advantage of short runs, significantly reducing expected solution time. We demonstrate speedups of up to two orders of magnitude on SAT and CSP encodings of hard problems in planning, scheduling, and circuit synthesis.
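The effect of a rapid-restart strategy on a heavy-tailed runtime distribution can be reproduced with purely synthetic data; the Pareto run-length model, the cutoff of 10, and the sample size below are illustrative assumptions, not the paper's solvers or benchmarks.

```python
# Why rapid restarts help under heavy-tailed runtimes: synthetic illustration.
import random

def run_length():
    return random.paretovariate(0.8)   # heavy tail: infinite mean for alpha <= 1

def time_without_restarts():
    return run_length()

def time_with_restarts(cutoff=10.0):
    total = 0.0
    while True:
        t = run_length()
        if t <= cutoff:
            return total + t           # this run finished before the cutoff
        total += cutoff                # abandon the run and restart

random.seed(0)
n = 100_000
print("no restarts :", sum(time_without_restarts() for _ in range(n)) / n)
print("cutoff = 10 :", sum(time_with_restarts() for _ in range(n)) / n)
# The restart policy bounds the cost of unlucky runs, so its sample mean is
# small and stable while the no-restart mean is large and erratic.
```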
The Quest for Efficient Boolean Satisfiability Solvers
2002
"... has seen much interest in not just the theoretical computer science community, but also in areas where practical solutions to this problem enable significant practical applications. Since the first development of the basic search based algorithm proposed by Davis, Putnam, Logemann and Loveland (DPLL ..."
Abstract
-
Cited by 149 (3 self)
The Boolean satisfiability (SAT) problem has seen much interest not just in the theoretical computer science community, but also in areas where practical solutions to it enable significant applications. Since the first development of the basic search-based algorithm proposed by Davis, Putnam, Logemann and Loveland (DPLL) about forty years ago, this area has seen active research effort with many interesting contributions that have culminated in state-of-the-art SAT solvers today being able to handle problem instances with thousands, and in some cases even millions, of variables. In this paper we examine some of the main ideas along this passage that have led to our current capabilities. Given the depth of the literature in this field, it is impossible to do this in any comprehensive way; rather, we focus on techniques with consistently demonstrated efficiency in available solvers. For the most part, we focus on techniques within the basic DPLL search framework, but also briefly describe other approaches and look at some possible future research directions.
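For reference, the basic DPLL framework discussed here boils down to unit propagation plus case splitting. The following is a bare-bones, unoptimized sketch with clauses as lists of signed integers; it deliberately omits the clause learning, watched literals, and branching heuristics that make modern solvers scale.

```python
# Bare-bones DPLL: unit propagation + splitting (illustrative sketch only).

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    changed = True
    while changed:                              # unit propagation to fixpoint
        changed = False
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True
                        break
                else:
                    lits.append(lit)
            if satisfied:
                continue
            if not lits:
                return None                     # conflict: every literal falsified
            if len(lits) == 1:
                unit = lits[0]
                assignment[abs(unit)] = unit > 0
                changed = True
            simplified.append(lits)
        clauses = simplified
    if not clauses:
        return assignment                       # all clauses satisfied
    var = abs(clauses[0][0])                    # split on an unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

print(dpll([[1, 2], [-1, 3], [-2, -3]]))        # -> {1: True, 3: True, 2: False}
```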
Locating the Phase Transition in Binary Constraint Satisfaction Problems
- Artificial Intelligence, 1994
"... The phase transition in binary constraint satisfaction problems, i.e. the transition from a region in which almost all problems have many solutions to a region in which almost all problems have no solutions, as the constraints become tighter, is investigated by examining the behaviour of samples of ..."
Abstract
-
Cited by 135 (4 self)
The phase transition in binary constraint satisfaction problems, i.e. the transition from a region in which almost all problems have many solutions to a region in which almost all problems have no solutions, as the constraints become tighter, is investigated by examining the behaviour of samples of randomly-generated problems. In contrast to theoretical work, which is concerned with the asymptotic behaviour of problems as the number of variables becomes larger, this paper is concerned with the location of the phase transition in finite problems. The accuracy of a prediction based on the expected number of solutions is discussed; it is shown that the variance of the number of solutions can be used to set bounds on the phase transition and to indicate the accuracy of the prediction. A class of sparse problems, for which the prediction is known to be inaccurate, is considered in detail; it is shown that, for these problems, the phase transition depends on the topology of the constraint gr...
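For the standard random binary CSP model ⟨n, m, p1, p2⟩ (n variables, uniform domain size m, a proportion p1 of variable pairs constrained, each constraint forbidding a proportion p2 of value pairs), the prediction based on the expected number of solutions can be written as follows; the notation is the standard model's, reconstructed here rather than quoted from the paper.

```latex
% Expected number of solutions for a random <n, m, p1, p2> binary CSP,
% and the tightness at which it falls to 1 (the predicted crossover).
\[
  \mathbb{E}[N] \;=\; m^{\,n}\,(1 - p_2)^{\,p_1\,n(n-1)/2},
  \qquad
  \mathbb{E}[N] = 1
  \;\Longrightarrow\;
  \hat{p}_{2,\mathrm{crit}} \;=\; 1 - m^{-2/(p_1 (n-1))}.
\]
```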
The Constrainedness of Search
- In Proceedings of AAAI-96, 1999
"... We propose a definition of `constrainedness' that unifies two of the most common but informal uses of the term. These are that branching heuristics in search algorithms often try to make the most "constrained" choice, and that hard search problems tend to be "critically constrain ..."
Abstract
-
Cited by 128 (29 self)
We propose a definition of “constrainedness” that unifies two of the most common but informal uses of the term. These are that branching heuristics in search algorithms often try to make the most “constrained” choice, and that hard search problems tend to be “critically constrained”. Our definition of constrainedness generalizes a number of parameters used to study phase transition behaviour in a wide variety of problem domains. As well as predicting the location of phase transitions in solubility, constrainedness provides insight into why problems at phase transitions tend to be hard to solve. Such problems are on a constrainedness “knife-edge”, and we must search deep into the problem before they look more or less soluble. Heuristics that try to get off this knife-edge as quickly as possible by, for example, minimizing the constrainedness are often very effective. We show that heuristics from a wide variety of problem domains can be seen as minimizing the constrainedness (or proxies ...
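The constrainedness parameter is usually stated in terms of the expected number of solutions ⟨Sol⟩ and the size |S| of the state space; the following is the standard formulation of that definition, restated from memory rather than quoted from the paper.

```latex
% Constrainedness of an ensemble of problems: |S| is the size of the
% state space, <Sol> the expected number of solutions.
\[
  \kappa \;=\; 1 \;-\; \frac{\log_2 \langle \mathrm{Sol}\rangle}{\log_2 |S|}
\]
% kappa << 1: under-constrained, almost surely soluble;
% kappa >> 1: over-constrained, almost surely insoluble;
% kappa near 1: the critically constrained region where hard instances cluster.
```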
QuickXplain: preferred explanations and relaxations for over-constrained problems
- In Proceedings of AAAI’04, 2004
"... Over-constrained problems can have an exponential number of conflicts, which explain the failure, and an exponential number of relaxations, which restore the consistency. A user of an interactive application, however, desires explanations and relaxations containing the most important constraints. To ..."
Abstract
-
Cited by 124 (1 self)
Over-constrained problems can have an exponential number of conflicts, which explain the failure, and an exponential number of relaxations, which restore the consistency. A user of an interactive application, however, desires explanations and relaxations containing the most important constraints. To address this need, we define preferred explanations and relaxations based on user preferences between constraints and we compute them by a generic method which works for arbitrary CP, SAT, or DL solvers. We significantly accelerate the basic method by a divide-and-conquer strategy and thus provide the technological basis for the explanation facility of a principal industrial constraint programming tool, which is, for example, used in numerous configuration applications.
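The divide-and-conquer conflict extraction can be sketched as below; the is_consistent oracle stands in for an arbitrary CP, SAT, or DL solver, constraints are assumed ordered from most to least important, and the code is a simplified reconstruction rather than the paper's algorithm verbatim.

```python
# Simplified reconstruction of divide-and-conquer conflict extraction in the
# style of QuickXplain (constraints ordered by importance, generic oracle).

def quickxplain(background, constraints, is_consistent):
    if is_consistent(background + constraints):
        return None                          # nothing to explain
    if not constraints:
        return []
    return _qx(background, bool(background), constraints, is_consistent)

def _qx(B, added, C, is_consistent):
    # If the parent call added something to B and B alone is already
    # inconsistent, no constraint from C needs to be in the conflict.
    if added and not is_consistent(B):
        return []
    if len(C) == 1:
        return list(C)
    half = len(C) // 2
    C1, C2 = C[:half], C[half:]
    X2 = _qx(B + C1, bool(C1), C2, is_consistent)
    X1 = _qx(B + X2, bool(X2), C1, is_consistent)
    return X1 + X2

# Toy oracle: a constraint is a (name, predicate) pair over a single integer;
# a set is consistent iff some value in 0..9 satisfies every predicate.
def is_consistent(cs):
    return any(all(pred(v) for _, pred in cs) for v in range(10))

constraints = [("even", lambda v: v % 2 == 0),
               ("v<3",  lambda v: v < 3),
               ("v>7",  lambda v: v > 7)]
print([name for name, _ in quickxplain([], constraints, is_consistent)])
# -> ['v<3', 'v>7'], a preferred minimal conflict among the given constraints
```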
A Theoretical Evaluation of Selected Backtracking Algorithms
- Artificial Intelligence, 1997
"... In recent years, many new backtracking algorithms for solving constraint satisfaction problems have been proposed. The algorithms are usually evaluated by empirical testing. This method, however, has its limitations. Our paper adopts a di erent, purely theoretical approach, which is based on charact ..."
Abstract
-
Cited by 124 (3 self)
In recent years, many new backtracking algorithms for solving constraint satisfaction problems have been proposed. The algorithms are usually evaluated by empirical testing. This method, however, has its limitations. Our paper adopts a different, purely theoretical approach, which is based on characterizations of the sets of search tree nodes visited by the backtracking algorithms. A notion of inconsistency between instantiations and variables is introduced, and is shown to be a useful tool for characterizing such well-known concepts as backtrack, backjump, and domain annihilation. The characterizations enable us to: (a) prove the correctness of the algorithms, and (b) partially order the algorithms according to two standard performance measures: the number of nodes visited, and the number of consistency checks performed. Among other results, we prove the correctness of Backjumping and Conflict-Directed Backjumping, and show that Forward Checking never visits more nodes than Backjumping. Our approach leads us also to propose a modification to two hybrid backtracking algorithms, Backmarking with Backjumping (BMJ) and Backmarking with Conflict-Directed Backjumping (BM-CBJ), so that they always perform fewer consistency checks than the original algorithms.
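Forward Checking, one of the algorithms compared in the paper, prunes the domains of future variables after each assignment and backtracks as soon as a domain is annihilated. The sketch below is a generic illustration of that behaviour, using 4-queens as an arbitrary example; it is not the formal node and consistency-check model the paper works with.

```python
# Generic sketch of Forward Checking on a binary CSP: after assigning a
# variable, remove inconsistent values from the remaining domains and
# backtrack as soon as some domain becomes empty.

def fc_search(assignment, domains, conflicts):
    """domains: {var: set(values)} for unassigned variables;
    conflicts(x, vx, y, vy) is True when the two assignments clash."""
    if not domains:
        return assignment
    var = min(domains, key=lambda v: len(domains[v]))   # smallest domain first
    rest = {v: vals for v, vals in domains.items() if v != var}
    for value in domains[var]:
        pruned = {v: {w for w in vals if not conflicts(var, value, v, w)}
                  for v, vals in rest.items()}
        if all(pruned.values()):                        # no domain wiped out
            result = fc_search({**assignment, var: value}, pruned, conflicts)
            if result is not None:
                return result
    return None

# Example: 4-queens, with columns as variables and rows as values.
def conflicts(c1, r1, c2, r2):
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

print(fc_search({}, {c: set(range(4)) for c in range(4)}, conflicts))
# -> {0: 1, 1: 3, 2: 0, 3: 2}
```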