Results 1–10 of 10
Ants can solve Constraint Satisfaction Problems
IEEE Transactions on Evolutionary Computation, 2001
Abstract

Cited by 32 (11 self)
In this paper we describe a new incomplete approach for solving constraint satisfaction problems (CSPs) based on the ant colony optimization (ACO) metaheuristic. The idea is to use artificial ants to keep track of promising areas of the search space by laying trails of pheromone. This pheromone information is used to guide the search, as a heuristic for choosing values to be assigned to variables.
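The pheromone-guided value choice this abstract describes can be sketched roughly as follows. This is an illustrative toy restricted to binary not-equal constraints, not the authors' algorithm: the function names (`aco_csp`, `choose_value`), the trail-update rule, and all parameter defaults are assumptions for illustration.

```python
import random

def conflicts(assignment, constraints):
    """Count violated constraints; here each constraint is a not-equal pair."""
    return sum(1 for a, b in constraints if assignment[a] == assignment[b])

def choose_value(var, domain, pheromone, rng):
    """Pick a value for var with probability proportional to its trail."""
    weights = [pheromone[(var, v)] for v in domain]
    return rng.choices(domain, weights=weights, k=1)[0]

def aco_csp(variables, domain, constraints,
            n_ants=30, n_iter=50, evaporation=0.1, seed=0):
    rng = random.Random(seed)
    # one pheromone trail per (variable, value) pair, initially uniform
    pheromone = {(x, v): 1.0 for x in variables for v in domain}
    best, best_conf = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            assignment = {x: choose_value(x, domain, pheromone, rng)
                          for x in variables}
            c = conflicts(assignment, constraints)
            if c < best_conf:
                best, best_conf = assignment, c
            if best_conf == 0:
                return best
        # evaporate all trails, then reinforce the best assignment found
        for key in pheromone:
            pheromone[key] *= 1.0 - evaporation
        for x, v in best.items():
            pheromone[(x, v)] += 1.0 / (1.0 + best_conf)
    return best
```

For example, `aco_csp(['a', 'b', 'c'], [0, 1, 2], [('a', 'b'), ('b', 'c'), ('a', 'c')])` colours a triangle: trails on values used by low-conflict assignments grow, biasing later ants toward promising areas of the search space, as the abstract describes.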
Measuring the searched space to guide efficiency: The principle and evidence on constraint satisfaction
Proceedings of the 7th International Conference on Parallel Problem Solving from Nature, number 2439 in LNCS, 2002
Abstract

Cited by 11 (7 self)
In this paper we present a new tool to measure the efficiency of evolutionary algorithms by storing the whole searched space of a run, a process whereby we gain insight into the number of distinct points in the state space an algorithm has visited, as opposed to the number of function evaluations done within the run. This investigation demonstrates a certain inefficiency of the classical mutation operator with mutation rate 1/l, where l is the dimension of the state space. Furthermore, we present a model for predicting this inefficiency and verify it empirically using the new tool on binary constraint satisfaction problems.
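The measurement idea — store every visited point and compare distinct points against function evaluations — can be illustrated with a toy (1+1)-style run on bitstrings. The setup and names below are assumptions for illustration, not the paper's tool:

```python
import random

def measure_search(l=20, evals=1000, seed=1):
    """Run a (1+1)-style hill climber with bitwise mutation rate 1/l,
    recording every distinct point visited in a set."""
    rng = random.Random(seed)
    x = tuple(rng.randint(0, 1) for _ in range(l))
    seen = {x}
    for _ in range(evals):
        # flip each bit independently with probability 1/l
        y = tuple(b ^ int(rng.random() < 1.0 / l) for b in x)
        seen.add(y)
        if sum(y) >= sum(x):  # maximise number of ones (a stand-in fitness)
            x = y
    return evals, len(seen)
```

With rate 1/l, a mutation flips nothing with probability (1 - 1/l)^l ≈ 37%, so a substantial fraction of evaluations revisit the current point and `len(seen)` falls well below the evaluation count — the kind of inefficiency such a tool makes visible.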
Hyperheuristics for the Dynamic Variable Ordering in Constraint Satisfaction Problems
Abstract

Cited by 8 (7 self)
The idea behind hyperheuristics is to discover some combination of straightforward heuristics to solve a wide range of problems. To be worthwhile, such a combination should outperform the single heuristics. This paper presents a GA-based method that produces general hyperheuristics for the dynamic variable ordering within Constraint Satisfaction Problems. The GA uses a variable-length representation, which evolves combinations of condition-action rules, producing hyperheuristics after going through a learning process which includes training and testing phases. Such hyperheuristics, when tested with a large set of benchmark problems, produce encouraging results for most of the cases. The testbed is composed of problems randomly generated using an algorithm proposed by Prosser [17].
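A condition-action rule set of the kind this abstract describes might, in spirit, look like the following sketch. The state features, rule encoding, and heuristic names here are all assumptions for illustration; in the paper the rule set is evolved by a GA rather than hand-written:

```python
def min_domain(state):
    """Dynamic variable ordering: pick the variable with the smallest live domain."""
    return min(state["domains"], key=lambda v: len(state["domains"][v]))

def max_degree(state):
    """Pick the variable involved in the most constraints."""
    return max(state["degrees"], key=state["degrees"].get)

def hyper_choose(rules, state):
    """Scan condition-action rules in order and fire the first matching one."""
    for condition, heuristic in rules:
        if condition(state):
            return heuristic(state)
    return min_domain(state)  # assumed fallback

# a hand-written stand-in for an evolved rule set
rules = [
    (lambda s: s["tightness"] > 0.5, max_degree),  # tight problems: high degree first
    (lambda s: True, min_domain),                  # otherwise: smallest domain first
]
```

The point of the hyperheuristic is exactly this dispatch: different regions of problem space trigger different single heuristics, so the combination can beat any one of them alone.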
Combining Local Search and Fitness Function Adaptation in a GA for Solving Binary Constraint Satisfaction Problems
Proceedings of the Genetic and Evolutionary Computation Conference, 2000
Abstract

Cited by 4 (1 self)
In this paper we investigate the effectiveness of GAs that combine these two methods; more specifically, whether the use of a more involved fitness function improves the performance of GLS algorithms for random BCSPs. GLS has recently been used in a hybrid GA [3]: at each generation, the offspring produced by the application of genetic operators are improved by means of a local search procedure. Next, we replace the fitness function with the fitness function from SAWing [2] and define an adaptive cost function (resulting in GLS+SAW). We conduct extensive experiments on a large set of standard benchmark instances of random BCSPs. Binary constraint satisfaction problems are defined by having a set of variables, where each variable has a domain of values, and a set of constraints acting between pairs of variables. A solution of a BCSP is an assignment of values to the variables such that all restrictions imposed by the constraints are satisfied. In this paper we use randomly generated BCSPs that can be defined by four parameters: the number of variables n, the (uniform) domain size m, the probability of a constraint between two variables d (density), and the probability of a conflict between two values of a constraint t (tightness). The results indicate that the addition of the SAWing method does not deteriorate the success rate (percentage of runs that find a solution, SR) of GLS, while it decreases the average number of fitness evaluations (AES) for some classes of problems. When comparing GLS+SAW with one of the best GA-based algorithms, the Microgenetic Iterative Descent Algorithm (MIDA) [1], we found that GLS+SAW is slightly better in both SR and AES.
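The four-parameter random BCSP model described above (n variables, uniform domain size m, constraint density d, tightness t) can be sketched directly. The generator below is a generic illustration of that parameterisation, not Prosser's exact algorithm, and the function names are assumed:

```python
import itertools
import random

def random_bcsp(n, m, density, tightness, seed=0):
    """Random binary CSP: each of the n*(n-1)/2 variable pairs is constrained
    with probability `density`; for a constrained pair, each of the m*m value
    pairs is a conflict with probability `tightness`."""
    rng = random.Random(seed)
    constraints = {}
    for i, j in itertools.combinations(range(n), 2):
        if rng.random() < density:
            constraints[(i, j)] = {(a, b)
                                   for a in range(m) for b in range(m)
                                   if rng.random() < tightness}
    return constraints

def satisfies(assignment, constraints):
    """A solution assigns every variable a value hitting no conflict pair."""
    return all((assignment[i], assignment[j]) not in bad
               for (i, j), bad in constraints.items())
```

Sweeping d and t at fixed n and m is what produces the easy and hard instance classes these papers benchmark on.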
Using hyperheuristics for the dynamic variable ordering in binary constraint satisfaction problems
In: 7th Mexican International Conference on Artificial Intelligence, Lecture Notes in Computer Science, 2008
Abstract

Cited by 3 (0 self)
The idea behind hyperheuristics is to discover some combination of straightforward heuristics to solve a wide range of problems. To be worthwhile, such a combination should outperform the single heuristics. This paper presents a GA-based method that produces general hyperheuristics for the dynamic variable ordering within Constraint Satisfaction Problems. The GA uses a variable-length representation, which evolves combinations of condition-action rules, producing hyperheuristics after going through a learning process which includes training and testing phases. Such hyperheuristics, when tested with a large set of benchmark problems, produce encouraging results for most of the cases. Some instances of CSP are harder to solve than others, due to the constraint density and the conflict density [4]. The testbed is composed of hard problems randomly generated by an algorithm proposed by Prosser [18].
How to Handle Constraints with Evolutionary Algorithms
Chapman & Hall/CRC Press, Ch. 10, 2001
Abstract

Cited by 3 (0 self)
In this paper we describe evolutionary algorithms (EAs) for constraint handling. Constraint handling is not straightforward in an EA because the search operators, mutation and recombination, are 'blind' to constraints. Hence, there is no guarantee that if the parents satisfy some constraints, the offspring will satisfy them as well. This suggests that the presence of constraints in a problem makes EAs intrinsically unsuited to solve this problem. This should especially hold when the problem does not contain an objective function to be optimised, but only constraints: the category of constraint satisfaction problems.
Hybrid Evolutionary Algorithms for Constraint Satisfaction Problems
Abstract

Cited by 1 (1 self)
We study a selected group of hybrid EAs for solving CSPs, consisting of the best performing EAs from the literature. We investigate the contribution of the evolutionary component to their performance by comparing the hybrid EAs with their “de-evolutionarised” variants. The experiments show that “de-evolutionarising” can increase performance, in some cases doubling it. Considering that the problem domain and the algorithms are arbitrarily selected from the “memetic niche”, it seems likely that the same effect occurs for other problems and algorithms. Therefore, our conclusion is that after designing and building a memetic algorithm, one should perform a verification by comparing this algorithm with its “de-evolutionarised” variant.
Towards Hybrid Methods for Solving Hard Combinatorial Optimization Problems
2006