Results 1–10 of 33
Solving binary constraint satisfaction problems using evolutionary algorithms with an adaptive fitness function
In Eiben et al.
Cited by 32 (14 self)
Abstract. This paper presents a comparative study of Evolutionary Algorithms (EAs) for Constraint Satisfaction Problems (CSPs). We focus on EAs where fitness is based on penalization of constraint violations and the penalties are adapted during the execution. Three different EAs based on this approach are implemented. For highly connected constraint networks, the results provide further empirical support to the theoretical prediction of the phase transition in binary CSPs.
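The penalty-adaptation idea this abstract describes can be sketched as a (1+1) EA with stepwise adaptation of constraint weights, in the spirit of Eiben et al.'s SAW mechanism. The function names, parameters, and update schedule below are illustrative assumptions, not taken from the paper:

```python
import random

# A binary CSP: variables 0..n-1 over domain {0..d-1}, plus a list of
# constraints ((i, j), forbidden), where `forbidden` is a set of value
# pairs that variables i and j may not take simultaneously.
def violated(assignment, constraints):
    """Indices of the constraints broken by `assignment`."""
    return [k for k, ((i, j), forbidden) in enumerate(constraints)
            if (assignment[i], assignment[j]) in forbidden]

def saw_ea(n, d, constraints, generations=2000, adapt_every=25, seed=0):
    """(1+1) EA whose penalty weights are adapted during the run (SAW-style)."""
    rng = random.Random(seed)
    weights = [1.0] * len(constraints)
    best = [rng.randrange(d) for _ in range(n)]

    def fitness(a):                       # weighted penalty, to be minimised
        return sum(weights[k] for k in violated(a, constraints))

    for g in range(1, generations + 1):
        child = best[:]
        child[rng.randrange(n)] = rng.randrange(d)    # mutate one variable
        if fitness(child) <= fitness(best):
            best = child
        if not violated(best, constraints):           # all constraints hold
            return best
        if g % adapt_every == 0:                      # raise the penalty of
            for k in violated(best, constraints):     # still-broken constraints
                weights[k] += 1.0
    return None
```

On a toy instance (3-colouring a triangle, encoded as three "not equal" constraints), the periodically raised weights steer the search toward the constraints it keeps breaking.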
Searching for Maximum Cliques with Ant Colony Optimization
In Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2003: EvoCOP, EvoIASP, EvoSTim, volume 2611 of LNCS, 2003
Cited by 23 (2 self)
In this paper, we investigate the capabilities of Ant Colony Optimization (ACO) for solving the maximum clique problem. We describe AntClique, an algorithm that successively generates maximal cliques through the repeated addition of vertices into partial cliques.
A study of ACO capabilities for solving the maximum clique problem
Journal of Heuristics, 2006
Cited by 18 (5 self)
This paper investigates the capabilities of the Ant Colony Optimization (ACO) metaheuristic for solving the maximum clique problem, the goal of which is to find a largest set of pairwise adjacent vertices in a graph. We propose and compare two different instantiations of a generic ACO algorithm for this problem. Basically, the generic ACO algorithm successively generates maximal cliques through the repeated addition of vertices into partial cliques, and uses "pheromone trails" as a greedy heuristic to choose, at each step, the next vertex to enter the clique. The two instantiations differ in the way pheromone trails are laid and exploited, i.e., on edges or on vertices of the graph. We illustrate the behavior of the two ACO instantiations on a representative benchmark instance and we study the impact of pheromone on the solution process. We consider two measures, the resampling and the dispersion ratio, for providing an insight into the performance at run time. We also study the benefit of integrating a local search procedure within the proposed ACO algorithm, and we show that this improves the solution process. Finally, we compare ACO performance with that of three other representative heuristic approaches, showing that the former obtains competitive results.
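The vertex-pheromone instantiation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact algorithm; the parameter names and update rule are assumptions:

```python
import random

def ant_clique(adj, n_ants=10, n_cycles=50, alpha=1.0, rho=0.1, seed=0):
    """Vertex-pheromone ACO sketch for maximum clique: each ant grows a
    maximal clique by repeatedly adding a vertex adjacent to all current
    members, chosen with probability proportional to its pheromone."""
    rng = random.Random(seed)
    n = len(adj)                               # adj[v] = set of neighbours of v
    tau = [1.0] * n                            # pheromone on vertices
    best = []
    for _ in range(n_cycles):
        cycle_best = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            clique = [start]
            cand = set(adj[start])             # vertices adjacent to the clique
            while cand:
                verts = list(cand)
                weights = [tau[v] ** alpha for v in verts]
                v = rng.choices(verts, weights=weights)[0]
                clique.append(v)
                cand &= adj[v]                 # keep only common neighbours
            if len(clique) > len(cycle_best):
                cycle_best = clique
        if len(cycle_best) > len(best):
            best = cycle_best
        tau = [t * (1 - rho) for t in tau]     # evaporation
        for v in cycle_best:                   # reward the cycle's best clique
            tau[v] += len(cycle_best)
    return best
```

The edge-pheromone variant the paper also studies would instead index `tau` by vertex pairs and weight a candidate by the trails on its edges to the current clique.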
A Genetic Algorithm for Searching Spatial Configurations
 IEEE Transactions on Evolutionary Computation
Cited by 6 (0 self)
Searching spatial configurations is a particular case of maximal constraint satisfaction problems, where constraints expressed by spatial and nonspatial properties guide the search process. In the spatial domain, binary spatial relations are typically used for specifying constraints while searching spatial configurations. Searching configurations is particularly intractable when configurations are derived from a combination of objects, which involves a hard combinatorial problem. This paper presents a genetic algorithm that combines a direct and an indirect approach to treating binary constraints in genetic operators. A new genetic operator combines randomness and heuristics for guiding the reproduction of new individuals in a population. Individuals are composed of spatial objects whose relationships are indexed by a content measure. This paper describes the genetic algorithm and presents experimental results that compare the genetic algorithm with a deterministic and a local-search algorithm. These experiments show the convenience of using a genetic algorithm when the complexity of the queries and databases does not guarantee the tractability of a deterministic strategy. Index Terms: Evolutionary computation, genetic algorithm, geographic information systems, constraint satisfaction problems, information retrieval.
Boosting ACO with a Preprocessing Step
Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2002: EvoCOP, EvoIASP, EvoSTim, 2002
Cited by 6 (3 self)
When solving a combinatorial optimization problem with the Ant Colony Optimization (ACO) metaheuristic, one usually has to find a compromise between guiding and diversifying the search. Indeed, ACO uses pheromone to attract ants. When the sensitivity of ants to pheromone is increased, they converge more quickly towards a solution but, as a counterpart, they usually find worse solutions. In this paper, we first study the influence of ACO parameters on the exploratory ability of ants. We then study the evolution of the impact of pheromone during the solution process with respect to the management of its cost. We finally propose to introduce a preprocessing step that favors a larger exploration of the search space at the beginning of the search, at low cost. We illustrate our approach on AntSolver, an ACO algorithm designed to solve Constraint Satisfaction Problems, and we show on random binary problems that it finds better solutions more than twice as quickly.
Integration of ACO in a Constraint Programming Language
Cited by 5 (0 self)
Abstract. We propose to integrate ACO in a Constraint Programming (CP) language. Basically, we use the CP language to describe the problem to solve by means of constraints and we use the CP propagation engine to reduce the search space and check constraint satisfaction; however, the classical backtrack search of CP is replaced by an ACO search. We report first experimental results on the car sequencing problem and compare different pheromone strategies for this problem.
Local Search Methods
2006
Cited by 5 (0 self)
Local search is one of the fundamental paradigms for solving computationally hard combinatorial problems, including the constraint satisfaction problem (CSP). It provides the basis for some of the most successful and versatile methods for solving the large and difficult problem instances encountered in many real-life applications. Despite impressive advances in systematic, complete search algorithms, local search methods in many cases represent the only feasible way of solving these large and complex instances. Local search algorithms are also naturally suited for dealing with the optimisation criteria arising in many practical applications. The basic idea underlying local search is to start with a randomly or heuristically generated candidate solution of a given problem instance, which may be infeasible, suboptimal or incomplete, and to iteratively improve this candidate solution by means of typically minor modifications. Different local search methods vary in the way in which improvements are achieved, and in particular, in the way in which situations are handled in which no direct improvement is possible. Most local search methods use randomisation to ensure that the search process does not …
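The iterative-improvement loop described above can be illustrated with the classic min-conflicts heuristic for CSPs, shown here on the n-queens problem. This is a generic sketch, not code from the chapter; all names are our own:

```python
import random

def min_conflicts(n, d, conflicts, max_steps=10000, seed=0):
    """Start from a random complete assignment, then repeatedly pick a
    conflicted variable and move it to a value with the fewest conflicts
    (ties broken at random); stop when no constraint is violated."""
    rng = random.Random(seed)
    a = [rng.randrange(d) for _ in range(n)]
    for _ in range(max_steps):
        bad = [i for i in range(n) if conflicts(a, i) > 0]
        if not bad:
            return a                           # all constraints satisfied
        i = rng.choice(bad)
        scores = [(conflicts(a[:i] + [v] + a[i + 1:], i), rng.random(), v)
                  for v in range(d)]
        a[i] = min(scores)[2]                  # least-conflicting value
    return None                                # not solved within the budget

def queens_conflicts(a, i):
    """Number of queens attacking queen i (same row or diagonal),
    where a[j] is the row of the queen in column j."""
    return sum(1 for j in range(len(a)) if j != i and
               (a[j] == a[i] or abs(a[j] - a[i]) == abs(j - i)))

solution = min_conflicts(8, 8, queens_conflicts)
```

The random tie-breaking term in `scores` is one way of realising the randomisation the text mentions: it lets the search take sideways moves instead of cycling when no direct improvement exists.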
Design of experiments for the tuning of optimisation algorithms
2007
Cited by 4 (1 self)
This thesis presents a set of rigorous methodologies for tuning the performance of algorithms that solve optimisation problems. Many optimisation problems are difficult and time-consuming to solve exactly. An alternative is to use an approximate algorithm that solves the problem to an acceptable level of quality and provides such a solution in a reasonable time. Using optimisation algorithms typically requires choosing the settings of tuning parameters that adjust algorithm performance subject to this compromise between solution quality and running time. This is the parameter tuning problem. This thesis demonstrates that the Design Of Experiments (DOE) approach can be adapted to successfully address the parameter tuning problem for algorithms that find approximate solutions to optimisation problems. The thesis introduces experiment designs and analyses for (1) determining the problem characteristics affecting algorithm performance, (2) screening and ranking the most important tuning parameters and problem characteristics, and (3) tuning algorithm parameters to maximise algorithm performance for a given problem instance. Desirability functions …
The DynCOAA Algorithm for Dynamic Constraint Optimization Problems
WIRTSCHAFTSINFORMATIK, 2006
Cited by 4 (0 self)
… research, manufacturing control, and others can be transformed into constraint optimization problems (COPs). Moreover, most practical problems change constantly, requiring algorithms that can handle dynamic problems. When these problems are situated in a distributed setting, distributed algorithms are preferred or even necessary.
An ACO-based Reactive Framework for Ant Colony Optimization: First Experiments on Constraint Satisfaction Problems
Cited by 4 (0 self)
Abstract. We introduce two reactive frameworks for dynamically adapting some parameters of an Ant Colony Optimization (ACO) algorithm. Both reactive frameworks use ACO to adapt parameters: pheromone trails are associated with parameter values; these pheromone trails represent the learnt desirability of using parameter values and are used to dynamically set parameters in a probabilistic way. The two frameworks differ in the granularity of parameter learning. We experimentally evaluate these two frameworks on an ACO algorithm for solving constraint satisfaction problems.
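The scheme this abstract outlines, pheromone trails over parameter values sampled probabilistically, can be sketched generically. The snippet below is our own illustrative reconstruction, not the authors' framework; `run_cycle`, the candidate values, and the reinforcement rule are assumptions:

```python
import random

def reactive_tune(run_cycle, candidates, n_cycles=200, rho=0.1, floor=0.1, seed=0):
    """Pheromone-based parameter adaptation sketch: each candidate parameter
    value carries a pheromone trail; a value is sampled in proportion to its
    trail and reinforced when the cycle it was used in improves the best
    score found so far. A lower bound on trails keeps exploration alive."""
    rng = random.Random(seed)
    tau = {v: 1.0 for v in candidates}
    best_score, best_value = float("-inf"), None
    for _ in range(n_cycles):
        v = rng.choices(candidates, weights=[tau[u] for u in candidates])[0]
        score = run_cycle(v)                        # e.g. one ACO cycle using v
        for u in tau:                               # evaporate, bounded below
            tau[u] = max(tau[u] * (1 - rho), floor)
        if score > best_score:
            best_score, best_value = score, v
            tau[v] += 1.0                           # reinforce the helpful value
    return best_value, best_score

# Toy stand-in for "one cycle of the underlying algorithm": solution
# quality peaks when the tuned parameter equals 2.
value, score = reactive_tune(lambda v: -(v - 2) ** 2, [0, 1, 2, 3, 4])
```

The two frameworks in the paper differ in the granularity at which such trails are learnt; the sketch above corresponds to the coarsest case, one set of trails shared by the whole colony.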