A Review on the Ant Colony Optimization Metaheuristic: Basis, Models and New Trends
Mathware & Soft Computing, 2002
Cited by 30 (2 self)
Abstract: Ant Colony Optimization (ACO) is a recent metaheuristic inspired by the behavior of real ant colonies. In this paper, we review the underlying ideas of this approach that lead from the biological inspiration to the ACO metaheuristic, which gives a set of rules for applying ACO algorithms to challenging combinatorial problems. We present some of the algorithms that were developed under this framework, give an overview of current applications, and analyze the relationship between ACO and some of the best-known metaheuristics. In addition, we describe recent theoretical developments in the field, and we conclude by showing several new trends and new research directions in this field.
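The rules the review describes, a probabilistic transition rule biased by pheromone and a heuristic, followed by evaporation and deposit, can be sketched as a minimal basic Ant System for a tiny TSP instance. This is an illustrative sketch, not any of the tuned ACO variants the review surveys, and all parameter values are assumptions:

```python
import random

def ant_system_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.5, q=1.0, seed=0):
    """Minimal Ant System for the TSP (illustrative sketch)."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]        # pheromone trails, uniform start
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in visited]
                # transition rule: pheromone^alpha * heuristic^beta
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                j = rng.choices(cand, weights=w)[0]
                tour.append(j)
                visited.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length
    return best_tour, best_len
```

On a 4-city square the colony quickly concentrates pheromone on the perimeter tour, the shortest cycle.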
Comparing parameter tuning methods for evolutionary algorithms
In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2009
Cited by 28 (5 self)
Abstract: Tuning the parameters of an evolutionary algorithm (EA) to the problem at hand is essential for good algorithm performance. Optimizing parameter values is, however, a nontrivial problem, beyond the limits of human problem solving. In this light it is odd that no parameter tuning algorithms are widely used in evolutionary computing. This paper is meant to be a stepping stone towards better practice by discussing the most important issues related to tuning EA parameters, describing a number of existing tuning methods, and presenting a modest experimental comparison among them. The paper concludes with suggestions for future research, hopefully inspiring fellow researchers to further work. Index Terms: evolutionary algorithms, parameter tuning.
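The simplest tuner in the family this paper compares is random search over the parameter space, scoring each candidate setting by the mean outcome of several repeated EA runs (repeats are needed because the EA is stochastic). A minimal sketch; the `toy_ea`, its `sigma` parameter, and the budget are illustrative assumptions, not the paper's benchmarks:

```python
import random

def random_search_tuner(run_ea, param_space, budget=30, repeats=5, seed=1):
    """Tune EA parameters by random search: sample parameter vectors at
    random and keep the one with the best mean result over repeated runs.
    run_ea(params, seed) must return a value to MINIMIZE."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(budget):
        params = {name: rng.choice(values) for name, values in param_space.items()}
        score = sum(run_ea(params, rng.random()) for _ in range(repeats)) / repeats
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

def toy_ea(params, seed):
    """A (1+1) hill climber on the 3-D sphere function; its mutation
    strength 'sigma' is the parameter being tuned (an illustrative
    stand-in for a full EA)."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(3)]
    for _ in range(200):
        y = [xi + rng.gauss(0, params["sigma"]) for xi in x]
        if sum(v * v for v in y) < sum(v * v for v in x):
            x = y
    return sum(v * v for v in x)
```

Extreme mutation strengths lose to moderate ones on this toy problem, which is exactly the kind of regularity a tuner is meant to discover automatically.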
A review of adaptive population sizing schemes in genetic algorithms
In: Proc. GECCO’05, 2005
Cited by 28 (4 self)
Abstract: This paper reviews the topic of population sizing in genetic algorithms. It starts by revisiting theoretical models which rely on a facetwise decomposition of genetic algorithms, and then moves on to the various self-adjusting population sizing schemes that have been proposed in the literature. The paper ends with recommendations for those who design and compare adaptive population sizing schemes for genetic algorithms.
Grammar Model-based Program Evolution
In Proceedings of the 2004 IEEE Congress on Evolutionary Computation, 2004
Cited by 27 (1 self)
Abstract: In Evolutionary Computation, genetic operators such as mutation and crossover are employed to perturb individuals to generate the next population. However, these fixed, problem-independent genetic operators may destroy sub-solutions, usually called building blocks, instead of discovering and preserving them. One way to overcome this problem is to build a model based on the good individuals, and sample this model to obtain the next population. There is a wide range of such work in Genetic Algorithms
Combining metaheuristics and exact algorithms in combinatorial optimization: a survey and classification
In: Proc. of the First International Work-Conference on the Interplay Between Natural and Artificial Computation, LNCS, 2005
Cited by 27 (3 self)
Abstract: In this survey we discuss different state-of-the-art approaches to combining exact algorithms and metaheuristics for solving combinatorial optimization problems. Some of these hybrids mainly aim at providing optimal solutions in shorter time, while others primarily focus on obtaining better heuristic solutions. The two main categories into which we divide the approaches are collaborative versus integrative combinations. We further classify the different techniques in a hierarchical way. Altogether, the surveyed work on combinations of exact algorithms and metaheuristics documents the usefulness and strong potential of this research direction.
Parallel estimation of distribution algorithms
2002
Cited by 25 (4 self)
Abstract: The thesis deals with the new evolutionary paradigm based on the concept of Estimation of Distribution Algorithms (EDAs), which use a probabilistic model of the promising solutions found so far to obtain new candidate solutions to the optimized problem. There are six primary goals of this thesis: 1. Suggestion of a new formal description of the EDA algorithm. This high-level concept can be used to compare the generality of various probabilistic models by comparing the properties of the underlying mappings. Also, some convergence issues are discussed and theoretical ways for further improvement are proposed. 2. Development of a new probabilistic model and methods capable of dealing with continuous parameters. The resulting Mixed Bayesian Optimization Algorithm (MBOA) uses a set of decision trees to express the probability model. Its main advantage over the mostly used IDEA and EGNA approaches is its backward compatibility with discrete domains, so it is uniquely capable of learning linkage between mixed continuous-discrete genes. MBOA handles the discretization of continuous parameters as an integral part of the learning process, which outperforms the histogram-based
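The estimate-and-sample loop that all EDAs share, fit a probabilistic model to the selected individuals, then sample the next population from it, can be sketched in its simplest univariate discrete form (UMDA on OneMax). This is a generic illustration of the paradigm, not the thesis's MBOA; all parameters are assumptions:

```python
import random

def umda_onemax(n=20, pop=60, top=30, iters=40, seed=2):
    """Univariate EDA (UMDA) on OneMax: estimate per-bit marginals of the
    selected individuals, then sample the next population from them."""
    rng = random.Random(seed)
    p = [0.5] * n                 # the probabilistic model: one Bernoulli per bit
    best = 0
    for _ in range(iters):
        popl = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                for _ in range(pop)]
        popl.sort(key=sum, reverse=True)        # OneMax fitness = number of ones
        best = max(best, sum(popl[0]))
        # re-estimate the model from the best half, with margins kept away
        # from 0/1 so the model cannot collapse prematurely
        for i in range(n):
            freq = sum(ind[i] for ind in popl[:top]) / top
            p[i] = min(0.95, max(0.05, freq))
    return best
```

The per-bit marginals drift toward 1 and the all-ones optimum is sampled within a few dozen iterations; richer models such as MBOA's decision trees replace the independent marginals with structured conditionals.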
Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles
2011
Cited by 25 (6 self)
Abstract: We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method. Invariance as a major design principle keeps the number of arbitrary choices to a minimum. The resulting method conducts a natural gradient ascent using an adaptive, time-dependent transformation of the objective function, and makes no particular assumptions about the objective function to be optimized. The IGO method produces explicit IGO algorithms through time discretization. The cross-entropy method is recovered in a particular case with a large time step, and can be extended into a smoothed, parametrization-independent maximum likelihood update. When applied to specific families of distributions on discrete or continuous spaces, the IGO framework naturally recovers versions
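For the family of independent Bernoulli(θ) distributions the IGO natural-gradient update takes a particularly simple, PBIL-like form: θ ← θ + δt · Σ_k w_k (x_k − θ), with rank-based weights that make the update invariant under monotone transformations of f. The sketch below uses illustrative choices (uniform weights on the best half, clamped parameters, maximization of f) and is not the paper's exact algorithm:

```python
import random

def igo_bernoulli(f, n, samples=50, iters=100, dt=0.2, seed=3):
    """IGO instance for independent Bernoulli(theta) distributions.
    Rank-based (quantile) weights depend only on the ordering of f-values,
    so the update is invariant under monotone transforms of f."""
    rng = random.Random(seed)
    theta = [0.5] * n
    for _ in range(iters):
        xs = [[1 if rng.random() < t else 0 for t in theta] for _ in range(samples)]
        order = sorted(range(samples), key=lambda k: f(xs[k]), reverse=True)
        # selection-quantile weights: best half gets uniform weight, rest zero
        w = [0.0] * samples
        for rank, k in enumerate(order):
            w[k] = 2.0 / samples if rank < samples // 2 else 0.0
        # natural-gradient step for this family: move theta toward the
        # weighted mean of the sampled points
        for i in range(n):
            grad = sum(w[k] * (xs[k][i] - theta[i]) for k in range(samples))
            theta[i] = min(0.95, max(0.05, theta[i] + dt * grad))
    return theta
```

Run on OneMax (f = sum), every θ_i converges toward the upper clamp, i.e. the model concentrates on the all-ones optimum.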
Analyzing probabilistic models in hierarchical BOA on traps and spin glasses
In Genetic and Evolutionary Computation Conference (GECCO-2007), 2007
Cited by 25 (17 self)
Abstract: The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes the probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward, and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying problem, that the models do not change significantly in subsequent iterations of BOA, and that creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.
Mathematical Modelling of UMDAc Algorithm with Tournament Selection. Behaviour on Linear and Quadratic Functions
2002
Cited by 24 (0 self)
Abstract: This paper presents a theoretical study of the behaviour of the Univariate Marginal Distribution Algorithm for continuous domains (UMDAc) in dimension n. To this end, the algorithm with tournament selection is modelled mathematically, assuming an infinite number of tournaments. The mathematical model is then used to study the algorithm's behaviour in the minimization of linear functions L(x) = a_0 + Σ_{i=1}^{n} a_i x_i and the quadratic function Q(x) = Σ_{i=1}^{n} x_i^2, with x = (x_1, ..., x_n) and a_i ∈ ℝ, i = 0, 1, ..., n. Linear functions are used to model the algorithm when far from the optimum, while the quadratic function is used to analyze the algorithm when near the optimum. The analysis shows that the algorithm performs poorly on the linear function L_1(x) = Σ_{i=1}^{n} x_i. In the case of the quadratic function Q(x), the algorithm's behaviour was analyzed for certain particular dimensions. After taking into account some simplifications, we can conclude that when the algorithm starts near the optimum, UMDAc is able to reach it. Moreover, the speed of convergence to the optimum decreases as the dimension increases.
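The algorithm under study can be sketched as follows: fit an independent Gaussian to each coordinate of the selected individuals and sample the next population from it. The sketch uses truncation selection as an illustrative stand-in for the paper's tournament selection, and the starting point is placed near the optimum of Q(x), the regime where the analysis predicts convergence; all parameter values are assumptions:

```python
import random
import statistics

def umdac_sphere(n=5, pop=100, top=50, iters=60, seed=4):
    """UMDAc sketch minimizing Q(x) = sum_i x_i^2: per-coordinate Gaussian
    model re-estimated each generation from the selected individuals."""
    rng = random.Random(seed)
    mu = [0.5] * n                  # start near, but not at, the optimum
    sigma = [1.0] * n
    for _ in range(iters):
        xs = [[rng.gauss(mu[i], sigma[i]) for i in range(n)] for _ in range(pop)]
        xs.sort(key=lambda x: sum(v * v for v in x))   # Q(x), to be minimized
        sel = xs[:top]
        for i in range(n):
            col = [x[i] for x in sel]
            mu[i] = statistics.fmean(col)              # model mean
            sigma[i] = max(statistics.pstdev(col), 1e-12)  # model spread
    return mu
```

Because selection shrinks the model variance every generation, the search radius collapses geometrically, which is also why the algorithm can stall when started far from the optimum on some linear functions.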
DE/EDA: A new evolutionary algorithm for global optimization
Information Sciences, 2005
Cited by 24 (4 self)
Abstract: Differential Evolution (DE) has been very successful in solving the global continuous optimization problem. It mainly uses the distance and direction information from the current population to guide its further search. The Estimation of Distribution Algorithm (EDA) samples new solutions from a probability model which characterizes the distribution of promising solutions. This paper proposes a combination of DE and EDA (DE/EDA) for the global continuous optimization problem. DE/EDA combines global information extracted by EDA with differential information obtained by DE to create promising solutions. DE/EDA has been compared with the best version of the DE algorithm and an EDA on several commonly used test problems. Experimental results demonstrate that DE/EDA outperforms both the DE algorithm and the EDA. The effect of the parameters of DE/EDA on its performance is investigated experimentally.
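The combination can be illustrated by a simplified hybrid in which each offspring gene comes either from a univariate Gaussian model fitted to the better half of the population (the EDA part) or from a DE-style differential perturbation of the parent (the DE part). This is a sketch of the idea, not the paper's exact offspring-generation operator; `p_eda`, `F`, and the other parameters are illustrative assumptions:

```python
import random
import statistics

def de_eda(f, n, pop_size=40, iters=100, F=0.5, p_eda=0.5, seed=5):
    """Simplified DE/EDA hybrid minimizing f over R^n (illustrative sketch)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(n)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        # EDA part: fit a univariate Gaussian model on the better half
        order = sorted(range(pop_size), key=lambda k: fit[k])
        sel = [pop[k] for k in order[: pop_size // 2]]
        mu = [statistics.fmean([x[i] for x in sel]) for i in range(n)]
        sd = [max(statistics.pstdev([x[i] for x in sel]), 1e-12) for i in range(n)]
        for k in range(pop_size):
            r1, r2 = rng.sample(range(pop_size), 2)
            trial = []
            for i in range(n):
                if rng.random() < p_eda:
                    trial.append(rng.gauss(mu[i], sd[i]))    # sample the EDA model
                else:
                    # DE part: differential perturbation of the parent
                    trial.append(pop[k][i] + F * (pop[r1][i] - pop[r2][i]))
            ft = f(trial)
            if ft <= fit[k]:                 # greedy DE-style replacement
                pop[k], fit[k] = trial, ft
    return min(fit)
```

The EDA model supplies global distribution information while the differential steps supply local distance-and-direction information, mirroring the division of labour the abstract describes.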