Results 1–10 of 42
The CMA Evolution Strategy: A Comparing Review
STUDFUZZ, 2006
Cited by 98 (28 self)
Derived from the concept of self-adaptation in evolution strategies, the CMA (Covariance Matrix Adaptation) adapts the covariance matrix of a multivariate normal search distribution. The CMA was originally designed to perform well with small populations. In this review, the argument starts out with large population sizes, reflecting recent extensions of the CMA algorithm. Commonalities with and differences from continuous Estimation of Distribution Algorithms are analyzed. The aspects of reliability of the estimation, overall step-size control, and independence from the coordinate system (invariance) become particularly important for small population sizes. Consequently, performing the adaptation task with small populations is more intricate.
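The sampling-and-adaptation loop the abstract refers to can be sketched as follows. This is a deliberately reduced illustration (no evolution paths, no step-size control, unweighted recombination, and an assumed learning rate `lr`), not the CMA algorithm of the paper:

```python
import numpy as np

def cma_like_step(mean, cov, fitness, lam=20, mu=10, lr=0.3, rng=None):
    """One iteration of a much-simplified CMA-style update: sample lam
    offspring from N(mean, cov), keep the mu best under fitness, and
    shift the mean and covariance toward the selected mutation steps."""
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.multivariate_normal(mean, cov, size=lam)
    order = np.argsort([fitness(x) for x in pop])   # rank-based selection
    best = pop[order[:mu]]
    steps = best - mean                             # selected mutation steps
    new_mean = best.mean(axis=0)
    new_cov = (1 - lr) * cov + lr * (steps.T @ steps) / mu
    return new_mean, new_cov

# Usage: minimize the sphere function in 3 dimensions.
sphere = lambda x: float(x @ x)
rng = np.random.default_rng(0)
mean, cov = np.full(3, 5.0), np.eye(3)
for _ in range(80):
    mean, cov = cma_like_step(mean, cov, sphere, rng=rng)
```

Because the update mixes in the outer product of the selected steps, the covariance stretches along directions in which good steps were taken, which is the core idea the review builds on.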
A Computationally Efficient Evolutionary Algorithm for Real-Parameter Optimization
2002
Cited by 85 (10 self)
Due to an increasing interest in solving real-world optimization problems using evolutionary algorithms (EAs), researchers have developed a number of real-parameter genetic algorithms (GAs) in the recent past. In such studies, the main research effort is spent on developing an efficient recombination operator. Such recombination operators use probability distributions around the parent solutions to create an offspring. Some operators emphasize solutions at the center of mass of the parents and some around the parents. In this paper, we propose a generic parent-centric recombination operator (PCX) and a steady-state, elite-preserving, scalable, and computationally fast population-alteration model (which we call the G3 model). The performance of the G3 model with the PCX operator is investigated on three commonly used test problems and is compared with a number of evolutionary and classical optimization algorithms, including other real-parameter GAs with the UNDX and SPX operators, the correlated self-adaptive evolution strategy, the differential evolution technique, and the quasi-Newton method. The proposed approach is found to perform consistently and reliably better than all other methods used in the study. A scale-up study with problem sizes up to 500 variables shows a polynomial computational complexity of the proposed approach. This extensive study clearly demonstrates the power of the proposed technique in tackling real-parameter optimization problems.
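A parent-centric operator biases offspring toward one chosen parent rather than toward the parents' centroid. The sketch below conveys only that idea; the paper's actual PCX operator also perturbs along an orthonormal basis of the subspace perpendicular to the centroid direction, which is omitted here, and the two variances are assumed values:

```python
import numpy as np

def parent_centric_offspring(parents, sigma_zeta=0.1, sigma_eta=0.1, rng=None):
    """Create one offspring centered on a randomly chosen parent,
    perturbed mainly along the direction to the parents' centroid,
    plus small isotropic noise (a simplified, PCX-like sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    g = parents.mean(axis=0)                 # centroid (mean vector) of the parents
    p = parents[rng.integers(len(parents))]  # the parent the offspring is centered on
    d = g - p                                # direction from that parent to the centroid
    return p + rng.normal(0.0, sigma_zeta) * d + rng.normal(0.0, sigma_eta, p.shape)

# Usage: offspring cluster near the parents, not near the centroid.
rng = np.random.default_rng(1)
parents = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
child = parent_centric_offspring(parents, rng=rng)
```

A mean-centric operator would instead center the offspring on `g`; the abstract's distinction between "center of mass" and "around the parents" is exactly this choice of center.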
Real-Coded Memetic Algorithms with Crossover Hill-Climbing
Evolutionary Computation, 2004
Cited by 70 (12 self)
This paper presents a real-coded memetic algorithm that applies a crossover hill-climbing to solutions produced by the genetic operators. On the one hand, the memetic algorithm provides global search (reliability) by means of the promotion of high levels of population diversity. On the other, the crossover hill-climbing exploits the self-adaptive capacity of real-parameter crossover operators with the aim of producing an effective local tuning of the solutions (accuracy). An important aspect of the proposed memetic algorithm is that it adaptively assigns different local search probabilities to individuals. It was observed that the algorithm adjusts the global/local search balance according to the particularities of each problem instance. Experimental results show that, for a wide range of problems, the method we propose here consistently outperforms other real-coded memetic algorithms reported in the literature.
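The crossover hill-climbing component can be illustrated as follows; the blend operator used below is a stand-in assumption (a BLX-0.5-style crossover), not necessarily the operator of the paper:

```python
import random

def crossover_hillclimb(best, mate, fitness, steps=50, rng=None):
    """Repeatedly recombine the current best with a mate using a
    real-parameter blend crossover, accepting the offspring whenever
    it improves on the best; the displaced best becomes the new mate."""
    rng = rng or random.Random()
    for _ in range(steps):
        # Per-gene blend: u in [-0.5, 1.5] extends the parent interval
        # by half its length on each side (BLX-0.5-style).
        child = [b + rng.uniform(-0.5, 1.5) * (m - b) for b, m in zip(best, mate)]
        if fitness(child) < fitness(best):
            best, mate = child, best
    return best

# Usage: local tuning on a 2-D sphere function.
sphere = lambda x: sum(v * v for v in x)
result = crossover_hillclimb([4.0, 4.0], [1.0, 1.0], sphere,
                             rng=random.Random(0))
```

Because the sampling interval shrinks as best and mate converge, the crossover itself provides the self-adaptive step-size reduction the abstract alludes to.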
Learning Probability Distributions in Continuous Evolutionary Algorithms – A Comparative Review
Natural Computing, 2003
Cited by 56 (14 self)
We present a comparative review of Evolutionary Algorithms that generate new population members by sampling a probability distribution constructed during the optimization process. We give a unifying formulation for five such algorithms that enables us to characterize them based on the parametrization of the probability distribution, the learning methodology, and the use of historical information. The algorithms are evaluated on a number of test functions in order to assess their relative strengths and weaknesses. This comparative review helps to identify areas of applicability for the algorithms and to guide future algorithmic developments.
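The simplest instance of the algorithm family being compared — fit a distribution to the selected individuals, then resample from it — can be sketched as a continuous, axis-aligned Gaussian EDA. This is an illustration of the common loop, not one of the five reviewed algorithms specifically, and the population sizes are assumed values:

```python
import numpy as np

def gaussian_eda(fitness, dim, lam=50, mu=25, iters=60, seed=0):
    """Minimal continuous Estimation-of-Distribution Algorithm: sample
    from an axis-aligned Gaussian, select the mu best individuals, and
    refit the Gaussian's mean and per-coordinate standard deviation."""
    rng = np.random.default_rng(seed)
    mean, std = np.full(dim, 5.0), np.full(dim, 2.0)
    for _ in range(iters):
        pop = rng.normal(mean, std, size=(lam, dim))
        best = pop[np.argsort([fitness(x) for x in pop])[:mu]]
        mean = best.mean(axis=0)
        std = best.std(axis=0) + 1e-12   # avoid total variance collapse
    return mean

# Usage: minimize the 2-D sphere function.
sphere = lambda x: float(x @ x)
solution = gaussian_eda(sphere, dim=2)
```

The reviewed algorithms differ chiefly in how this distribution is parametrized (diagonal vs. full covariance), how it is learned (refit vs. incremental update), and how much history it retains.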
A Method for Handling Uncertainty in Evolutionary Optimization with an Application to Feedback Control of Combustion
Cited by 50 (14 self)
We present a novel method for handling uncertainty in evolutionary optimization. The method entails quantification and treatment of uncertainty and relies on the rank-based selection operator of evolutionary algorithms. The proposed uncertainty handling is implemented in the context of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and verified on test functions. The present method is independent of the uncertainty distribution, prevents premature convergence of the evolution strategy, and is well suited for online optimization, as it requires only a small number of additional function evaluations. The algorithm is applied in an experimental setup to the online optimization of feedback controllers of thermoacoustic instabilities of gas turbine combustors. In order to mitigate these instabilities, gain-delay or model-based H∞ controllers sense the pressure and command secondary fuel injectors. The parameters of these controllers are usually specified via a trial-and-error procedure. We demonstrate that their online optimization with the proposed methodology enhances, in an automated fashion, the online performance of the controllers, even under highly unsteady operating conditions, and also compensates for uncertainties in the model-building and design process.
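The rank-based flavor of the uncertainty quantification can be illustrated by evaluating a population twice under noise and measuring how much the ranking changes; the paper's actual measure and its treatment (e.g. adapting the number of evaluations) are more refined than this sketch:

```python
import numpy as np

def total_rank_change(pop, noisy_fitness):
    """Evaluate the population twice and return the summed absolute
    rank change between the two noisy evaluations; large values mean
    the noise dominates the fitness differences between individuals."""
    ranks = lambda vals: np.argsort(np.argsort(vals))  # double argsort = ranks
    r1 = ranks([noisy_fitness(x) for x in pop])
    r2 = ranks([noisy_fitness(x) for x in pop])
    return int(np.abs(r1 - r2).sum())

# Usage: the same population looks stable under weak noise and
# unstable under strong noise.
rng = np.random.default_rng(0)
pop = np.arange(10.0)                                  # true fitness = x
weak = total_rank_change(pop, lambda x: x + rng.normal(0, 0.01))
strong = total_rank_change(pop, lambda x: x + rng.normal(0, 50.0))
```

Since rank-based selection only uses the ordering, a rank-change statistic of this kind measures exactly the quantity that matters to the evolution strategy, independently of the noise distribution.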
How To Analyse Evolutionary Algorithms
2002
Cited by 31 (1 self)
Many variants of evolutionary algorithms have been designed and applied. The ...
Enhancing differential evolution performance with local search for high-dimensional function optimization
2005
Cited by 22 (1 self)
In this paper, we propose Fittest Individual Refinement (FIR), a crossover-based local search method for Differential Evolution (DE). The FIR scheme accelerates DE by enhancing its search capability through exploration of the neighborhood of the best solution in successive generations. The proposed memetic version of DE (augmented by FIR) is expected to obtain an acceptable solution with a lower number of evaluations, particularly for higher-dimensional functions. Using two different implementations, DEfirDE and DEfirSPX, we show that the proposed FIR increases the convergence velocity of DE for well-known benchmark functions as well as improves the robustness of DE against variation of the population. Experiments using a multimodal landscape generator show that our proposed algorithms consistently outperform their parent algorithms. A performance comparison with reported results of well-known real-coded memetic algorithms is also presented.
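The overall scheme — standard DE plus a little extra local search around the best individual each generation — can be sketched as below. The Gaussian probing of the best is a simplifying stand-in for the paper's crossover-based FIR step, and all parameter values are assumptions:

```python
import numpy as np

def de_with_best_refinement(fitness, bounds, pop_size=20, F=0.5, CR=0.9,
                            gens=100, seed=0):
    """DE/rand/1/bin with an extra refinement step: after each
    generation, a few perturbations of the current best individual
    are probed and accepted if they improve it."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            trial = np.where(rng.random(dim) < CR, a + F * (b - c), pop[i])
            f = fitness(trial)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
        ib = int(np.argmin(fit))             # refine the fittest individual
        for _ in range(3):
            cand = pop[ib] + rng.normal(0.0, 0.1, dim)
            f = fitness(cand)
            if f < fit[ib]:
                pop[ib], fit[ib] = cand, f
    return pop[int(np.argmin(fit))]

# Usage: minimize the 2-D sphere function on [-5, 5]^2.
sphere = lambda x: float(x @ x)
best = de_with_best_refinement(sphere, [(-5.0, 5.0)] * 2)
```

The refinement costs only a handful of extra evaluations per generation, which is why such schemes pay off mainly in higher dimensions where plain DE converges slowly.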
Self-Adaptation in Evolutionary Algorithms
Parameter Setting in Evolutionary Algorithms, 2006
Cited by 18 (1 self)
In this paper, we give an overview of the self-adaptive behavior of evolutionary algorithms. We start with a short overview of the historical development of adaptation mechanisms in evolutionary computation. In the following part, i.e., Section 2.2, we introduce classification schemes that are used to group the various approaches. Afterwards, self-adaptive mechanisms are considered. The overview starts with some examples introducing self-adaptation of the strategy parameter and of the crossover operator. Several authors have pointed out that the concept of self-adaptation may be extended; Section 3.2 is devoted to such ideas. The mechanism of self-adaptation has been examined in various areas in order to find answers to the question under which conditions self-adaptation works and when it could fail. In the remaining sections, therefore, we present a short overview of some of the research done in this field.
An analysis of mutative σ-self-adaptation on linear fitness functions
Evolutionary Computation, 2006
Cited by 18 (12 self)
This paper investigates σ-self-adaptation for real-valued evolutionary algorithms on linear fitness functions. We identify the step-size logarithm log σ as a key quantity for understanding strategy behavior. Knowing the bias of mutation, recombination, and selection on log σ is sufficient to explain σ-dynamics and strategy behavior in many cases, even for previously reported results on non-linear and/or noisy fitness functions. On a linear fitness function, if intermediate multi-recombination is applied to the object parameters, the i-th best and the i-th worst individual have the same σ-distribution. Consequently, the correlation between fitness and step size σ is zero. Assuming additionally that σ-changes due to mutation and recombination are unbiased, σ-self-adaptation enlarges σ if and only if µ < λ/2, given (µ, λ)-truncation selection. Experiments show the relevance of the given assumptions.
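The µ < λ/2 condition can be checked empirically with a small Monte-Carlo experiment on f(x) = x. The setup below (log-normal σ mutation, intermediate recombination, and re-centering log σ each generation so that the per-generation selection bias is measured at a fixed σ) is our illustration, not the paper's derivation:

```python
import numpy as np

def log_sigma_drift(mu, lam, gens=2000, tau=0.3, seed=0):
    """Average per-generation change of log(sigma) under (mu, lam)-
    truncation selection on the linear fitness f(x) = x (minimized),
    with unbiased mutative self-adaptation of log(sigma) and
    intermediate recombination; positive drift means sigma grows."""
    rng = np.random.default_rng(seed)
    x, drift = 0.0, 0.0
    for _ in range(gens):
        log_s = tau * rng.standard_normal(lam)      # mutate log sigma (unbiased, parent at 0)
        xs = x + np.exp(log_s) * rng.standard_normal(lam)
        sel = np.argsort(xs)[:mu]                   # best mu under f(x) = x
        drift += log_s[sel].mean()                  # intermediate recombination of log sigma
        x = xs[sel].mean()                          # intermediate recombination of x
    return drift / gens

# Usage: sigma is enlarged for mu < lambda/2 and shrunk for mu > lambda/2.
grow = log_sigma_drift(mu=2, lam=10)
shrink = log_sigma_drift(mu=8, lam=10)
```

Intuitively, individuals with a large σ land in both tails of the offspring distribution; selecting a small extreme fraction (µ < λ/2) therefore favors large σ, while selecting more than half penalizes it, and at µ = λ/2 the bias vanishes by symmetry.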