Results 1–10 of 160
Differential evolution algorithm with strategy adaptation for global numerical optimization
IEEE Trans. Evol. Comput., 2009
Cited by 107 (8 self)
Abstract—Differential evolution (DE) is an efficient and powerful population-based stochastic search technique for solving optimization problems over continuous space, which has been widely applied in many scientific and engineering fields. However, the success of DE in solving a specific problem crucially depends on appropriately choosing trial vector generation strategies and their associated control parameter values. Employing a trial-and-error scheme to search for the most suitable strategy and its associated parameter settings requires high computational costs. Moreover, at different stages of evolution, different strategies coupled with different parameter settings may be required in order to achieve the best performance. In this paper, we propose a self-adaptive DE (SaDE) algorithm, in which both trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from their previous experiences in generating promising solutions. Consequently, a more suitable generation strategy along with its parameter settings can be determined adaptively to match different phases of the search process. The performance of the SaDE algorithm is extensively evaluated on a suite of 26 bound-constrained numerical optimization problems and compares favorably with the conventional DE and several state-of-the-art parameter-adaptive DE variants. Index Terms—Differential evolution (DE), global numerical optimization, parameter adaptation, self-adaptation, strategy adaptation.
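The strategy pool SaDE adapts over is built from standard DE operators. As an illustrative sketch (the function name and the F/CR defaults here are ours, not the paper's adapted settings), one such strategy, DE/rand/1/bin, combines differential mutation with binomial crossover:

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
    """One trial vector for target index i under DE/rand/1/bin.

    pop is an (NP, D) array of candidates; F (scale factor) and
    CR (crossover rate) use common textbook defaults, not SaDE's
    self-adapted values.
    """
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    # three mutually distinct donors, all different from the target
    r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                            size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    # binomial crossover; force at least one component from the mutant
    mask = rng.random(D) < CR
    mask[rng.integers(D)] = True
    return np.where(mask, mutant, pop[i])
```

SaDE's contribution is then to learn, during the run, which such strategy and which F/CR values to apply at each stage of the search.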
Adaptive Particle Swarm Optimization
2008
Cited by 55 (2 self)
This paper proposes an adaptive particle swarm optimization (APSO) with adaptive parameters and an elitist learning strategy (ELS) based on the evolutionary state estimation (ESE) approach. The ESE approach computes an ‘evolutionary factor’ from the population distribution and relative particle fitness information in each generation, and estimates the evolutionary state through a fuzzy classification method. According to the identified state, and taking into account the various effects of the algorithm-controlling parameters, adaptive control strategies are developed for the inertia weight and acceleration coefficients to obtain faster convergence. Further, an adaptive ‘elitist learning strategy’ (ELS) is designed for the best particle to jump out of possible local optima and/or to refine its accuracy, resulting in substantially improved quality of global solutions. The APSO algorithm is tested on 6 unimodal and multimodal functions, and the experimental results demonstrate that APSO generally outperforms the compared PSOs in terms of solution accuracy, convergence speed, and algorithm reliability.
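The parameters APSO adapts are those of the canonical PSO velocity update. A minimal sketch of that update (variable names and the default w, c1, c2 values are illustrative; APSO changes these coefficients each generation based on the estimated evolutionary state):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO update for all particles at once.

    x, v, pbest are (NP, D) arrays; gbest is the (D,) global best.
    w is the inertia weight, c1/c2 the acceleration coefficients --
    exactly the quantities APSO tunes adaptively.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```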
Natural Evolution Strategies
Cited by 42 (23 self)
Abstract—This paper presents Natural Evolution Strategies (NES), a novel algorithm for performing real-valued ‘black box’ function optimization: optimizing an unknown objective function where algorithm-selected function measurements constitute the only information accessible to the method. Natural Evolution Strategies search the fitness landscape using a multivariate normal distribution with a self-adapting mutation matrix to generate correlated mutations in promising regions. NES shares this property with Covariance Matrix Adaptation (CMA), an evolution strategy (ES) which has been shown to perform well on a variety of high-precision optimization tasks. The Natural Evolution Strategies algorithm, however, is simpler, less ad hoc, and more principled. Self-adaptation of the mutation matrix is derived using a Monte Carlo estimate of the natural gradient towards better expected fitness. By following the natural gradient instead of the ‘vanilla’ gradient, we can ensure efficient update steps while preventing early convergence due to overly greedy updates, resulting in reduced sensitivity to local suboptima. We show that NES is competitive with CMA on unimodal tasks, while outperforming it on several multimodal tasks that are rich in deceptive local optima.
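To make the ‘natural gradient of expected fitness’ idea concrete, here is a toy sketch for the special case of adapting only the mean of an isotropic Gaussian search distribution (for the mean, the Fisher matrix is the inverse covariance, so the natural gradient reduces to an average of fitness-weighted perturbations). All names and constants are illustrative, and the paper additionally adapts the full mutation matrix:

```python
import numpy as np

def nes_mean_step(f, mu, sigma=0.3, n=200, lr=0.1, rng=None):
    """One natural-gradient ascent step on the mean of an isotropic
    Gaussian search distribution (toy sketch; covariance held fixed)."""
    rng = rng or np.random.default_rng()
    z = mu + sigma * rng.standard_normal((n, mu.size))   # sample candidates
    fit = np.array([f(zi) for zi in z])
    fit = (fit - fit.mean()) / (fit.std() + 1e-12)       # baseline / shaping
    # The Fisher matrix for the mean is Sigma^{-1}; pre-multiplying the
    # vanilla log-likelihood gradient by Sigma leaves plain
    # fitness-weighted perturbations.
    nat_grad = (fit[:, None] * (z - mu)).mean(axis=0)
    return mu + lr * nat_grad
```

Iterating this step climbs the expected fitness; for example, maximizing f(x) = -||x||² drives the mean toward the origin.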
Differential Evolution Using a Neighborhood-Based Mutation Operator
2009
Cited by 35 (8 self)
Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed several evolutionary algorithms (EAs) and other search heuristics, such as particle swarm optimization (PSO), when tested on both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index graph of parameter vectors, draws inspiration from the community of PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing a serious additional burden in terms of function evaluations. They are shown to be statistically significantly better than, or at least comparable to, several existing DE variants as well as a few other significant evolutionary computing techniques on a test suite of 24 benchmark functions. The paper also investigates the application of the new DE variants to two real-life problems: parameter estimation for frequency-modulated sound waves and spread spectrum radar polyphase code design.
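The index-graph neighborhood idea can be sketched as follows: the global best in DE/target-to-best/1 is replaced by the best member of a small ring neighborhood around the target's index. This is a simplified illustration of a neighborhood-based mutation, not the paper's exact formulation, and the names and defaults are ours:

```python
import numpy as np

def local_mutation(pop, fit, i, k=2, F=0.8, rng=None):
    """Neighborhood-based mutation for target i (minimization).

    The ring neighborhood of radius k lives on the *index* graph,
    i.e. it is independent of where the vectors sit in search space.
    """
    rng = rng or np.random.default_rng()
    NP = len(pop)
    nbrs = [(i + d) % NP for d in range(-k, k + 1)]
    n_best = min(nbrs, key=lambda j: fit[j])   # best vector in the ring
    p, q = rng.choice([j for j in nbrs if j != i], size=2, replace=False)
    return pop[i] + F * (pop[n_best] - pop[i]) + F * (pop[p] - pop[q])
```

Because the neighborhood best, rather than the global best, pulls the target vector, information spreads gradually around the ring, which tempers the greedy exploitation of DE/target-to-best.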
Evolving Problems to Learn About Particle Swarm Optimizers and . . .
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2007
Cited by 26 (5 self)
We use evolutionary computation (EC) to automatically find problems which demonstrate the strengths and weaknesses of modern search heuristics. In particular, we analyze particle swarm optimization (PSO), differential evolution (DE), and the covariance matrix adaptation evolution strategy (CMA-ES). Each evolutionary algorithm is contrasted with the others and with a robust non-stochastic gradient follower (i.e., a hill climber) based on Newton–Raphson. The evolved benchmark problems yield insights into the operation of PSOs and illustrate the benefits and drawbacks of different population sizes, velocity limits, and constriction (friction) coefficients. The fitness landscapes made by genetic programming reveal new swarm phenomena, such as deception, thereby explaining how the algorithms work and allowing us to devise better extended particle swarm systems. The method could be applied to any type of optimizer.
Stochastic Search using the Natural Gradient
Cited by 23 (12 self)
To optimize unknown ‘fitness’ functions, we present Natural Evolution Strategies, a novel algorithm that constitutes a principled alternative to standard stochastic search methods. It maintains a multivariate normal distribution on the set of solution candidates. The natural gradient is used to update the distribution's parameters in the direction of higher expected fitness, by efficiently calculating the inverse of the exact Fisher information matrix, whereas previous methods had to use approximations. Other novel aspects of our method include optimal fitness baselines and importance mixing, a procedure that adjusts batches with a minimal number of fitness evaluations. The algorithm yields competitive results on a number of benchmarks.
Differential Evolution With Composite Trial Vector Generation Strategies and Control Parameters
IEEE Trans. Evol. Comput.
Cited by 21 (2 self)
Abstract—Trial vector generation strategies and control parameters have a significant influence on the performance of differential evolution (DE). This paper studies whether the performance of DE can be improved by combining several effective trial vector generation strategies with some suitable control parameter settings. A novel method, called composite DE (CoDE), is proposed. This method uses three trial vector generation strategies and three control parameter settings, and randomly combines them to generate trial vectors. CoDE has been tested on all the CEC 2005 contest test instances, and the experimental results show that it is very competitive. Index Terms—Control parameters, differential evolution, global numerical optimization, trial vector generation strategy.
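The random pairing of strategies with parameter settings can be sketched as below. The (F, CR) pool follows the settings reported for CoDE; the strategy pool here is collapsed to a single illustrative operator for brevity, whereas the paper uses three distinct ones:

```python
import numpy as np

def rand_1_bin(pop, i, F, CR, rng):
    """DE/rand/1/bin -- one of CoDE's three strategies."""
    NP, D = pop.shape
    r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                            size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])
    mask = rng.random(D) < CR
    mask[rng.integers(D)] = True
    return np.where(mask, mutant, pop[i])

def code_trials(pop, i, strategies,
                param_pool=((1.0, 0.1), (1.0, 0.9), (0.8, 0.2)), rng=None):
    """CoDE-style composite generation: every strategy produces one trial
    vector with a randomly drawn (F, CR) setting; selection later keeps
    the best of the resulting trials."""
    rng = rng or np.random.default_rng()
    trials = []
    for strat in strategies:
        F, CR = param_pool[rng.integers(len(param_pool))]
        trials.append(strat(pop, i, F, CR, rng))
    return trials
```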
Adaptive Strategy Selection in Differential Evolution
GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE (GECCO), 2010
Cited by 20 (8 self)
Differential evolution (DE) is a simple yet powerful evolutionary algorithm for global numerical optimization. Different strategies have been proposed for offspring generation, but the selection of which of them should be applied is critical for DE performance, besides being problem-dependent. In this paper, the probability matching technique is employed in DE to autonomously select the most suitable strategy while solving the problem. Four credit assignment methods, which update the known performance of each strategy based on the relative fitness improvement achieved by its recent applications, are analyzed. To evaluate the performance of our approach, thirteen widely used benchmark functions are employed. Experimental results confirm that our approach is able to adaptively choose a suitable strategy for different problems. Compared to classical DE algorithms and to a recently proposed adaptive scheme (SaDE), it obtains better results on most of the functions, in terms of both the quality of the final results and convergence speed.
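Probability matching itself is a small mechanism; a sketch is given below. The probability floor p_min and the running-quality update follow the standard probability-matching formulation, while the particular constants are illustrative:

```python
import numpy as np

class ProbabilityMatching:
    """Probability-matching strategy selection: selection probabilities
    track each strategy's running quality estimate, with a floor p_min
    so that no strategy is ever discarded entirely."""

    def __init__(self, n_strategies, p_min=0.05, alpha=0.3, rng=None):
        self.q = np.ones(n_strategies)        # quality estimates
        self.p_min, self.alpha = p_min, alpha
        self.rng = rng or np.random.default_rng()

    def probabilities(self):
        K = len(self.q)
        return self.p_min + (1 - K * self.p_min) * self.q / self.q.sum()

    def select(self):
        """Draw a strategy index according to the current probabilities."""
        return self.rng.choice(len(self.q), p=self.probabilities())

    def update(self, k, reward):
        """Credit assignment: e.g. the relative fitness improvement
        achieved by strategy k's latest application."""
        self.q[k] += self.alpha * (reward - self.q[k])
```

Strategies that keep producing fitness improvements accumulate quality and are selected more often, while the p_min floor preserves exploration of the others.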
Efficient Natural Evolution Strategies
GECCO '09, 2009
Cited by 19 (10 self)
Efficient Natural Evolution Strategies (eNES) is a novel alternative to conventional evolutionary algorithms, using the natural gradient to adapt the mutation distribution. Unlike previous methods based on natural gradients, eNES uses a fast algorithm to calculate the inverse of the exact Fisher information matrix, thus increasing both robustness and performance of its evolution gradient estimation, even in higher dimensions. Additional novel aspects of eNES include optimal fitness baselines and importance mixing (a procedure for updating the population with very few fitness evaluations). The algorithm yields competitive results on both unimodal and multimodal benchmarks.
Benchmark functions for the CEC'2010 special session and competition on large-scale global optimization
Nature Inspired Computation and Applications Laboratory, 2009
Cited by 17 (10 self)
In the past decades, different kinds of metaheuristic optimization algorithms [1, 2] have been developed: Simulated …