Results 1–10 of 52
Comparing parameter tuning methods for evolutionary algorithms
In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2009
Cited by 28 (5 self)
Tuning the parameters of an evolutionary algorithm (EA) to the problem at hand is essential for good algorithm performance. Optimizing parameter values is, however, a nontrivial problem, beyond the limits of human problem solving. In this light it is odd that no parameter tuning algorithms are used widely in evolutionary computing. This paper is meant to be a stepping stone towards a better practice by discussing the most important issues related to tuning EA parameters, describing a number of existing tuning methods, and presenting a modest experimental comparison among them. The paper is concluded by suggestions for future research, hopefully inspiring fellow researchers to further work. Index Terms: evolutionary algorithms, parameter tuning.

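The tuning problem this abstract describes can be made concrete with a small sketch: a tuner (here plain random search, one of the simplest methods such comparisons include) searches over the mutation rate and offspring count of a toy (1+λ) EA on OneMax. All names, parameter ranges, and budgets below are illustrative assumptions, not values taken from the paper.

```python
import random

def onemax(bits):
    """Fitness: number of ones in the bit string."""
    return sum(bits)

def run_ea(mutation_rate, n_offspring, n_bits=30, budget=2000, seed=0):
    """A toy (1+lambda) EA on OneMax; returns the best fitness reached."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    evals = 0
    while evals < budget:
        offspring = [[1 - b if rng.random() < mutation_rate else b
                      for b in parent] for _ in range(n_offspring)]
        evals += n_offspring
        best = max(offspring, key=onemax)
        if onemax(best) >= onemax(parent):   # elitist replacement
            parent = best
    return onemax(parent)

def tune(n_trials=20, seed=1):
    """Random-search tuner: sample parameter vectors, keep the best one."""
    rng = random.Random(seed)
    best_params, best_score = None, -1
    for _ in range(n_trials):
        params = (rng.uniform(0.001, 0.5), rng.randint(1, 20))
        score = run_ea(*params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

params, score = tune()
print("best parameters:", params, "fitness:", score)
```

The point of the sketch is the layering: the tuner treats the EA run itself as the objective function, which is exactly what makes tuning expensive.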
Extreme Value Based Adaptive Operator Selection
In Proceedings of Parallel Problem Solving from Nature (PPSN), LNCS, 2008
Cited by 25 (11 self)
Credit Assignment is an important ingredient of several proposals that have been made for Adaptive Operator Selection. Instead of the average fitness improvement of newborn offspring, this paper proposes to use some empirical order statistics of those improvements, arguing that rare but highly beneficial jumps matter as much as or more than frequent but small improvements. An extreme value based Credit Assignment is thus proposed, rewarding each operator with the best fitness improvement observed in a sliding window for this operator. This mechanism, combined with existing Adaptive Operator Selection rules, is investigated in an EC-like setting. First results show that the proposed method allows both the Adaptive Pursuit and the Dynamic Multi-Armed Bandit selection rules to actually track the best operators along evolution.

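The sliding-window mechanism this abstract describes is simple to sketch: each operator's credit is the maximum fitness improvement seen in its recent window, rather than the average. The window size and the toy improvement values below are illustrative, not from the paper.

```python
from collections import deque

class ExtremeCredit:
    """Credit per operator = max fitness improvement in a sliding window."""

    def __init__(self, n_ops, window=3):
        self.windows = [deque(maxlen=window) for _ in range(n_ops)]

    def record(self, op, improvement):
        # Store the raw fitness improvement achieved by this operator.
        self.windows[op].append(improvement)

    def credit(self, op):
        # Extreme-value reward: best improvement still in the window.
        w = self.windows[op]
        return max(w) if w else 0.0

credits = ExtremeCredit(n_ops=2)
for imp in [0.1, 0.0, 5.0, 0.2]:   # operator 0: one rare, large jump
    credits.record(0, imp)
for imp in [0.5, 0.6, 0.4, 0.5]:   # operator 1: frequent small gains
    credits.record(1, imp)
print(credits.credit(0), credits.credit(1))
```

With an extreme-value credit, operator 0's single large jump (5.0) dominates operator 1's steady small gains, which is the behaviour the paper argues for.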
Parameter Tuning for Configuring and Analyzing Evolutionary Algorithms
Swarm and Evolutionary Computation, 2011
Cited by 24 (1 self)
In this paper we present a conceptual framework for parameter tuning, provide a survey of tuning methods, and discuss related methodological issues. The framework is based on a three-tier hierarchy of a problem, an evolutionary algorithm (EA), and a tuner. Furthermore, we distinguish problem instances, parameters, and EA performance measures as major factors, and discuss how tuning can be directed to algorithm performance and/or robustness. For the survey part we establish different taxonomies to categorize tuning methods and review existing work. Finally, we elaborate on how tuning can improve methodology by facilitating well-founded experimental comparisons and algorithm analysis.

Costs and Benefits of Tuning Parameters of Evolutionary Algorithms
Cited by 23 (6 self)
We present an empirical study on the impact of different design choices on the performance of an evolutionary algorithm (EA). Four EA components are considered (parent selection, survivor selection, recombination, and mutation), and for each component we study the impact of choosing the right operator and of tuning its free parameter(s). We tune 120 different combinations of EA operators to 4 different classes of fitness landscapes and measure the cost of tuning. We find that components differ greatly in importance. Typically the choice of operator for parent selection has the greatest impact, and mutation needs the most tuning. Regarding individual EAs, however, the impact of design choices for one component depends on the choices for other components, as well as on the available amount of resources for tuning.

Analysis of Adaptive Operator Selection Techniques on the Royal Road and Long K-Path Problems
2009
Cited by 17 (10 self)
One of the choices that most affects the performance of Evolutionary Algorithms is the selection of the variation operators that are efficient for the problem at hand. This work presents an empirical analysis of different Adaptive Operator Selection (AOS) methods, i.e., techniques that automatically select the operator to be applied among the available ones while searching for the solution. Four previously published operator selection rules are combined with four different credit assignment mechanisms. These 16 AOS combinations are analyzed and compared in the light of two well-known benchmark problems in Evolutionary Computation, the Royal Road and the Long K-Path.

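One of the selection rules compared in such studies, Adaptive Pursuit, can be sketched as follows: selection probabilities are "pursued" toward the operator with the highest estimated quality, while a floor p_min keeps every operator selectable. The constants (p_min, alpha, beta) and the toy reward environment are illustrative assumptions, not values from this paper.

```python
import random

class AdaptivePursuit:
    """Adaptive Pursuit operator selection rule (sketch)."""

    def __init__(self, n_ops, p_min=0.1, alpha=0.8, beta=0.8, seed=0):
        self.n = n_ops
        self.p_min = p_min
        self.p_max = 1.0 - (n_ops - 1) * p_min
        self.alpha, self.beta = alpha, beta
        self.q = [1.0] * n_ops              # quality estimates
        self.p = [1.0 / n_ops] * n_ops      # selection probabilities
        self.rng = random.Random(seed)

    def select(self):
        return self.rng.choices(range(self.n), weights=self.p)[0]

    def update(self, op, reward):
        # Exponential recency-weighted quality estimate for the applied op.
        self.q[op] += self.alpha * (reward - self.q[op])
        best = max(range(self.n), key=lambda i: self.q[i])
        # Pursue: push the best operator toward p_max, the rest toward p_min.
        for i in range(self.n):
            target = self.p_max if i == best else self.p_min
            self.p[i] += self.beta * (target - self.p[i])

ap = AdaptivePursuit(n_ops=3)
for _ in range(50):
    op = ap.select()
    # Toy environment: operator 2 consistently yields the highest reward.
    ap.update(op, reward=1.0 if op == 2 else 0.1)
print("selection probabilities:", ap.p)
```

After a few dozen updates the probability of the best operator approaches p_max while the others sit near the exploration floor p_min.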
Analyzing Bandit-based Adaptive Operator Selection Mechanisms
Annals of Mathematics and Artificial Intelligence (AMAI), special issue on LION, 2010
Cited by 16 (1 self)
Several techniques have been proposed to tackle the Adaptive Operator Selection (AOS) issue in Evolutionary Algorithms. Some recent proposals are based on the Multi-Armed Bandit (MAB) paradigm: each operator is viewed as one arm of a MAB problem, and the rewards are mainly based on the fitness improvement brought by the corresponding operator to the individual it is applied to. However, the AOS problem is dynamic, whereas standard MAB algorithms are known to optimally solve the exploitation versus exploration trade-off in static settings. An original dynamic variant of the standard MAB Upper Confidence Bound algorithm is proposed here, using a sliding time window to compute both its exploitation and exploration terms. In order to perform sound comparisons between AOS algorithms, artificial scenarios have been proposed in the literature. They are extended here toward smoother transitions between different reward settings. The resulting original testbed also includes a real evolutionary algorithm that is applied to the well-known Royal Road problem. It is used here to perform a thorough analysis of the behavior of AOS algorithms, to assess their sensitivity with respect to their own hyper-parameters, and to propose a sound ...

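The sliding-window variant of the Upper Confidence Bound rule described in this abstract can be sketched as follows: both the empirical means and the counts are computed only over a window of recent plays, so old rewards are forgotten when operator quality drifts. The window size, exploration constant, and synthetic reward drift below are illustrative assumptions.

```python
import math
from collections import deque

class SlidingWindowUCB:
    """UCB over a sliding window of recent (operator, reward) pairs."""

    def __init__(self, n_ops, window=30, c=1.0):
        self.history = deque(maxlen=window)
        self.n_ops = n_ops
        self.c = c

    def select(self):
        counts = [0] * self.n_ops
        sums = [0.0] * self.n_ops
        for op, r in self.history:
            counts[op] += 1
            sums[op] += r
        for op in range(self.n_ops):
            if counts[op] == 0:
                return op                    # play each operator once first
        total = len(self.history)

        def ucb(op):
            mean = sums[op] / counts[op]
            return mean + self.c * math.sqrt(math.log(total) / counts[op])

        return max(range(self.n_ops), key=ucb)

    def record(self, op, reward):
        self.history.append((op, reward))

bandit = SlidingWindowUCB(n_ops=2)
choices = []
for t in range(200):
    op = bandit.select()
    choices.append(op)
    best = 0 if t < 100 else 1               # operator quality drifts at t=100
    bandit.record(op, 1.0 if op == best else 0.0)
print("late-phase picks of operator 1:", sum(choices[150:]), "/ 50")
```

Because stale rewards age out of the window, the rule re-discovers operator 1 within roughly one window length after the drift, which a standard (non-windowed) UCB would do far more slowly.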
Algorithm runtime prediction: Methods and evaluation
Artificial Intelligence Journal, 2014
Cited by 15 (5 self)
Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on a previously unseen input, using machine learning techniques to build a model of the algorithm's runtime as a function of problem-specific instance features. Such models have important applications to algorithm analysis, portfolio-based algorithm selection, and the automatic configuration of parameterized algorithms. Over the past decade, a wide variety of techniques have been studied for building such models. Here, we describe extensions and improvements of existing models, new families of models, and, perhaps most importantly, a much more thorough treatment of algorithm parameters as model inputs. We also comprehensively describe new and existing features for predicting algorithm runtime for propositional satisfiability (SAT), travelling salesperson (TSP), and mixed integer programming (MIP) problems. We evaluate these innovations through the largest empirical analysis of its kind, comparing to a wide range of runtime modelling techniques from the literature. Our experiments consider 11 algorithms and 35 instance distributions; they also span a very wide range of SAT, MIP, and TSP instances, with the least structured having been generated uniformly at random and the most structured having emerged from real industrial applications. Overall, we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances, to new algorithms from a parameterized space, and to both simultaneously.

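The core idea, learning a mapping from instance features to runtime, can be illustrated in miniature. The sketch below fits a one-feature linear model of log-runtime on synthetic data and then extrapolates to an unseen instance size; the paper's actual model families (e.g. random forests) and feature sets are far richer, and the data here is fabricated for illustration only.

```python
import math

# Synthetic training data: (instance_size, observed_runtime_seconds),
# generated from runtime = exp(0.02 * size). Real data would be noisy.
train = [(n, math.exp(0.02 * n)) for n in range(50, 500, 50)]

xs = [n for n, _ in train]
ys = [math.log(t) for _, t in train]          # model log-runtime linearly

# Ordinary least squares for y = a + b * x.
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def predict_runtime(size):
    """Predicted runtime for a previously unseen instance size."""
    return math.exp(a + b * size)

print("predicted runtime at size 600:", predict_runtime(600))
```

Modelling the logarithm of runtime is the standard trick here: runtimes span orders of magnitude, and a linear model on the raw scale would be dominated by the largest instances.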
Dynamic multi-armed bandits and extreme value-based rewards for adaptive operator selection in evolutionary algorithms
2009
Cited by 15 (9 self)
The performance of many efficient algorithms critically depends on the tuning of their parameters, which in turn depends on the problem at hand. For example, the performance of Evolutionary Algorithms critically depends on the judicious setting of the operator rates. The Adaptive Operator Selection (AOS) heuristic that is proposed here rewards each operator based on the extreme value of the fitness improvement lately incurred by this operator, and uses a Multi-Armed Bandit (MAB) selection process based on those rewards to choose which operator to apply next. This Extreme-based Multi-Armed Bandit approach is experimentally validated against the Average-based MAB method, and is shown to outperform previously published methods, whether they use a classical Average-based rewarding technique or the same Extreme-based mechanism. The validation test suite includes the easy OneMax problem and a family of hard problems known as “Long k-paths”.

Tuning & Simplifying Heuristical Optimization
2010
Cited by 12 (0 self)
This thesis is about the tuning and simplification of black-box (direct-search, derivative-free) optimization methods, which by definition do not use gradient information to guide their search for an optimum but merely need a fitness (cost, error, objective) measure for each candidate solution to the optimization problem. Such optimization methods often have parameters that influence their behaviour and efficacy. A Meta-Optimization technique is presented here for tuning the behavioural parameters of an optimization method by employing an additional layer of optimization. This is used in a number of experiments on two popular optimization methods, Differential Evolution and Particle Swarm Optimization, and unveils the true performance capabilities of an optimizer in different usage scenarios. It is found that state-of-the-art optimizer variants with their supposedly adaptive behavioural parameters do not have a general and consistent performance advantage but are outperformed in several cases by simplified optimizers, if only the behavioural parameters are tuned properly.

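Meta-optimization as described here is an optimization loop wrapped around another optimizer. The sketch below uses random search as the outer layer to tune the behavioural parameters (F, CR, population size) of a basic Differential Evolution (DE/rand/1/bin) on the sphere function; all ranges, budgets, and the choice of random search for the outer layer are illustrative assumptions, not the thesis's actual setup.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def de(F, CR, pop_size, dim=5, gens=40, seed=0):
    """Basic DE/rand/1/bin; returns the best fitness found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [sphere(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee one mutated dimension
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            tf = sphere(trial)
            if tf <= fit[i]:            # greedy one-to-one replacement
                pop[i], fit[i] = trial, tf
    return min(fit)

def meta_optimize(n_trials=15, seed=2):
    """Outer layer: random search over DE's behavioural parameters."""
    rng = random.Random(seed)
    best = (None, float("inf"))
    for _ in range(n_trials):
        params = (rng.uniform(0.1, 1.0),   # F  (differential weight)
                  rng.uniform(0.0, 1.0),   # CR (crossover rate)
                  rng.randint(4, 30))      # population size
        score = de(*params)
        if score < best[1]:
            best = (params, score)
    return best

(F, CR, NP), score = meta_optimize()
print("tuned parameters:", (F, CR, NP), "inner best fitness:", score)
```

The outer layer never looks inside the inner optimizer: it only observes the final fitness each parameter setting achieves, which is what lets meta-optimization compare adaptive and simplified variants on equal footing.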
Modern continuous optimization algorithms for tuning real and integer algorithm parameters
In Proceedings of the International Conference on Swarm Intelligence (ANTS 2010), LNCS 6234, 2010
Cited by 11 (3 self)
To obtain peak performance from optimization algorithms, their parameters must be set appropriately. Frequently, algorithm parameters can take values from the set of real numbers, or from a large integer set. To tune this kind of parameter, it is attractive to apply state-of-the-art continuous optimization algorithms instead of using a tedious, and error-prone, hands-on approach. In this paper, we study the performance of several continuous optimization algorithms for the algorithm parameter tuning task. As case studies, we use a number of optimization algorithms from the swarm intelligence literature.