Results 1–10 of 33
Sequential Model-Based Optimization for General Algorithm Configuration (extended version)
Abstract

Cited by 98 (28 self)
Abstract. State-of-the-art algorithms for hard computational problems often expose many parameters that can be modified to improve empirical performance. However, manually exploring the resulting combinatorial space of parameter settings is tedious and tends to lead to unsatisfactory outcomes. Recently, automated approaches for solving this algorithm configuration problem have led to substantial improvements in the state of the art for solving various problems. One promising approach constructs explicit regression models to describe the dependence of target algorithm performance on parameter settings; however, this approach has so far been limited to the optimization of few numerical algorithm parameters on single instances. In this paper, we extend this paradigm for the first time to general algorithm configuration problems, allowing many categorical parameters and optimization for sets of instances. We experimentally validate our new algorithm configuration procedure by optimizing a local search and a tree search solver for the propositional satisfiability problem (SAT), as well as the commercial mixed integer programming (MIP) solver CPLEX. In these experiments, our procedure yielded state-of-the-art performance, and in many cases outperformed the previous best configuration approach.
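The core loop this abstract describes (fit a regression model of performance over parameter settings, then use that model to choose the next configuration to evaluate) can be sketched as follows. This is a minimal illustration only, not the paper's actual procedure: the paper uses random-forest models and an intensification mechanism, whereas this sketch uses a crude 1-nearest-neighbour surrogate, and all names and the toy objective are invented for the example.

```python
import math
import random

def smbo(evaluate, sample_config, n_init=5, n_iter=20, n_candidates=100, seed=0):
    """Minimal sequential model-based optimization over numeric parameters."""
    rng = random.Random(seed)
    # Initial design: a handful of random configurations.
    history = [(c, evaluate(c)) for c in (sample_config(rng) for _ in range(n_init))]

    def predicted(cfg):
        # Crude surrogate: performance of the nearest evaluated configuration
        # (a stand-in for the explicit regression models used in the paper).
        return min(history, key=lambda h: math.dist(h[0], cfg))[1]

    for _ in range(n_iter):
        # Sample candidates, pick the one the surrogate considers most
        # promising, evaluate it for real, and fold it back into the data.
        candidates = [sample_config(rng) for _ in range(n_candidates)]
        best = min(candidates, key=predicted)
        history.append((best, evaluate(best)))
    return min(history, key=lambda h: h[1])

# Toy performance function over two hypothetical numeric parameters.
perf_fn = lambda c: (c[0] - 0.3) ** 2 + (c[1] + 0.1) ** 2
sampler = lambda rng: (rng.uniform(-1, 1), rng.uniform(-1, 1))
config, perf = smbo(perf_fn, sampler)
```

Under these assumptions the search concentrates evaluations near the best configuration seen so far while the model cheaply screens the other candidates.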
Automated Configuration of Algorithms for Solving Hard Computational Problems
, 2009
Abstract

Cited by 29 (11 self)
The best-performing algorithms for many hard problems are highly parameterized. Selecting the best heuristics and tuning their parameters for optimal overall performance is often a difficult, tedious, and unsatisfying task. This thesis studies the automation of this important part of algorithm design: the configuration of discrete algorithm components and their continuous parameters to construct an algorithm with desirable empirical performance characteristics. Automated configuration procedures can facilitate algorithm development and be applied on the end user side to optimize performance for new instance types and optimization objectives. The use of such procedures separates high-level cognitive tasks carried out by humans from tedious low-level tasks that can be left to machines. We introduce two alternative algorithm configuration frameworks: iterated local search in parameter configuration space and sequential optimization based on response surface models. To the best of our knowledge, our local search approach is the first that goes beyond local optima. Our model-based search techniques significantly outperform existing techniques and extend them in ways crucial for general algorithm configuration: they can handle categorical parameters, optimization objectives defined across multiple instances, and tens of thousands …
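The first of the two frameworks this abstract names, iterated local search in parameter configuration space, can be sketched concretely. The sketch below is an illustration under assumed conventions, not the thesis's actual procedure: it uses a one-exchange neighbourhood over discrete parameter domains, a first-improvement descent, and a simple random perturbation, with all names invented.

```python
import random

def ils_configure(evaluate, domains, n_rounds=20, seed=0):
    """Iterated local search over a discrete parameter configuration space."""
    rng = random.Random(seed)
    # Start from a random configuration: one value per parameter.
    config = {p: rng.choice(vs) for p, vs in domains.items()}

    def local_search(cfg):
        # First-improvement descent in the one-exchange neighbourhood:
        # change a single parameter at a time while it helps.
        cfg, score = dict(cfg), evaluate(cfg)
        improved = True
        while improved:
            improved = False
            for p, vs in domains.items():
                for v in vs:
                    if v == cfg[p]:
                        continue
                    cand = dict(cfg)
                    cand[p] = v
                    s = evaluate(cand)
                    if s < score:
                        cfg, score, improved = cand, s, True
        return cfg, score

    best, best_score = local_search(config)
    for _ in range(n_rounds):
        # Perturb: randomly reset a couple of parameters, then descend again,
        # which is what lets the search escape local optima.
        cand = dict(best)
        for p in rng.sample(list(domains), k=min(2, len(domains))):
            cand[p] = rng.choice(domains[p])
        cand, s = local_search(cand)
        if s < best_score:
            best, best_score = cand, s
    return best, best_score

# Toy setup: three hypothetical parameters; cost = Hamming distance to a target.
domains = {"a": [0, 1, 2, 3], "b": ["x", "y"], "c": [0.1, 0.5, 0.9]}
target = {"a": 2, "b": "y", "c": 0.5}
cost = lambda c: sum(c[p] != target[p] for p in c)
cfg, score = ils_configure(cost, domains)
```

On this separable toy objective a single descent already reaches the optimum; the perturb-and-descend rounds matter when parameters interact.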
Comparing parameter tuning methods for evolutionary algorithms
 In Proceedings of the IEEE Congress on Evolutionary Computation (CEC)
, 2009
Abstract

Cited by 28 (5 self)
Abstract — Tuning the parameters of an evolutionary algorithm (EA) to a given problem at hand is essential for good algorithm performance. Optimizing parameter values is, however, a non-trivial problem, beyond the limits of human problem solving. In this light it is odd that no parameter tuning algorithms are used widely in evolutionary computing. This paper is meant to be a stepping stone towards a better practice by discussing the most important issues related to tuning EA parameters, describing a number of existing tuning methods, and presenting a modest experimental comparison among them. The paper is concluded by suggestions for future research, hopefully inspiring fellow researchers to further work. Index Terms — evolutionary algorithms, parameter tuning
Parameter Tuning for Configuring and Analyzing Evolutionary Algorithms
 Swarm and Evolutionary Computation
, 2011
Abstract

Cited by 24 (1 self)
In this paper we present a conceptual framework for parameter tuning, provide a survey of tuning methods, and discuss related methodological issues. The framework is based on a three-tier hierarchy of a problem, an evolutionary algorithm (EA), and a tuner. Furthermore, we distinguish problem instances, parameters, and EA performance measures as major factors, and discuss how tuning can be directed to algorithm performance and/or robustness. For the survey part we establish different taxonomies to categorize tuning methods and review existing work. Finally, we elaborate on how tuning can improve methodology by facilitating well-founded experimental comparisons and algorithm analysis.
Design and analysis of optimization algorithms using computational statistics
 Applied Numerical Analysis &amp; Computational Mathematics (ANACM)
, 2004
Abstract

Cited by 21 (12 self)
We propose a highly flexible sequential methodology for the experimental analysis of optimization algorithms. The proposed technique employs computational statistics methods to investigate the interactions among optimization problems, algorithms, and environments. The workings of the proposed technique are illustrated on the parameterization and comparison of both a population-based and a direct search algorithm, on a well-known benchmark problem, as well as on a simplified model of a real-world problem. Experimental results are reported and conclusions are derived. © 2004 WILEY-VCH Verlag GmbH &amp; Co. KGaA, Weinheim
On parameter tuning in search based software engineering
 In Proc. SSBSE
, 2011
Abstract

Cited by 20 (10 self)
Abstract. When applying search-based software engineering (SBSE) techniques one is confronted with a multitude of different parameters that need to be chosen: Which population size for a genetic algorithm? Which selection mechanism to use? What settings to use for dozens of other parameters? This problem not only troubles users who want to apply SBSE tools in practice, but also researchers performing experimentation – how to compare algorithms that can have different parameter settings? To shed light on the problem of parameters, we performed the largest empirical analysis on parameter tuning in SBSE to date, collecting and statistically analysing data from more than a million experiments. As a case study, we chose test data generation, one of the most popular problems in SBSE. Our data confirm that tuning does have a critical impact on algorithmic performance, and overfitting of parameter tuning is a dire threat to the external validity of empirical analyses in SBSE. Based on this large empirical evidence, we give guidelines on how to handle parameter tuning.
Stochastic local search algorithms for the graph set T-colouring …
 APPROXIMATION ALGORITHMS AND METAHEURISTICS; COMPUTER AND INFORMATION SCIENCE SERIES
, 2005
Abstract

Cited by 17 (3 self)
The graph set T-colouring problem (GSTCP) generalises the classical graph colouring problem; it asks for the assignment of sets of integers to the vertices of a graph such that constraints on the separation of any two numbers assigned to a single vertex or to adjacent vertices are satisfied and some objective function is optimised. Among the objective functions of interest is the minimisation of the difference between the largest and the smallest integers used (the span). In this article, we present an experimental study of local search algorithms for solving general and large-size instances of the GSTCP. We compare the performance of previously known as well as new algorithms, covering both simple construction heuristics and elaborate stochastic local search algorithms. We investigate systematically different models and search strategies in the algorithms and determine the best choices for different types of instance. The study is an example of the design of effective local search for constraint optimisation problems.
Algorithm runtime prediction: Methods and evaluation
 Artificial Intelligence J
, 2014
Abstract

Cited by 15 (5 self)
Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on a previously unseen input, using machine learning techniques to build a model of the algorithm’s runtime as a function of problem-specific instance features. Such models have important applications to algorithm analysis, portfolio-based algorithm selection, and the automatic configuration of parameterized algorithms. Over the past decade, a wide variety of techniques have been studied for building such models. Here, we describe extensions and improvements of existing models, new families of models, and, perhaps most importantly, a much more thorough treatment of algorithm parameters as model inputs. We also comprehensively describe new and existing features for predicting algorithm runtime for propositional satisfiability (SAT), travelling salesperson (TSP) and mixed integer programming (MIP) problems. We evaluate these innovations through the largest empirical analysis of its kind, comparing to a wide range of runtime modelling techniques from the literature. Our experiments consider 11 algorithms and 35 instance distributions; they also span a very wide range of SAT, MIP, and TSP instances, with the least structured having been generated uniformly at random and the most structured having emerged from real industrial applications. Overall, we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances, to new algorithms from a parameterized space, and to both simultaneously.
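The basic recipe behind such empirical performance models, regressing (log) runtime on instance features, can be sketched in a few lines. Everything below is synthetic and illustrative: the single "instance size" feature, the exponential-runtime assumption, and the ordinary-least-squares fit are stand-ins for the richer feature sets and model families the paper studies.

```python
import random

rng = random.Random(0)

# Synthetic instances: one feature (size); runtime grows exponentially with
# size, so we model log10(runtime), as is common in this literature.
sizes = [rng.uniform(10, 100) for _ in range(200)]
log_rt = [0.02 * n + rng.gauss(0, 0.1) for n in sizes]  # true slope 0.02

# Ordinary least squares for one feature: log_rt ≈ a * size + b.
n = len(sizes)
mx = sum(sizes) / n
my = sum(log_rt) / n
sxx = sum((x - mx) ** 2 for x in sizes)
a = sum((x - mx) * (y - my) for x, y in zip(sizes, log_rt)) / sxx
b = my - a * mx

def predict_runtime(size):
    # Back-transform from log10 space to get a runtime prediction (seconds).
    return 10 ** (a * size + b)
```

Fitting in log space keeps the model from being dominated by the longest runs, and predictions for unseen sizes come from a single back-transform.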
Considerations of budget allocation for sequential parameter optimization (SPO)
 Workshop on Empirical Methods for the Analysis of Algorithms, Proceedings
, 2006
Abstract

Cited by 8 (3 self)
Abstract. Obviously, it is not a good idea to apply an optimization algorithm with wrongly specified parameter settings, a situation which can be avoided by applying algorithm tuning. Sequential tuning procedures are considered more efficient than single-stage procedures. [1] introduced a sequential approach for algorithm tuning that has been successfully applied to several real-world optimization tasks and experimental studies. The sequential procedure requires the specification of an initial sample size k. Small k values lead to poor models and thus poor predictions for the subsequent stages, whereas large values prevent an extensive search and local fine-tuning. This study analyzes the interaction between global and local search in sequential tuning procedures and gives recommendations for an adequate budget allocation. Furthermore, the integration of hypothesis testing for increasing the effectiveness of the latter phase is investigated.
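The trade-off this abstract describes is easy to make concrete: under a fixed total evaluation budget, every evaluation spent on the initial sample k is one not spent on sequential refinement. The toy below is not SPO itself; the global stage is plain random sampling and the "refinement" stage is a Gaussian step around the incumbent, with all numbers hypothetical.

```python
import random

def tune(evaluate, sample, k, budget, seed=0):
    """Spend k evaluations on an initial random design, the rest locally."""
    rng = random.Random(seed)
    # Stage 1: the initial design of k random configurations.
    history = [(c, evaluate(c)) for c in (sample(rng) for _ in range(k))]
    # Stage 2: spend the remaining budget refining around the incumbent.
    for _ in range(budget - k):
        incumbent = min(history, key=lambda h: h[1])[0]
        cand = incumbent + rng.gauss(0, 0.1)  # small local step
        history.append((cand, evaluate(cand)))
    return min(h[1] for h in history)

# One numeric parameter; optimum at 0.7; total budget of 30 evaluations.
cost = lambda x: (x - 0.7) ** 2
sample = lambda rng: rng.uniform(-5, 5)
results = {k: tune(cost, sample, k, budget=30) for k in (2, 10, 28)}
```

Comparing the entries of `results` for different k replays the dilemma in the abstract: a tiny initial design may anchor refinement in a poor region, while a near-total one leaves almost nothing for fine-tuning.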