Results 1–10 of 41
A Tabu-Search Hyperheuristic for Timetabling and Rostering
, 2003
Cited by 147 (61 self)
Abstract:
Hyperheuristics can be defined to be heuristics which choose between heuristics in order to solve a given optimisation problem. The main motivation behind the development of such approaches is the goal of developing automated scheduling methods which are not restricted to one problem. In this paper we report the investigation of a hyperheuristic approach and evaluate it on various instances of two distinct timetabling and rostering problems. In the framework of our hyperheuristic approach, heuristics compete using rules based on the principles of reinforcement learning. A tabu list of heuristics is also maintained which prevents certain heuristics from being chosen at certain times during the search. We demonstrate that this tabu-search hyperheuristic is an easily reusable method which can produce solutions of at least acceptable quality across a variety of problems and instances. In effect the proposed method is capable of producing solutions that are competitive with those obtained using state-of-the-art problem-specific techniques for the problems studied here, but is fundamentally more general than those techniques.
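The compete-and-tabu scheme this abstract describes can be illustrated in a few lines. The following is a hedged sketch of the general idea only, not the paper's exact algorithm: the unit reward/penalty, the tabu tenure, and the toy heuristics in the demo are assumptions.

```python
import random

def tabu_search_hyperheuristic(solution, heuristics, evaluate,
                               iterations=100, tabu_tenure=3):
    """Illustrative selection hyperheuristic: low-level heuristics compete
    via reinforcement-style scores, and a tabu list temporarily excludes
    heuristics that fail to improve the solution."""
    scores = {h.__name__: 0 for h in heuristics}
    tabu = {}  # heuristic name -> iterations left on the tabu list
    best, best_cost = solution, evaluate(solution)
    for _ in range(iterations):
        tabu = {name: t - 1 for name, t in tabu.items() if t > 1}  # age out
        candidates = [h for h in heuristics if h.__name__ not in tabu]
        if not candidates:            # everything tabu: fall back to all
            candidates = heuristics
        # pick the highest-scoring non-tabu heuristic, breaking ties at random
        top = max(scores[h.__name__] for h in candidates)
        h = random.choice([c for c in candidates if scores[c.__name__] == top])
        candidate = h(best)
        cost = evaluate(candidate)
        if cost < best_cost:
            best, best_cost = candidate, cost
            scores[h.__name__] += 1   # reward an improving heuristic
        else:
            scores[h.__name__] -= 1   # penalise it and make it tabu
            tabu[h.__name__] = tabu_tenure
    return best, best_cost
```

Applied to a toy minimisation (two "heuristics" that nudge an integer up or down, objective x squared), the competition quickly concentrates on the improving heuristic while the tabu list keeps the failing one out of contention.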
Random search for hyperparameter optimization
 In: Journal of Machine Learning Research
Cited by 125 (16 self)
Abstract:
Grid search and manual search are the most widely used strategies for hyperparameter optimization. This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyperparameter optimization than trials on a grid. Empirical evidence comes from a comparison with a large previous study that used grid search and manual search to configure neural networks and deep belief networks. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space. Compared with deep belief networks configured by a thoughtful combination of manual search and grid search, purely random search over the same 32-dimensional configuration space found statistically equal performance on four of seven data sets, and superior performance on one of seven. A Gaussian process analysis of the function from hyperparameters to validation set performance reveals that for most data sets only a few of the hyperparameters really matter, but that different hyperparameters are important on different data sets. This phenomenon makes …
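The random-search strategy the abstract argues for is simple to state. The sketch below is illustrative (the search space, objective, and seed are assumptions, not the paper's experimental setup), but it shows the key point: when only a few dimensions matter, every random trial contributes a fresh value on the important axis, while a grid wastes trials repeating the same few values.

```python
import random

def random_search(objective, space, n_trials, seed=0):
    """Sample each hyperparameter uniformly from its range and keep the
    best trial, i.e. the random-search strategy contrasted with grid search."""
    rng = random.Random(seed)
    best_cfg, best_val = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        val = objective(cfg)
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Toy objective where only "lr" matters, mimicking the paper's observation
# that most hyperparameters have little effect on validation performance.
space = {"lr": (0.0, 1.0), "momentum": (0.0, 1.0)}
cfg, val = random_search(lambda c: (c["lr"] - 0.3) ** 2, space, n_trials=100)
```

With 100 trials, random search places 100 distinct values on the important `lr` axis; a 10 x 10 grid over the same two dimensions would place only 10.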
A Classification of Hyperheuristic Approaches
Cited by 58 (21 self)
Abstract:
The current state of the art in hyperheuristic research comprises a set of approaches that share the common goal of automating the design and adaptation of heuristic methods to solve hard computational search problems. The main goal is to produce more generally applicable search methodologies. In this chapter we present an overview of previous categorisations of hyperheuristics and provide a unified classification and definition which captures the work that is being undertaken in this field. We distinguish between two main hyperheuristic categories: heuristic selection and heuristic generation. Some representative examples of each category are discussed in detail. Our goal is both to clarify the main features of existing techniques and to suggest new directions for hyperheuristic research.
Exploring Hyperheuristic Methodologies with Genetic Programming
Cited by 34 (14 self)
Abstract:
Hyperheuristics represent a novel search methodology that is motivated by the goal of automating the process of selecting or combining simpler heuristics in order to solve hard computational search problems. An extension of the original hyperheuristic idea is to generate new heuristics which are not currently known. These approaches operate on a search space of heuristics rather than directly on a search space of solutions to the underlying problem, which is the case with most metaheuristic implementations. In the majority of hyperheuristic studies so far, a framework is provided with a set of human-designed heuristics, taken from the literature, which have good measures of performance in practice. A less well studied approach aims to generate new heuristics from a set of potential heuristic components. The purpose of this chapter is to discuss this class of hyperheuristics, in which Genetic Programming is the most widely used methodology. A detailed discussion is presented including the steps needed to apply this technique, some representative case studies, a literature review of related work, and a discussion of relevant issues. Our aim is to convey the exciting potential of this innovative approach for automating the heuristic design process.
Investigation of a Tabu-Assisted Hyper-Heuristic Genetic Algorithm
 In Proceedings of the Congress on Evolutionary Computation (CEC 2003)
, 2003
Cited by 21 (5 self)
Abstract:
This paper investigates a tabu-assisted genetic-algorithm-based hyperheuristic (hyperTGA) for personnel scheduling problems. We recently introduced a hyperheuristic genetic algorithm (hyperGA) with an adaptive-length chromosome which aims to evolve an ordering of low-level heuristics in order to find good-quality solutions to given problems. The addition of a tabu method, the focus of this paper, extends that work. The aim of adding a tabu list to the hyperGA is to indicate the efficiency of each gene within the chromosome. We apply the algorithm to a geographically distributed training staff and course scheduling problem and compare the computational results with our previous hyperGA.
Iterated local search vs. hyperheuristics: Towards general-purpose search algorithms
 In IEEE Congress on Evolutionary Computation (CEC 2010)
, 2010
Cited by 20 (8 self)
Abstract:
An important challenge within hyperheuristic research is to design search methodologies that work well, not only across different instances of the same problem, but also across different problem domains. This article conducts an empirical study involving three different domains in combinatorial optimisation: bin packing, permutation flow shop and personnel scheduling. Using a common software interface (HyFlex), the same algorithms (high-level strategies or hyperheuristics) can be readily run on all of them. The study is intended as a proof of concept of the proposed interface and domain modules, and as a benchmark for testing the generalisation abilities of heuristic search algorithms. Several algorithms and variants from the literature were implemented and tested. Of these, the implementation of iterated local search produced the best overall performance. Interestingly, this is one of the most conceptually simple competing algorithms; its advantage as a robust algorithm is probably due to two factors: (i) the simple yet powerful exploration/exploitation balance achieved by systematically combining a perturbation followed by local search; and (ii) its parameterless nature. We believe that the challenge is still open for the design of robust algorithms that can learn and adapt to the available low-level heuristics, and thus select and apply them accordingly.
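The iterated-local-search scheme singled out in this abstract is conceptually tiny. A hedged skeleton might look as follows; the `local_search`, `perturb`, and `evaluate` callables are placeholders for problem-specific components, not HyFlex interfaces.

```python
def iterated_local_search(initial, local_search, perturb, evaluate,
                          iterations=50):
    """ILS skeleton: perturb the incumbent, re-run local search, and keep
    the new solution only if it improves on the best found so far."""
    best = local_search(initial)
    best_cost = evaluate(best)
    for _ in range(iterations):
        candidate = local_search(perturb(best))  # perturbation + local search
        cost = evaluate(candidate)
        if cost < best_cost:                     # simple improving acceptance
            best, best_cost = candidate, cost
    return best, best_cost
```

The two factors the abstract credits are visible directly in the code: the perturb/local-search pair provides the exploration/exploitation balance, and nothing in the loop needs tuning beyond an iteration budget.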
Algorithm Selection for Combinatorial Search Problems: A Survey
, 2012
Cited by 20 (5 self)
Abstract:
The Algorithm Selection Problem is concerned with selecting the best algorithm to solve a given problem on a case-by-case basis. It has become especially relevant in the last decade, as researchers are increasingly investigating how to identify the most suitable existing algorithm for solving a problem instead of developing new algorithms. This survey presents an overview of this work focusing on the contributions made in the area of combinatorial search problems, where Algorithm Selection techniques have achieved significant performance improvements. We unify and organise the vast literature according to criteria that determine Algorithm Selection systems in practice. The comprehensive classification of approaches identifies and analyses the different directions from which Algorithm Selection has been approached. This paper contrasts and compares different methods for solving the problem as well as ways of using these solutions. It closes by identifying directions of current and future research.
A simulated annealing hyperheuristic methodology for flexible decision support
, 2007
Cited by 18 (9 self)
Abstract:
One of the main motivations for investigating hyperheuristic methodologies is to provide a more general search framework than is currently available. Most of the current search techniques represent approaches that are largely adapted for specific search problems (and, in some cases, even specific problem instances). There are many real-world scenarios where the development of such bespoke systems is entirely appropriate. However, there are other situations where it would be beneficial to have methodologies which are more generally applicable to more problems. One of our motivating goals is to underpin the development of more flexible search methodologies that can be easily and automatically employed on a broader range of problems than is currently possible. Almost all the heuristics that have appeared in the literature have been designed and selected by humans. In this paper, we investigate a simulated annealing hyperheuristic methodology which operates on a search space of heuristics and which employs a stochastic heuristic selection strategy and a short-term memory. The generality and performance of the proposed algorithm is demonstrated over a large number of benchmark data sets drawn from three very different and difficult (NP-hard) problems: nurse rostering, university course timetabling and one-dimensional bin packing. Experimental results show that the proposed hyperheuristic is able to achieve significant performance improvements over a recently proposed tabu search hyperheuristic without lowering the level of generality.
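A minimal sketch of a simulated-annealing hyperheuristic in the spirit of this abstract follows. The cooling schedule, starting temperature, and acceptance details below are illustrative assumptions, not the authors' exact configuration; only the overall shape (stochastic heuristic selection plus annealing acceptance) is taken from the abstract.

```python
import math
import random

def sa_hyperheuristic(solution, heuristics, evaluate, iterations=200,
                      start_temp=10.0, cooling=0.95, seed=0):
    """Sketch: a low-level heuristic is selected stochastically, and
    worsening moves are accepted with the usual annealing probability
    exp(-delta / temperature)."""
    rng = random.Random(seed)
    current, current_cost = solution, evaluate(solution)
    best, best_cost = current, current_cost
    temp = start_temp
    for _ in range(iterations):
        h = rng.choice(heuristics)          # stochastic heuristic selection
        candidate = h(current)
        delta = evaluate(candidate) - current_cost
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current, current_cost = candidate, current_cost + delta
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        temp *= cooling                     # geometric cooling schedule
    return best, best_cost
```

Early on, the high temperature lets poor heuristic applications pass, which is the exploratory behaviour that distinguishes this approach from a purely greedy selection rule; as the temperature cools, acceptance becomes effectively improving-only.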
An Adaptive Length Chromosome Hyperheuristic Genetic Algorithm for a Trainer Scheduling Problem
 Proceedings of the Fourth Asia-Pacific Conference on Simulated Evolution And Learning (SEAL'02), Orchid Country Club, Singapore, 18-22 Nov 2002
Cited by 16 (4 self)
Abstract:
HyperGA was introduced by the authors as a genetic-algorithm-based hyperheuristic which aims to evolve an ordering of low-level heuristics so as to find a good-quality solution to a given problem. The adaptive-length-chromosome hyperGA, let's call it ALC-hyperGA, is an extension of the authors' previous work, in which the chromosome was of fixed length. The aim of a variable-length chromosome is twofold: (1) it allows dynamic removal and insertion of heuristics; (2) it allows the GA to find a good chromosome length which could otherwise only be found by experimentation. We apply the ALC-hyperGA to a geographically distributed training staff and course scheduling problem, and report that good-quality solutions can be found. We also present results for four versions of the ALC-hyperGA, applied to five test data sets.
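The two aims of the variable-length chromosome (dynamic removal/insertion of heuristics, and letting the GA search over chromosome length) can be illustrated by a single mutation operator. This is a sketch only; the equal operator probabilities and the integer gene encoding are assumptions, not the paper's design.

```python
import random

def mutate_variable_length(chromosome, heuristic_pool, rng=None):
    """Adaptive-length mutation sketch: insert, remove or replace one gene,
    so the GA can change chromosome length as well as content."""
    rng = rng or random.Random()
    c = list(chromosome)
    op = rng.choice(["insert", "remove", "replace"])
    if op == "remove" and len(c) > 1:      # never shrink below one gene
        del c[rng.randrange(len(c))]
    elif op == "replace" and c:
        c[rng.randrange(len(c))] = rng.choice(heuristic_pool)
    else:                                  # "insert", or chromosome too short
        c.insert(rng.randrange(len(c) + 1), rng.choice(heuristic_pool))
    return c
```

Because offspring lengths drift under repeated insertion and removal, selection pressure alone can settle the population on a good chromosome length, which is the second aim stated in the abstract.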
Guided Operators for a Hyper-Heuristic Genetic Algorithm
 In Proceedings of AI 2003: Advances in Artificial Intelligence, the 16th Australian Conference on Artificial Intelligence (AI'03), eds. Tamás D. Gedeon and Lance Chun Che Fung
, 2003
Cited by 16 (2 self)
Abstract:
We have recently introduced a hyperheuristic genetic algorithm (hyperGA) with an adaptive-length chromosome which aims to evolve an ordering of low-level heuristics so as to find good-quality solutions to given problems. The guided mutation and crossover hyperGA, the focus of this paper, extends that work. The aim of a guided hyperGA is to make the dynamic removal and insertion of heuristics more efficient, and to evolve sequences of heuristics that produce promising solutions more effectively. We apply the algorithm to a geographically distributed training staff and course scheduling problem and compare the computational results with those of other hyperGAs. To show the robustness of hyperGAs, we also apply our methods to a student project presentation scheduling problem in a UK university and compare results with another hyperheuristic method.