Results 1 - 4 of 4
Grammar-based generation of variable-selection heuristics for constraint satisfaction problems, 2014
Abstract: We propose a grammar-based genetic programming framework that generates variable-selection heuristics for solving constraint satisfaction problems. This approach can be considered as a generation hyper-heuristic. A grammar to express heuristics is extracted from successful human-designed variable-selection heuristics. The search is performed on the derivation sequences of this grammar using a strongly typed genetic programming framework. The approach brings two innovations to grammar-based hyper-heuristics in this domain: the incorporation of if-then-else rules to the function set, and the implementation of overloaded functions capable of handling different input dimensionality. Moreover, the heuristic search space is explored using not only evolutionary search, but also two alternative simpler strategies, namely, iterated local search and parallel hill climbing. We tested our approach on synthetic and real-world instances. The newly generated heuristics have an improved performance when compared against human-designed heuristics. Our results suggest that the constrained search space imposed by the proposed grammar is the main factor in the generation of good heuristics. However, to generate more general heuristics, the composition of the training set and the search methodology played an important role. We found that increasing the variability of the training set improved the generality of the evolved heuristics, and the evolutionary search strategy produced slightly better results.
Improving Performance of a Hyper-heuristic Using a Multilayer Perceptron for Vehicle Routing
Abstract: A hyper-heuristic is a heuristic optimisation method which generates or selects heuristics (move operators) based on a set of components while solving a computationally difficult problem. Apprenticeship learning arises while observing the behaviour of an expert in action. In this study, we use a multilayer perceptron (MLP) as an apprenticeship learning algorithm to improve upon the performance of a state-of-the-art selection hyper-heuristic used as an expert, which was the winner of a cross-domain heuristic search challenge (CHeSC 2011). We collect data based on the relevant actions of the expert while solving selected vehicle routing problem instances from CHeSC 2011. Then an MLP is trained using this data to build a selection hyper-heuristic consisting of a number of classifiers for heuristic selection, parameter control and move acceptance. The generated selection hyper-heuristic is tested on the unseen vehicle routing problem instances. The empirical results indicate the success of the MLP-based hyper-heuristic, achieving a better performance than the expert and some previously proposed algorithms.
Benchmarks That Matter For Genetic Programming
Abstract: There have been several papers published relating to the practice of benchmarking in machine learning and Genetic Programming (GP) in particular. In addition, GP has been accused of targeting over-simplified ‘toy’ problems that do not reflect the complexity of real-world applications for which GP is ultimately intended. There are also theoretical results that relate the performance of an algorithm with a probability distribution over problem instances, and so the current debate concerning benchmarks spans from the theoretical to the empirical. The aim of this article is to consolidate an emerging theme arising from these papers and suggest that benchmarks should not be arbitrarily selected but should instead be drawn from an underlying probability distribution that reflects the prob-