Results 1 - 3 of 3
Experimental evaluation of heuristic optimization algorithms: A tutorial
Journal of Heuristics, 2001
Abstract

Cited by 48 (0 self)
Heuristic optimization algorithms seek good feasible solutions to optimization problems in circumstances where the complexities of the problem or the limited time available for solution do not allow exact solution. Although worst-case and probabilistic analysis of algorithms have produced insight on some classic models, most of the heuristics developed for large optimization problems must be evaluated empirically, by applying procedures to a collection of specific instances and comparing the observed solution quality and computational burden. This paper focuses on the methodological issues that must be confronted by researchers undertaking such experimental evaluations of heuristics, including experimental design, sources of test instances, measures of algorithmic performance, analysis of results, and presentation in papers and talks. The questions are difficult, and there are no clear right answers. We seek only to highlight the main issues, present alternative ways of addressing them under different circumstances, and caution about pitfalls to avoid. Key Words: Heuristic optimization, computational experiments
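The empirical evaluation the abstract describes, running each heuristic on a collection of instances and recording solution quality and computational burden, can be sketched as a small harness. This is an illustrative sketch only, not the paper's methodology; the TSP instances, the `nearest_neighbor` and `random_tour` heuristics, and all function names are assumptions chosen to make the example concrete.

```python
import random
import time

def random_instance(n, seed):
    """A stand-in test instance: a random symmetric TSP distance matrix."""
    rng = random.Random(seed)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = rng.uniform(1.0, 100.0)
    return d

def tour_length(d, tour):
    """Solution quality measure: total length of a closed tour."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbor(d):
    """A simple constructive heuristic, used here only as an evaluation subject."""
    n = len(d)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: d[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def random_tour(d):
    """A baseline 'heuristic': a fixed random permutation of the cities."""
    tour = list(range(len(d)))
    random.Random(0).shuffle(tour)
    return tour

def evaluate(heuristics, instances):
    """Run every heuristic on every instance; record quality and wall-clock time."""
    results = []
    for name, h in heuristics.items():
        for idx, inst in enumerate(instances):
            t0 = time.perf_counter()
            sol = h(inst)
            elapsed = time.perf_counter() - t0
            results.append({"heuristic": name, "instance": idx,
                            "quality": tour_length(inst, sol), "time": elapsed})
    return results

instances = [random_instance(30, seed) for seed in range(5)]
results = evaluate({"nearest_neighbor": nearest_neighbor,
                    "random": random_tour}, instances)
```

The design point the paper stresses is that the interesting work starts after a table like `results` exists: choosing instance sources, deciding what "quality" and "burden" mean, and analyzing and presenting the comparison fairly.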
A general-purpose tunable landscape generator
IEEE Transactions on Evolutionary Computation, 2006
Abstract

Cited by 15 (3 self)
The research literature on metaheuristic and evolutionary computation has proposed a large number of algorithms for the solution of challenging real-world optimization problems. It is often not possible to study theoretically the performance of these algorithms unless significant assumptions are made on either the algorithm itself or the problems to which it is applied, or both. As a consequence, metaheuristics are typically evaluated empirically using a set of test problems. Unfortunately, relatively little attention has been given to the development of methodologies and tools for the large-scale empirical evaluation and/or comparison of metaheuristics. In this paper, we propose a landscape (test-problem) generator that can be used to generate optimization problem instances for continuous, bound-constrained optimization problems. The landscape generator is parameterized by a small number of parameters, and the values of these parameters have a direct and intuitive interpretation in terms of the geometric features of the landscapes that they produce. An experimental space is defined over algorithms and problems, via a tuple of parameters for any specified algorithm and problem class (here determined by the landscape generator). An experiment is then clearly specified as a point in this space, in a way that is analogous to other areas of experimental algorithmics, and more generally in experimental design. Experimental results are presented, demonstrating the use of the landscape generator. In particular, we analyze some simple, continuous estimation of distribution algorithms, and gain new insights into the behavior of these algorithms using the landscape generator.
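The core idea of a parameterized landscape generator, a few parameters with direct geometric meaning that define a family of bound-constrained test functions, can be illustrated with a minimal sketch. This is not the generator from the paper: the Gaussian-peak form, the parameter names (`n_peaks`, `smoothness`), and the bounds are assumptions; the paper's own parameterization differs, but peak count and width play analogous geometric roles.

```python
import math
import random

def make_landscape(n_dims, n_peaks, smoothness, seed):
    """
    Build a maximization test function on the box [0, 1]^n_dims as the
    pointwise max of n_peaks Gaussian components.
      - n_peaks controls the number of local optima (ruggedness),
      - smoothness controls how wide each basin of attraction is,
      - seed makes a given landscape instance reproducible.
    """
    rng = random.Random(seed)
    centers = [[rng.random() for _ in range(n_dims)] for _ in range(n_peaks)]
    heights = [rng.uniform(0.5, 1.0) for _ in range(n_peaks)]

    def f(x):
        best = 0.0
        for c, h in zip(centers, heights):
            sq = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
            best = max(best, h * math.exp(-sq / (2 * smoothness ** 2)))
        return best

    return f, centers, heights

# Generate one 2-D instance with 5 peaks of moderate width.
f, centers, heights = make_landscape(n_dims=2, n_peaks=5, smoothness=0.1, seed=42)
```

Because the peak locations and heights are known by construction, an experimenter can check how close an algorithm's best solution comes to the true global optimum, which is exactly the property that makes generated landscapes useful for controlled comparisons.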
Some Experimental and Theoretical Results on . . . , 1995
Abstract
We describe and analyze test case generators for the maximum clique problem (or equivalently for the maximum independent set or vertex cover problems). The generators produce graphs with a specified number of vertices and edges, and known maximum clique size. The experimental hardness of the test cases is evaluated in relation to several heuristics for the maximum clique problem, based on neural networks, and derived from the work of A. Jagota. Our results show that the hardness of the graphs produced by this method depends in a crucial way on the construction parameters; for a given edge density, challenging graphs can only be constructed using this method for a certain range of maximum clique values; the location of this range depends on the expected maximum clique size for random graphs of that density; the size of the range depends on the density of the graph. We also show that one of the algorithms, based on reinforcement learning techniques, has more success than the others at solving the test cases produced by the generators. In addition, NP-completeness reductions are presented showing that (in spite of what might be suggested by the results just mentioned) the maximum clique problem remains NP-hard even if the domain is restricted to graphs having a constant edge density and a constant ratio of the
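A common way to build clique test cases with a known clique size, in the spirit of the generators the abstract describes, is to plant a clique inside a random graph. This is an illustrative sketch, not the paper's construction: the function name and parameters are assumptions, and, as the abstract's findings suggest, the planted clique is only a meaningful "known maximum" when its size is large relative to the expected maximum clique of a random graph at that density, which this sketch does not verify.

```python
import itertools
import random

def planted_clique_graph(n, clique_size, p, seed):
    """
    Generate an n-vertex graph where each pair is joined independently with
    probability p (edge density), then force all pairs inside a randomly
    chosen clique_size-vertex set to be edges (the planted clique).
    Returns (vertices, edges, clique); edges are pairs (u, v) with u < v.
    """
    rng = random.Random(seed)
    vertices = list(range(n))
    clique = set(rng.sample(vertices, clique_size))
    edges = set()
    for u, v in itertools.combinations(vertices, 2):
        # Keep the pair if it falls in the planted clique or by the coin flip.
        if (u in clique and v in clique) or rng.random() < p:
            edges.add((u, v))
    return vertices, edges, clique

# One instance: 50 vertices, density 0.3, planted clique of size 10.
vertices, edges, clique = planted_clique_graph(n=50, clique_size=10, p=0.3, seed=1)
```

The abstract's central caveat maps directly onto the parameters here: for a fixed `p`, choosing `clique_size` too close to the natural maximum clique of a density-`p` random graph yields instances where the planted clique is camouflaged but may not be the unique maximum, while choosing it much larger yields instances that heuristics find easy.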