Results 1 - 5 of 5
Parameterized Average-Case Complexity of the Hypervolume Indicator
"... The hypervolume indicator (HYP) is a popular measure for the quality of a set of n solutions in R d. We discuss its asymptotic worstcase runtimes and several lower bounds depending on different complexitytheoretic assumptions. Assuming that P = NP, there is no algorithm with runtime poly(n, d). A ..."
Abstract

Cited by 3 (3 self)
 Add to MetaCart
(Show Context)
The hypervolume indicator (HYP) is a popular measure for the quality of a set of n solutions in R^d. We discuss its asymptotic worst-case runtimes and several lower bounds depending on different complexity-theoretic assumptions. Assuming that P ≠ NP, there is no algorithm with runtime poly(n, d). Assuming the exponential time hypothesis, there is no algorithm with runtime n^o(d). In contrast to these worst-case lower bounds, we study the average-case complexity of HYP for points distributed i.i.d. at random on a d-dimensional simplex. We present a general framework which translates any algorithm for HYP with worst-case runtime n^f(d) to an algorithm with worst-case runtime n^(f(d)+1) and fixed-parameter tractable (FPT) average-case runtime. This can be used to show that HYP can be solved in expected time O(d^(d^2/2) n + d n^2), which implies that HYP is FPT on average while it is W[1]-hard in the worst case. For constant dimension d this gives an algorithm for HYP with runtime O(n^2) on average. This is the first result proving that HYP is asymptotically easier in the average case. It gives a theoretical explanation why most HYP algorithms perform much better on average than their theoretical worst-case runtime predicts.
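To make concrete what HYP measures: in the bi-objective case (d = 2) the indicator can be computed exactly by a simple sweep in O(n log n) time. The sketch below assumes maximization and a reference point dominated by all solutions; the function name and example data are illustrative, not taken from the paper.

```python
def hypervolume_2d(points, ref):
    """Exact hypervolume (area) dominated by `points` relative to the
    reference point `ref`, assuming maximization in both objectives."""
    hv = 0.0
    prev_y = ref[1]
    # Sweep from the largest first objective downward; each point with a
    # new best second objective contributes one rectangular strip.
    for x, y in sorted(points, key=lambda p: p[0], reverse=True):
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# Three mutually non-dominated points with reference point (0, 0):
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (0, 0)))  # 6.0
```

For d > 2 no comparably simple exact sweep exists, which is what the worst-case lower bounds in the abstract refer to.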
Succinct Sampling from Discrete Distributions
"... We revisit the classic problem of sampling from a discrete distribution: Given n nonnegative wbit integers x1,..., xn, the task is to build a data structure that allows sampling i with probability proportional to xi. The classic solution is Walker’s alias method that takes, when implemented on a W ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
We revisit the classic problem of sampling from a discrete distribution: given n nonnegative w-bit integers x1, ..., xn, the task is to build a data structure that allows sampling i with probability proportional to xi. The classic solution is Walker's alias method, which takes, when implemented on a word RAM, O(n) preprocessing time, O(1) expected query time for one sample, and n(w + 2 lg n + o(1)) bits of space. Using the terminology of succinct data structures, this solution has redundancy 2n lg n + o(n) bits, i.e., it uses 2n lg n + o(n) bits in addition to the information-theoretic minimum required for storing the input. In this paper, we study whether this space usage can be improved. In the systematic case, in which the input is read-only, we present a novel data structure using r + O(w) redundant ...
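For reference, here is a minimal Python sketch of Walker's alias method in its textbook form, with the O(n) preprocessing and O(1) expected query time stated above; the space-optimized succinct structures contributed by the paper are not reproduced here.

```python
import random

def build_alias(weights):
    """Preprocess nonnegative weights (not all zero) into alias tables
    in O(n) time; this is the textbook form of Walker's alias method."""
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]   # scaled so the mean is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                          # s's leftover mass is covered by l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftovers equal 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias):
    """Draw i with probability proportional to weights[i], O(1) expected."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```

The n(w + 2 lg n + o(1))-bit figure corresponds to storing, per index, the input weight plus the two table entries above; the paper asks how much of that overhead is avoidable.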
Speeding Up Many-Objective Optimization by Monte Carlo Approximations, 2013
"... Many stateoftheart evolutionary vector optimization algorithms compute the contributing hypervolume for ranking candidate solutions. However, with an increasing number of objectives, calculating the volumes becomes intractable. Therefore, although hypervolumebased algorithms are often the method ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
Many state-of-the-art evolutionary vector optimization algorithms compute the contributing hypervolume for ranking candidate solutions. However, with an increasing number of objectives, calculating the volumes becomes intractable. Therefore, although hypervolume-based algorithms are often the method of choice for bicriteria optimization, they are regarded as not suitable for many-objective optimization. Recently, Monte Carlo methods have been derived and analyzed for approximating the contributing hypervolume. Turning theory into practice, we employ these results in the ranking procedure of the multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) as an example of a state-of-the-art method for vector optimization. It is empirically shown that the approximation does not impair the quality of the obtained solutions given a budget of objective function evaluations, while considerably reducing the computation time in the case of multiple objectives. These results are obtained on common benchmark functions as well as on two design optimization tasks. Thus, employing Monte Carlo approximations makes hypervolume-based algorithms applicable to many-objective optimization.
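The underlying idea of hit-or-miss Monte Carlo hypervolume estimation can be sketched as follows. This illustrative version estimates the plain hypervolume, whereas the cited work approximates the contributing hypervolume with a sharper, analyzed estimator; all names here are placeholders.

```python
import random

def mc_hypervolume(points, ref, upper, samples=100_000):
    """Hit-or-miss Monte Carlo estimate of the hypervolume dominated by
    `points` inside the box [ref, upper], assuming maximization and that
    `upper` weakly dominates every point."""
    d = len(ref)
    hits = 0
    for _ in range(samples):
        x = [random.uniform(ref[i], upper[i]) for i in range(d)]
        # x counts as a hit if some solution point dominates it
        if any(all(p[i] >= x[i] for i in range(d)) for p in points):
            hits += 1
    box_volume = 1.0
    for i in range(d):
        box_volume *= upper[i] - ref[i]
    return box_volume * hits / samples
```

The contributing hypervolume of a point p in a population P is HV(P) minus HV(P without p), which is the quantity the ranking procedure needs.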
Efficient Parent Selection for Approximation-Guided Evolutionary Multi-Objective Optimization
"... Abstract—The Pareto front of a multiobjective optimization problem is typically very large and can only be approximated. ApproximationGuided Evolution (AGE) is a recently presented evolutionary multiobjective optimization algorithm that aims at minimizing iteratively the approximation factor, whi ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
The Pareto front of a multi-objective optimization problem is typically very large and can only be approximated. Approximation-Guided Evolution (AGE) is a recently presented evolutionary multi-objective optimization algorithm that aims at iteratively minimizing the approximation factor, which measures how well the current population approximates the Pareto front. It outperforms state-of-the-art algorithms for problems with many objectives. However, AGE's performance is not competitive on problems with very few objectives. We study the reason for this behavior and observe that AGE selects parents uniformly at random, which has a detrimental effect on its performance. We then investigate different algorithm-specific selection strategies for AGE. The main difficulty here is finding a computationally efficient selection scheme which does not harm AGE's linear runtime in the number of objectives. We present several improved selection schemes that are computationally efficient and substantially improve AGE on low-dimensional objective spaces, while having no negative effect in high-dimensional objective spaces.
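The paper's own selection schemes are not quoted in this abstract; to illustrate the design space, the sketch below contrasts the uniform baseline criticized above with a generic binary tournament. The quality `key` is a hypothetical stand-in, not AGE's actual selection criterion.

```python
import random

def select_uniform(population):
    """Baseline: every individual is equally likely to become a parent,
    regardless of quality (the scheme the paper finds detrimental)."""
    return random.choice(population)

def select_tournament(population, key, k=2):
    """Generic k-ary tournament: draw k individuals uniformly and keep
    the best under `key`, a hypothetical quality score."""
    return max(random.sample(population, k), key=key)
```

Both run in time independent of the number of objectives once `key` is precomputed, which is the efficiency constraint the abstract emphasizes.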
Multiplicative Approximations, Optimal Hypervolume Distributions, and the Choice of the Reference Point, 2014
"... Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multiobjective problems as the population of such an algorithm can be used to represent the tradeoffs with res ..."
Abstract
 Add to MetaCart
Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems, as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing µ points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations.
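For readers unfamiliar with the comparison target, the following sketch computes the multiplicative approximation factor of a solution set with respect to a finite sample of a Pareto front, assuming maximization and strictly positive objective values; the sampled linear front in the example is illustrative, not one of the paper's proofs.

```python
def approximation_ratio(solution, front):
    """Multiplicative approximation factor of `solution` with respect to
    a finite sample of the Pareto `front`, assuming maximization and
    strictly positive objective values: the smallest alpha such that
    every front point p satisfies p[i] <= alpha * x[i] in all objectives
    i for some solution point x."""
    return max(
        min(max(p[i] / x[i] for i in range(len(p))) for x in solution)
        for p in front
    )

# Illustrative check on a sampled linear front x + y = 1 with mu = 2 points:
front = [(i / 100, 1 - i / 100) for i in range(1, 100)]
print(approximation_ratio([(0.25, 0.75), (0.75, 0.25)], front))  # 2.0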