Results 1 – 10 of 11
Approximation Quality of the Hypervolume Indicator
, 2012
Abstract

Cited by 5 (5 self)
In order to allow a comparison of (otherwise incomparable) sets, many evolutionary multiobjective optimizers use indicator functions to guide the search and to evaluate the performance of search algorithms. The most widely used indicator is the hypervolume indicator. It measures the volume of the dominated portion of the objective space bounded from below by a reference point. Though the hypervolume indicator is very popular, it has not been shown that maximizing the hypervolume indicator of sets of bounded size is indeed equivalent to the overall objective of finding a good approximation of the Pareto front. To address this question, we compare the optimal approximation ratio with the approximation ratio achieved by two-dimensional sets maximizing the hypervolume indicator. We bound the optimal multiplicative approximation ratio of n points by 1 + Θ(1/n) for arbitrary Pareto fronts. Furthermore, we prove that the same asymptotic approximation ratio is achieved by sets of n points that maximize the hypervolume indicator. However, there is a provable gap between the two approximation ratios which is even exponential in the ratio between the largest and the smallest value of the front. We also examine the additive approximation ratio of the hypervolume indicator in two dimensions and prove that it achieves the optimal additive approximation ratio apart from a small factor of n/(n−2), where n is the size of the population. Hence the hypervolume indicator can be used to achieve a good additive but not a good multiplicative approximation of a Pareto front. This motivates the introduction of a “logarithmic hypervolume indicator” which provably achieves a good multiplicative approximation ratio.
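The two-dimensional hypervolume described in this abstract can be computed exactly with a simple sweep. A minimal sketch for 2-objective minimization, assuming the reference point is worse than every solution in both objectives (all names are illustrative, not taken from the paper):

```python
def hypervolume_2d(points, ref):
    """Hypervolume (area) dominated by `points` and bounded by the
    reference point `ref`, for 2-objective minimization. O(n log n)."""
    hv = 0.0
    prev_y = ref[1]
    # Sort by first objective ascending; dominated points are then
    # exactly those whose second objective does not improve on prev_y.
    for x, y in sorted(points):
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # → 11.0
```

The sweep adds, for each non-dominated point, the rectangle it dominates exclusively of its left neighbours, so dominated points contribute nothing.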
Convergence of Hypervolume-Based Archiving Algorithms II: Competitiveness
Abstract

Cited by 4 (1 self)
We study the convergence behavior of (µ + λ)-archiving algorithms. A (µ + λ)-archiving algorithm defines how to choose, in each generation, µ children from µ parents and λ offspring together. Archiving algorithms have to choose individuals online, without knowing future offspring. Previous studies assumed the offspring generation to be best-case. We assume the initial population and the offspring generation to be worst-case and use the competitive ratio to measure how much smaller the hypervolumes found by an archiving algorithm are due to not knowing the future in advance. We prove that all archiving algorithms which increase the hypervolume in each step (if they can) are only µ-competitive. We also present a new archiving algorithm which is (4 + 2/µ)-competitive. This algorithm not only achieves a constant competitive ratio, but is also efficiently computable. Both properties provably do not hold for the commonly used greedy archiving algorithms, for example those used in SIBEA, SMS-EMOA, or the generational MO-CMA-ES.
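The greedy archiving rule analyzed above — repeatedly evicting the point with the smallest hypervolume contribution — can be sketched for the 2-objective minimization case as follows (the helper `hv2d` and all names are illustrative, not code from the paper):

```python
def hv2d(points, ref):
    """2-D hypervolume for minimization (sweep over sorted points)."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(points):
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def greedy_archive(parents, offspring, mu, ref):
    """Greedy (µ + λ)-archiving: merge parents and offspring, then
    repeatedly evict the point whose removal loses the least
    hypervolume, until µ points remain."""
    pop = list(parents) + list(offspring)
    while len(pop) > mu:
        total = hv2d(pop, ref)
        # contribution of p = total hypervolume minus hypervolume without p
        victim = min(pop, key=lambda p: total - hv2d([q for q in pop if q is not p], ref))
        pop.remove(victim)
    return pop
```

A dominated offspring has contribution zero and is evicted first; the paper's point is that this locally greedy rule can be far from the best achievable hypervolume against a worst-case offspring sequence.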
Parameterized Average-Case Complexity of the Hypervolume Indicator
Abstract

Cited by 3 (3 self)
The hypervolume indicator (HYP) is a popular measure for the quality of a set of n solutions in R^d. We discuss its asymptotic worst-case runtimes and several lower bounds depending on different complexity-theoretic assumptions. Assuming that P ≠ NP, there is no algorithm with runtime poly(n, d). Assuming the exponential time hypothesis, there is no algorithm with runtime n^{o(d)}. In contrast to these worst-case lower bounds, we study the average-case complexity of HYP for points distributed i.i.d. at random on a d-dimensional simplex. We present a general framework which translates any algorithm for HYP with worst-case runtime n^{f(d)} into an algorithm with worst-case runtime n^{f(d)+1} and fixed-parameter tractable (FPT) average-case runtime. This can be used to show that HYP can be solved in expected time O(d^{d²/2} n + d n²), which implies that HYP is FPT on average while it is W[1]-hard in the worst case. For constant dimension d this gives an algorithm for HYP with runtime O(n²) on average. This is the first result proving that HYP is asymptotically easier in the average case. It gives a theoretical explanation why most HYP algorithms perform much better on average than their theoretical worst-case runtime predicts.
Succinct Sampling from Discrete Distributions
Abstract

Cited by 3 (1 self)
We revisit the classic problem of sampling from a discrete distribution: given n non-negative w-bit integers x_1, ..., x_n, the task is to build a data structure that allows sampling i with probability proportional to x_i. The classic solution is Walker’s alias method, which takes, when implemented on a Word RAM, O(n) preprocessing time, O(1) expected query time for one sample, and n(w + 2 lg n + o(1)) bits of space. Using the terminology of succinct data structures, this solution has redundancy 2n lg n + o(n) bits, i.e., it uses 2n lg n + o(n) bits in addition to the information-theoretic minimum required for storing the input. In this paper, we study whether this space usage can be improved. In the systematic case, in which the input is read-only, we present a novel data structure using r + O(w) redundant ...
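Walker's alias method mentioned in the abstract can be sketched as follows. This is the standard textbook construction (O(n) preprocessing, O(1) per sample), not the paper's succinct variant; all names are illustrative:

```python
import random

def build_alias(weights):
    """Walker's alias method: returns (prob, alias) tables such that
    sampling a uniform cell i and keeping it with probability prob[i]
    (else returning alias[i]) draws i with probability weights[i]/sum."""
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]  # mean-1 scaled weights
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l       # fill cell s up to 1 using l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                    # numerical leftovers
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias, rng=random):
    """One O(1) draw from the prepared alias tables."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```

Each of the n cells holds probability mass from at most two indices, which is what makes constant-time sampling possible; the space cost of the two extra tables is exactly the redundancy the paper sets out to reduce.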
An Evolutionary Many-Objective Optimization Algorithm Based on Dominance and Decomposition
, 2015
Abstract

Cited by 2 (1 self)
Achieving balance between convergence and diversity is a key issue in evolutionary multiobjective optimization. Most existing methodologies, which have demonstrated their niche on various practical problems involving two and three objectives, face significant challenges in many-objective optimization. This paper suggests a unified paradigm, which combines dominance- and decomposition-based approaches, for many-objective optimization. Our major purpose is to exploit the merits of both dominance- and decomposition-based approaches to balance the convergence and diversity of the evolutionary process. The performance of our proposed method is validated and compared with four state-of-the-art algorithms on a number of unconstrained benchmark problems with up to 15 objectives. Empirical results fully demonstrate the superiority of our proposed method on all considered test instances. In addition, we extend this method to solve constrained problems having a large number of objectives. Compared to two other recently proposed constrained optimizers, our proposed method shows highly competitive performance on all the constrained optimization problems.
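Pareto dominance, the relation underlying the dominance-based half of such approaches, can be made concrete in a few lines (minimization convention; a naive O(n²) sketch, names illustrative):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization) iff a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(pop):
    """Keep only solutions not dominated by any other member."""
    return [p for p in pop if not any(dominates(q, p) for q in pop)]
```

With many objectives, almost all solutions become mutually non-dominated, which is exactly why pure dominance-based selection loses its discriminating power and decomposition-based criteria are brought in.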
Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization
 J GLOB OPTIM
, 2014
Speeding Up Many-Objective Optimization by Monte Carlo Approximations
, 2013
Abstract

Cited by 2 (2 self)
Many state-of-the-art evolutionary vector optimization algorithms compute the contributing hypervolume for ranking candidate solutions. However, with an increasing number of objectives, calculating the volumes becomes intractable. Therefore, although hypervolume-based algorithms are often the method of choice for bicriteria optimization, they are regarded as not suitable for many-objective optimization. Recently, Monte Carlo methods have been derived and analyzed for approximating the contributing hypervolume. Turning theory into practice, we employ these results in the ranking procedure of the multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) as an example of a state-of-the-art method for vector optimization. It is empirically shown that the approximation does not impair the quality of the obtained solutions given a budget of objective function evaluations, while considerably reducing the computation time in the case of multiple objectives. These results are obtained on common benchmark functions as well as on two design optimization tasks. Thus, employing Monte Carlo approximations makes hypervolume-based algorithms applicable to many-objective optimization.
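Monte Carlo approximation of a point's contributing hypervolume can be sketched as a generic rejection-sampling estimator (minimization convention; this is the basic idea, not the paper's exact scheme, and all names are illustrative):

```python
import random

def mc_contribution(point, others, ref, n_samples=100_000, rng=random):
    """Estimate the hypervolume contribution of `point`: sample
    uniformly in the box [point, ref]; a sample counts iff no other
    point dominates it, i.e. the region is covered only by `point`."""
    box_vol = 1.0
    for p_i, r_i in zip(point, ref):
        box_vol *= r_i - p_i
    hits = 0
    for _ in range(n_samples):
        z = [p_i + rng.random() * (r_i - p_i) for p_i, r_i in zip(point, ref)]
        if not any(all(q_i <= z_i for q_i, z_i in zip(q, z)) for q in others):
            hits += 1
    return box_vol * hits / n_samples
```

Each sample costs O(d·µ) regardless of dimension, whereas exact contribution computation grows exponentially in d, which is the trade-off the paper exploits inside the MO-CMA-ES ranking step.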
Efficient Parent Selection for Approximation-Guided Evolutionary Multi-Objective Optimization
Abstract

Cited by 1 (1 self)
The Pareto front of a multiobjective optimization problem is typically very large and can only be approximated. Approximation-Guided Evolution (AGE) is a recently presented evolutionary multiobjective optimization algorithm that aims at iteratively minimizing the approximation factor, which measures how well the current population approximates the Pareto front. It outperforms state-of-the-art algorithms for problems with many objectives. However, AGE’s performance is not competitive on problems with very few objectives. We study the reason for this behavior and observe that AGE selects parents uniformly at random, which has a detrimental effect on its performance. We then investigate different algorithm-specific selection strategies for AGE. The main difficulty here is finding a computationally efficient selection scheme which does not harm AGE’s linear runtime in the number of objectives. We present several improved selection schemes that are computationally efficient and substantially improve AGE on low-dimensional objective spaces, but have no negative effect in high-dimensional objective spaces.
Towards efficient multiobjective optimization: multiobjective statistical criterions
 In: IEEE World Congress on Computational Intelligence
, 2012
"... is widely spread in engineering design to reduce the number of computational expensive simulations. However, “realworld” problems often consist of multiple, conflicting objectives leading to a set of equivalent solutions (the Pareto front). The objectives are often aggregated into a single cost fun ..."
Abstract

Cited by 1 (1 self)
Surrogate-Based Optimization (SBO) is widely spread in engineering design to reduce the number of computationally expensive simulations. However, “real-world” problems often consist of multiple, conflicting objectives leading to a set of equivalent solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of making those decisions upfront). Most of the work in multiobjective optimization is focused on Multi-Objective Evolutionary Algorithms (MOEAs). While MOEAs are well-suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as Multi-Objective Surrogate-Based Optimization (MOSBO), may prove to be even more worthwhile than SBO methods to expedite the optimization process. In this paper, the authors propose the Efficient Multiobjective Optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the expected improvement and probability of improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II and SPEA2 multiobjective optimization methods, with promising results. Index Terms—multiobjective optimization, Kriging, expected improvement, probability of improvement
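The classic single-objective expected improvement criterion, which this line of work generalizes to multiobjective Pareto criteria, can be written down directly from the Kriging predictive mean and standard deviation (standard textbook formula; the paper's multiobjective versions are more involved):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI for minimization given a Gaussian prediction N(mu, sigma^2)
    at a candidate and the best observed value f_best:
        EI = (f_best - mu) * Phi(z) + sigma * phi(z),
        z  = (f_best - mu) / sigma."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)      # deterministic prediction
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # std normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # std normal cdf
    return (f_best - mu) * Phi + sigma * phi
```

EI is large where the model predicts either a good mean or high uncertainty, which is how such criteria steer the few affordable simulations toward promising yet unexplored regions.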