Results 1 - 10 of 11
Approximation Quality of the Hypervolume Indicator, 2012
"... In order to allow a comparison of (otherwise incomparable) sets, many evolutionary multiobjective optimizers use indicator functions to guide the search and to evaluate the performance of search algorithms. The most widely used indicator is the hypervolume indicator. It measures the volume of the do ..."
Cited by 5 (5 self)
In order to allow a comparison of (otherwise incomparable) sets, many evolutionary multiobjective optimizers use indicator functions to guide the search and to evaluate the performance of search algorithms. The most widely used indicator is the hypervolume indicator. It measures the volume of the dominated portion of the objective space bounded from below by a reference point. Though the hypervolume indicator is very popular, it has not been shown that maximizing the hypervolume indicator of sets of bounded size is indeed equivalent to the overall objective of finding a good approximation of the Pareto front. To address this question, we compare the optimal approximation ratio with the approximation ratio achieved by two-dimensional sets maximizing the hypervolume indicator. We bound the optimal multiplicative approximation ratio of n points by 1 + Θ(1/n) for arbitrary Pareto fronts. Furthermore, we prove that the same asymptotic approximation ratio is achieved by sets of n points that maximize the hypervolume indicator. However, there is a provable gap between the two approximation ratios which is even exponential in the ratio between the largest and the smallest value of the front. We also examine the additive approximation ratio of the hypervolume indicator in two dimensions and prove that it achieves the optimal additive approximation ratio apart from a small ratio of √(n/(n−2)), where n is the size of the population. Hence the hypervolume indicator can be used to achieve a good additive but not a good multiplicative approximation of a Pareto front. This motivates the introduction of a “logarithmic hypervolume indicator” which provably achieves a good multiplicative approximation ratio.
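As a concrete illustration of the indicator defined in this abstract, the following minimal Python sketch computes the two-dimensional hypervolume of a point set with respect to a reference point, assuming both objectives are maximized; the function name and conventions are illustrative assumptions, not taken from the paper.

    def hypervolume_2d(points, ref):
        """Area dominated by `points` with respect to `ref` (maximization)."""
        # keep only points that strictly dominate the reference point
        pts = [(x, y) for x, y in points if x > ref[0] and y > ref[1]]
        # sweep from large to small first objective; for equal x keep larger y first
        pts.sort(key=lambda p: (-p[0], -p[1]))
        hv, best_y = 0.0, ref[1]
        for x, y in pts:
            if y > best_y:                      # point is not dominated so far
                hv += (x - ref[0]) * (y - best_y)   # add the new horizontal strip
                best_y = y
        return hv

For example, hypervolume_2d([(3, 1), (2, 2), (1, 3)], (0, 0)) returns 6.0, the area of the union of the three dominated rectangles.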
Convergence of Hypervolume-Based Archiving Algorithms II: Competitiveness
"... We study the convergence behavior of (µ + λ)-archiving algorithms. A (µ + λ)-archiving algorithm defines how to choose in each generationµchildren fromµparents andλoffspring together. Archiving algorithms have to choose individuals online without knowing future offspring. Previous studies assumed th ..."
Cited by 4 (1 self)
We study the convergence behavior of (µ + λ)-archiving algorithms. A (µ + λ)-archiving algorithm defines how to choose in each generation µ children from the µ parents and λ offspring together. Archiving algorithms have to choose individuals online, without knowing future offspring. Previous studies assumed the offspring generation to be best-case. We assume the initial population and the offspring generation to be worst-case and use the competitive ratio to measure how much smaller the hypervolume found by an archiving algorithm can be because it does not know the future in advance. We prove that all archiving algorithms which increase the hypervolume in each step (if they can) are only µ-competitive. We also present a new archiving algorithm which is (4 + 2/µ)-competitive. This algorithm not only achieves a constant competitive ratio, but is also efficiently computable. Both properties provably do not hold for the commonly used greedy archiving algorithms, for example those used in SIBEA, SMS-EMOA, or the generational MO-CMA-ES.
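To make the setting concrete, here is a minimal two-dimensional sketch (maximization, illustrative names) of the kind of greedy hypervolume-based archiving step used by algorithms such as SMS-EMOA: from the µ parents and λ offspring, repeatedly drop the point whose removal loses the least hypervolume. This is the greedy rule the abstract refers to, not the (4 + 2/µ)-competitive algorithm proposed in the paper.

    def hv2d(points, ref):
        """2-D hypervolume (area) dominated by `points` w.r.t. `ref`, maximization."""
        pts = sorted((p for p in points if p[0] > ref[0] and p[1] > ref[1]),
                     key=lambda p: (-p[0], -p[1]))
        hv, best_y = 0.0, ref[1]
        for x, y in pts:
            if y > best_y:
                hv += (x - ref[0]) * (y - best_y)
                best_y = y
        return hv

    def greedy_archive(parents, offspring, mu, ref):
        """One (mu + lambda) archiving step: repeatedly remove the point whose
        removal costs the least hypervolume until mu points remain."""
        pop = list(parents) + list(offspring)
        while len(pop) > mu:
            total = hv2d(pop, ref)
            # contribution of each point = hypervolume lost if it is removed
            contrib = [total - hv2d(pop[:i] + pop[i + 1:], ref) for i in range(len(pop))]
            pop.pop(contrib.index(min(contrib)))
        return pop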
Parameterized Average-Case Complexity of the Hypervolume Indicator
"... The hypervolume indicator (HYP) is a popular measure for the quality of a set of n solutions in R d. We discuss its asymptotic worst-case runtimes and several lower bounds depending on different complexity-theoretic assumptions. Assuming that P = NP, there is no algorithm with runtime poly(n, d). A ..."
Cited by 3 (3 self)
The hypervolume indicator (HYP) is a popular measure for the quality of a set of n solutions in R^d. We discuss its asymptotic worst-case runtimes and several lower bounds depending on different complexity-theoretic assumptions. Assuming that P ≠ NP, there is no algorithm with runtime poly(n, d). Assuming the exponential time hypothesis, there is no algorithm with runtime n^{o(d)}. In contrast to these worst-case lower bounds, we study the average-case complexity of HYP for points distributed i.i.d. at random on a d-dimensional simplex. We present a general framework which translates any algorithm for HYP with worst-case runtime n^{f(d)} into an algorithm with worst-case runtime n^{f(d)+1} and fixed-parameter tractable (FPT) average-case runtime. This can be used to show that HYP can be solved in expected time O(d^{d²/2} n + d n²), which implies that HYP is FPT on average while it is W[1]-hard in the worst case. For constant dimension d this gives an algorithm for HYP with runtime O(n²) on average. This is the first result proving that HYP is asymptotically easier in the average case. It gives a theoretical explanation why most HYP algorithms perform much better on average than their theoretical worst-case runtime predicts.
Succinct Sampling from Discrete Distributions
"... We revisit the classic problem of sampling from a discrete distribution: Given n non-negative w-bit integers x1,..., xn, the task is to build a data structure that allows sampling i with probability proportional to xi. The classic solution is Walker’s alias method that takes, when implemented on a W ..."
Cited by 3 (1 self)
We revisit the classic problem of sampling from a discrete distribution: given n non-negative w-bit integers x_1, ..., x_n, the task is to build a data structure that allows sampling i with probability proportional to x_i. The classic solution is Walker’s alias method that takes, when implemented on a Word RAM, O(n) preprocessing time, O(1) expected query time for one sample, and n(w + 2 lg n + o(1)) bits of space. Using the terminology of succinct data structures, this solution has redundancy 2n lg n + o(n) bits, i.e., it uses 2n lg n + o(n) bits in addition to the information-theoretic minimum required for storing the input. In this paper, we study whether this space usage can be improved. In the systematic case, in which the input is read-only, we present a novel data structure using r + O(w) redundant bits …
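For reference, a compact Python sketch of Walker's alias method mentioned above (illustrative names, no attempt at the succinct space bounds the paper studies): preprocessing pairs each under-full slot with an over-full one, and a query needs one uniform index plus one biased coin flip.

    import random

    def build_alias(weights):
        """O(n) preprocessing: returns the (prob, alias) tables of Walker's method."""
        n = len(weights)
        total = sum(weights)
        scaled = [w * n / total for w in weights]          # rescaled to mean 1
        prob, alias = [0.0] * n, [0] * n
        small = [i for i, s in enumerate(scaled) if s < 1.0]
        large = [i for i, s in enumerate(scaled) if s >= 1.0]
        while small and large:
            s, l = small.pop(), large.pop()
            prob[s], alias[s] = scaled[s], l               # fill slot s with part of l
            scaled[l] -= 1.0 - scaled[s]
            (small if scaled[l] < 1.0 else large).append(l)
        for i in large + small:                            # numerical leftovers
            prob[i] = 1.0
        return prob, alias

    def sample(prob, alias):
        """Draw one index in O(1) time."""
        i = random.randrange(len(prob))
        return i if random.random() < prob[i] else alias[i]

For example, with build_alias([1, 3, 6]) a subsequent sample() call returns index 2 with probability 0.6.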
An Evolutionary Many-Objective Optimization Algorithm Based on Dominance and Decomposition, 2015
"... Achieving balance between convergence and diversity is a key issue in evolutionary multiobjective optimization. Most existing methodologies, which have demonstrated their niche on various practical problems involving two and three objectives, face significant challenges in many-objective optimizati ..."
Cited by 2 (1 self)
Achieving balance between convergence and diversity is a key issue in evolutionary multiobjective optimization. Most existing methodologies, which have demonstrated their niche on various practical problems involving two and three objectives, face significant challenges in many-objective optimization. This paper suggests a unified paradigm, which combines dominance- and decomposition-based approaches, for many-objective optimization. Our major purpose is to exploit the merits of both dominance- and decomposition-based approaches to balance the convergence and diversity of the evolutionary process. The performance of our proposed method is validated and compared with four state-of-the-art algorithms on a number of unconstrained benchmark problems with up to 15 objectives. Empirical results fully demonstrate the superiority of our proposed method on all considered test instances. In addition, we extend this method to solve constrained problems having a large number of objectives. Compared to two other recently proposed constrained optimizers, our proposed method shows highly competitive performance on all the constrained optimization problems.
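The two ingredients combined in such a paradigm can be sketched generically; the snippet below shows a Pareto dominance test and a weighted Tchebycheff scalarization as one common decomposition function (both for minimization). This is only an illustration of the building blocks, not the specific decomposition scheme used by the proposed method.

    def dominates(a, b):
        """Pareto dominance for minimization: a is at least as good everywhere
        and strictly better somewhere."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def tchebycheff(f, weight, ideal):
        """Weighted Tchebycheff scalarization of objective vector f; smaller is better."""
        return max(w * abs(fi - zi) for fi, w, zi in zip(f, weight, ideal))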
Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization - J GLOB OPTIM, 2014
"... ..."
Speeding Up Many-Objective Optimization by Monte Carlo Approximations, 2013
"... Many state-of-the-art evolutionary vector optimization algorithms compute the contributing hypervolume for ranking candidate solutions. However, with an increasing number of objectives, calculating the volumes becomes intractable. Therefore, although hypervolume-based algorithms are often the method ..."
Cited by 2 (2 self)
Many state-of-the-art evolutionary vector optimization algorithms compute the contributing hypervolume for ranking candidate solutions. However, with an increasing number of objectives, calculating the volumes becomes intractable. Therefore, although hypervolume-based algorithms are often the method of choice for bi-criteria optimization, they are regarded as not suitable for many-objective optimization. Recently, Monte Carlo methods have been derived and analyzed for approximating the contributing hypervolume. Turning theory into practice, we employ these results in the ranking procedure of the multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) as an example of a state-of-the-art method for vector optimization. It is empirically shown that the approximation does not impair the quality of the obtained solutions given a budget of objective function evaluations, while considerably reducing the computation time in the case of multiple objectives. These results are obtained on common benchmark functions as well as on two design optimization tasks. Thus, employing Monte Carlo approximations makes hypervolume-based algorithms applicable to many-objective optimization.
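The approximation idea can be sketched as a generic hit-or-miss estimator: sample uniformly inside the box spanned by the reference point and the candidate solution, and count the fraction of samples dominated by no other population member. The sketch below (maximization, illustrative names) is not the specific estimator or MO-CMA-ES integration from the paper.

    import random

    def mc_contribution(p, others, ref, samples=100_000, rng=random):
        """Monte Carlo estimate of the hypervolume contribution of point `p`
        (maximization): volume dominated by `p` w.r.t. `ref` but by no point in `others`."""
        box = 1.0
        for pi, ri in zip(p, ref):
            box *= pi - ri                       # volume of the box [ref, p]
        hits = 0
        for _ in range(samples):
            z = [ri + rng.random() * (pi - ri) for pi, ri in zip(p, ref)]
            # z is exclusively in p's region if no other point dominates it
            if not any(all(qi >= zi for qi, zi in zip(q, z)) for q in others):
                hits += 1
        return box * hits / samples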
Efficient Parent Selection for Approximation-Guided Evolutionary Multi-Objective Optimization
"... Abstract—The Pareto front of a multi-objective optimization problem is typically very large and can only be approximated. Approximation-Guided Evolution (AGE) is a recently presented evolutionary multi-objective optimization algorithm that aims at minimizing iteratively the approximation factor, whi ..."
Cited by 1 (1 self)
The Pareto front of a multi-objective optimization problem is typically very large and can only be approximated. Approximation-Guided Evolution (AGE) is a recently presented evolutionary multi-objective optimization algorithm that aims at iteratively minimizing the approximation factor, which measures how well the current population approximates the Pareto front. It outperforms state-of-the-art algorithms for problems with many objectives. However, AGE’s performance is not competitive on problems with very few objectives. We study the reason for this behavior and observe that AGE selects parents uniformly at random, which has a detrimental effect on its performance. We then investigate different algorithm-specific selection strategies for AGE. The main difficulty here is finding a computationally efficient selection scheme which does not harm AGE’s linear runtime in the number of objectives. We present several improved selection schemes that are computationally efficient and substantially improve AGE on low-dimensional objective spaces, but have no negative effect in high-dimensional objective spaces.
T.: Towards efficient multiobjective optimization: multiobjective statistical criterions - In: IEEE World Congress on Computational Intelligence, 2012
"... is widely spread in engineering design to reduce the number of computational expensive simulations. However, “real-world” problems often consist of multiple, conflicting objectives leading to a set of equivalent solutions (the Pareto front). The objectives are often aggregated into a single cost fun ..."
Cited by 1 (1 self)
Surrogate-based optimization (SBO) is widely spread in engineering design to reduce the number of computationally expensive simulations. However, “real-world” problems often consist of multiple, conflicting objectives leading to a set of equivalent solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which can be used by the designer to make more efficient design decisions (instead of making those decisions upfront). Most of the work in multiobjective optimization is focused on MultiObjective Evolutionary Algorithms (MOEAs). While MOEAs are well-suited to handle large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitively expensive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as MultiObjective Surrogate-Based Optimization (MOSBO), may prove to be even more worthwhile than SBO methods to expedite the optimization process. In this paper, the authors propose the Efficient Multiobjective Optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the expected improvement and probability of improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II and SPEA2 multiobjective optimization methods, with promising results. Index Terms—multiobjective optimization, Kriging, expected improvement, probability of improvement
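The multiobjective criteria mentioned above generalize the classic single-objective expected improvement; as background, a minimal sketch of that single-objective formula (for minimization, given a Kriging mean and standard deviation) is shown below. The paper's multiobjective versions integrate such criteria over the region not dominated by the current Pareto front and are not reproduced here.

    from math import erf, exp, pi, sqrt

    def expected_improvement(mu, sigma, f_best):
        """Single-objective expected improvement for minimization, given the
        Kriging prediction `mu`, its standard deviation `sigma`, and the best
        observed value `f_best`: EI = (f_best - mu) * Phi(z) + sigma * phi(z)."""
        if sigma <= 0.0:
            return max(f_best - mu, 0.0)
        z = (f_best - mu) / sigma
        Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))      # standard normal CDF
        phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)    # standard normal PDF
        return (f_best - mu) * Phi + sigma * phi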