Results 1–10 of 14
Approximation-Guided Evolutionary Multi-Objective Optimization
 PROCEEDINGS OF THE TWENTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE
, 2011
Cited by 14 (6 self)
Multi-objective optimization problems arise frequently in applications but can often only be solved approximately by heuristic approaches. Evolutionary algorithms have been widely used to tackle multi-objective problems. These algorithms use different measures to ensure diversity in the objective space but are not guided by a formal notion of approximation. We present a new framework for an evolutionary multi-objective optimization algorithm that works with a formal notion of approximation. Our experimental results show that our approach outperforms state-of-the-art evolutionary algorithms in the quality of the approximation obtained, in particular for problems with many objectives.
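The "formal notion of approximation" such algorithms optimize is typically the additive ε-indicator: the smallest amount by which the population must be improved in every objective to dominate each reference point. A minimal sketch (function name and minimization convention are illustrative, not taken from the paper):

```python
def additive_epsilon(population, reference):
    """Smallest eps such that every reference point is dominated by some
    population member shifted down by eps (all objectives minimized)."""
    return max(
        min(max(a - r for a, r in zip(p, ref_pt)) for p in population)
        for ref_pt in reference
    )

# Two population points approximate the middle of the front to within eps = 1:
additive_epsilon([(0, 2), (2, 0)], [(1, 1)])  # -> 1
```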
Hypervolume-based Multiobjective Optimization: Theoretical Foundations and Practical Implications
 THEORETICAL COMPUTER SCIENCE
, 2011
Cited by 10 (4 self)
In recent years, indicator-based evolutionary algorithms, which allow user preferences to be incorporated implicitly into the search, have become widely used in practice to solve multiobjective optimization problems. When using this type of method, the optimization goal changes from optimizing a set of objective functions simultaneously to the single-objective goal of finding a set of µ points that maximizes the underlying indicator. Understanding the difference between these two optimization goals is fundamental when applying indicator-based algorithms in practice. On the one hand, a characterization of the inherent optimization goal of different indicators allows the user to choose the indicator that meets her preferences. On the other hand, knowledge about the sets of µ points with optimal indicator values, so-called optimal µ-distributions, can be used in performance assessment whenever the indicator is used as a performance criterion. However, theoretical studies on indicator-based optimization are sparse. One of the most popular indicators is the weighted hypervolume indicator. It allows the search to be guided towards user-defined objective space regions and at the same time has the property of being a refinement of the Pareto dominance relation, with the result that maximizing the indicator yields Pareto-optimal solutions only. In previous work, we theoretically investigated the unweighted hypervolume indicator in terms of a characterization of optimal µ-distributions and the influence of the hypervolume's reference point for general bi-objective optimization problems.
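For the bi-objective case the (unweighted) hypervolume indicator can be computed with a simple sweep over the points sorted by the first objective. The sketch below assumes minimization of both objectives; the helper name is hypothetical:

```python
def hypervolume_2d(points, ref):
    """Area dominated by `points` and bounded by reference point `ref`
    (both objectives minimized)."""
    # Consider only points that strictly dominate the reference point.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    area, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                        # dominated points add nothing
            area += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return area

hypervolume_2d([(1, 3), (2, 2)], ref=(4, 4))  # -> 5.0
```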
On sequential online archiving of objective vectors
 In Evolutionary Multi-Criterion Optimization (EMO 2011)
, 2011
Cited by 9 (3 self)
In this paper, we examine the problem of maintaining an approximation of the set of non-dominated points visited during a multiobjective optimization, a problem commonly known as archiving. Most of the currently available archiving algorithms are reviewed, and what is known about their convergence and approximation properties is summarized. The main scenario considered is the restricted case where the archive must be updated online as points are generated one by one, and at most a fixed number of points may be stored in the archive at any one time. In this scenario, the monotonicity of an archiving algorithm is proposed as a weaker, but more practical, property than negative efficiency preservation. This paper shows that hypervolume-based archivers and a recently proposed multi-level grid archiver have this property. On the other hand, the archiving methods used by SPEA2 and NSGA-II do not, and they may deteriorate with time. The monotonicity property has meaning on any input sequence of points. We also classify archivers according to limit properties, i.e., convergence and approximation properties of the archiver in the limit of infinite (input) samples from a finite space with strictly positive generation probabilities for all points. This paper establishes a number of research questions, and provides the initial framework and analysis for answering them.
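A hypervolume-based archiver of the kind analyzed here can be sketched for two minimized objectives: dominated points are discarded, and when the size bound is exceeded the point with the smallest exclusive hypervolume contribution is evicted. All names below are illustrative, not the paper's:

```python
def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def contributions(points, ref):
    """Exclusive 2-D hypervolume contribution of each point in a mutually
    non-dominated set (minimization, reference point `ref`)."""
    pts = sorted(points)
    contrib = {}
    for i, (x, y) in enumerate(pts):
        x_next = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        y_prev = pts[i - 1][1] if i > 0 else ref[1]
        contrib[(x, y)] = (x_next - x) * (y_prev - y)
    return contrib

def archive_update(archive, p, capacity, ref):
    """One online step: insert p, drop dominated points, and if the size
    bound is exceeded evict the smallest hypervolume contributor."""
    if any(dominates(a, p) for a in archive):
        return archive
    archive = [a for a in archive if not dominates(p, a)] + [p]
    if len(archive) > capacity:
        c = contributions(archive, ref)
        archive.remove(min(archive, key=c.__getitem__))
    return archive
```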
Tight bounds for the approximation ratio of the hypervolume indicator
 In Proc. 11th International Conference on Parallel Problem Solving from Nature (PPSN XI), volume 6238 of LNCS
, 2010
Cited by 7 (5 self)
The hypervolume indicator is widely used to guide the search and to evaluate the performance of evolutionary multiobjective optimization algorithms. It measures the volume of the dominated portion of the objective space, which is considered to give a good approximation of the Pareto front. Surprisingly little is known theoretically about the quality of this approximation. We examine the multiplicative approximation ratio achieved by two-dimensional sets maximizing the hypervolume indicator and prove that it deviates significantly from the optimal approximation ratio. This provable gap is even exponential in the ratio between the largest and the smallest value of the front. We also examine the additive approximation ratio of the hypervolume indicator and prove that it achieves the optimal additive approximation ratio apart from a small factor √(n/(n − 2)), where n is the size of the population. Hence the hypervolume indicator can be used to achieve a very good additive but not a good multiplicative approximation of a Pareto front.
Approximation Quality of the Hypervolume Indicator
, 2012
Cited by 5 (5 self)
In order to allow a comparison of (otherwise incomparable) sets, many evolutionary multiobjective optimizers use indicator functions to guide the search and to evaluate the performance of search algorithms. The most widely used indicator is the hypervolume indicator. It measures the volume of the dominated portion of the objective space bounded from below by a reference point. Though the hypervolume indicator is very popular, it has not been shown that maximizing the hypervolume indicator of sets of bounded size is indeed equivalent to the overall objective of finding a good approximation of the Pareto front. To address this question, we compare the optimal approximation ratio with the approximation ratio achieved by two-dimensional sets maximizing the hypervolume indicator. We bound the optimal multiplicative approximation ratio of n points by 1 + Θ(1/n) for arbitrary Pareto fronts. Furthermore, we prove that the same asymptotic approximation ratio is achieved by sets of n points that maximize the hypervolume indicator. However, there is a provable gap between the two approximation ratios which is even exponential in the ratio between the largest and the smallest value of the front. We also examine the additive approximation ratio of the hypervolume indicator in two dimensions and prove that it achieves the optimal additive approximation ratio apart from a small ratio √(n/(n − 2)), where n is the size of the population. Hence the hypervolume indicator can be used to achieve a good additive but not a good multiplicative approximation of a Pareto front. This motivates the introduction of a "logarithmic hypervolume indicator" which provably achieves a good multiplicative approximation ratio.
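The multiplicative ratio compared here can be made concrete: a set A achieves ratio α for a front P (minimization, strictly positive objective values) if every p in P is dominated by some a in A once a is divided by α. A small sketch with hypothetical names:

```python
def mult_approx_ratio(approx_set, front):
    """Smallest alpha such that for each front point p there is some a in
    approx_set with a_i <= alpha * p_i in every objective (minimization,
    strictly positive objective values assumed)."""
    return max(
        min(max(a / p for a, p in zip(pt_a, pt_p)) for pt_a in approx_set)
        for pt_p in front
    )

# A single point approximates this front within a factor of 2:
mult_approx_ratio([(2, 2)], [(1, 4), (2, 2), (4, 1)])  # -> 2.0
```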
Optimal µ-Distributions for the Hypervolume Indicator for Problems With Linear Bi-Objective Fronts: Exact and Exhaustive Results
 SIMULATED EVOLUTION AND LEARNING (SEAL 2010), DEC 2010, KANPUR, INDIA
, 2010
Succinct Sampling from Discrete Distributions
Cited by 3 (1 self)
We revisit the classic problem of sampling from a discrete distribution: given n nonnegative w-bit integers x1, ..., xn, the task is to build a data structure that allows sampling i with probability proportional to xi. The classic solution is Walker's alias method, which takes, when implemented on a Word RAM, O(n) preprocessing time, O(1) expected query time for one sample, and n(w + 2 lg n + o(1)) bits of space. Using the terminology of succinct data structures, this solution has redundancy 2n lg n + o(n) bits, i.e., it uses 2n lg n + o(n) bits in addition to the information-theoretic minimum required for storing the input. In this paper, we study whether this space usage can be improved. In the systematic case, in which the input is read-only, we present a novel data structure using r + O(w) redundant ...
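Walker's alias method referenced above admits a compact (non-succinct) implementation; the sketch below shows the O(n) table build and the O(1) expected-time query:

```python
import random

def build_alias(weights):
    """Walker's alias tables in O(n): each slot keeps an acceptance
    probability and a fallback ('alias') index."""
    n, total = len(weights), sum(weights)
    prob = [w * n / total for w in weights]        # scaled so the mean is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                               # s borrows mass from l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def sample(prob, alias, rng=random):
    """Draw i with probability proportional to weights[i] in O(1) time."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]
```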
Theoretically Investigating Optimal µ-Distributions for the Hypervolume Indicator: First Results For Three Objectives
, 2010
Convergence of Set-Based Multi-Objective Optimization, Indicators, and Deteriorative Cycles
Cited by 2 (2 self)
Multiobjective optimization deals with the task of computing a set of solutions that represents possible trade-offs with respect to a given set of objective functions. Set-based approaches such as evolutionary algorithms are very popular for solving multiobjective optimization problems. Convergence of set-based approaches for multiobjective optimization is essential for their success. We take an order-theoretic view on the convergence of set-based multiobjective optimization and examine how the use of indicator functions can help to direct the search towards Pareto-optimal sets. In doing so, we point out that set-based multiobjective optimization working on the dominance relation of search points has to deal with a cyclic behavior that may lead to worsening with respect to the Pareto-dominance relation defined on sets. Later on, we show in which situations well-known binary and unary indicators can help to avoid this cyclic behavior and therefore guarantee convergence of the algorithm. We also study the impact of deteriorative cycles on the runtime behavior and give an example in which they provably slow down the optimization process.
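The Pareto-dominance relation on sets that underlies these cycles can be stated in a few lines (minimization convention; names are illustrative, not the paper's):

```python
def weakly_dominates(a, b):
    """Point a is at least as good as point b in every minimized objective."""
    return all(x <= y for x, y in zip(a, b))

def set_dominates(A, B):
    """Set A dominates set B: every point of B is weakly dominated by some
    point of A, and the two sets are not identical."""
    return A != B and all(any(weakly_dominates(a, b) for a in A) for b in B)

set_dominates([(1, 1)], [(2, 2)])  # -> True
set_dominates([(1, 2)], [(2, 1)])  # -> False: the sets are incomparable
```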
Faster Computation of Expected Hypervolume Improvement
Cited by 2 (0 self)
The expected improvement algorithm (or efficient global optimization) aims for global continuous optimization with a limited budget of black-box function evaluations. It is based on a statistical model of the function learned from previous evaluations and an infill criterion, the expected improvement, used to find a promising point for a new evaluation. The 'expected improvement' infill criterion takes into account the mean and variance of a predictive multivariate Gaussian distribution. The expected improvement algorithm has recently been generalized to multiobjective optimization. In order to measure the improvement of a Pareto front quantitatively, the gain in dominated (hyper)volume is used. The computation of the expected hypervolume improvement (EHVI) is a multidimensional integration of a stepwise defined nonlinear function, related to the Gaussian probability density function, over an intersection of boxes. This paper provides a new algorithm for the exact computation of the expected improvement for more than two objective functions. For the bicriteria case it has a time complexity in O(n²), with n denoting the number of points in the current best Pareto front approximation. It improves previously known algorithms with time complexity O(n³ log n). For tricriteria optimization we devise an algorithm with time complexity O(n³). Besides discussing the new time complexity bounds, the speed of the new algorithm is also tested empirically on test data. It is shown that further improvements in speed can be achieved by reusing data structures built up in previous iterations. The resulting numerical algorithms can be readily used in existing implementations of hypervolume-based expected improvement algorithms.
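The single-objective expected improvement that EHVI generalizes has a well-known closed form under a Gaussian prediction N(mu, sigma²); a sketch assuming minimization (function name is hypothetical):

```python
import math

def expected_improvement(mu, sigma, best):
    """Closed-form E[max(best - Y, 0)] for Y ~ N(mu, sigma^2), i.e. the
    expected improvement over the incumbent value `best` when minimizing."""
    if sigma == 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf
```

EHVI replaces the scalar improvement max(best − Y, 0) by the hypervolume gained by the new point, which is what leads to the box-decomposition integration discussed in the abstract.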