Results 1–10 of 19
Do additional objectives make a problem harder?
, 2007
Abstract

Cited by 20 (11 self)
In this paper, we examine how adding objectives to a given optimization problem affects the computational effort required to generate the set of Pareto-optimal solutions. Experimental studies show that additional objectives may change the runtime behavior of an algorithm drastically. Often it is assumed that more objectives make a problem harder as the number of different tradeoffs may increase with the problem dimension. We show that additional objectives, however, may be both beneficial and obstructive depending on the chosen objective. Our results are obtained by rigorous runtime analyses that show the different effects of adding objectives to a well-known plateau function.
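The core phenomenon (whether solutions stay comparable once objectives are added) can be illustrated with a minimal Pareto-dominance sketch; the example vectors are illustrative and are not taken from the paper's plateau-function construction:

```python
def dominates(a, b):
    """True if a weakly dominates b (minimisation) and is strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Two solutions that are comparable under a single objective ...
single = [(1,), (2,)]
# ... become incomparable once a conflicting second objective is added,
# so both now belong to the Pareto front.
double = [(1, 2), (2, 1)]
```

Under one objective the front shrinks to a single point; with the conflicting second objective both points survive, which is the "more tradeoffs" intuition the abstract questions.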
On the effects of adding objectives to plateau functions
 IEEE Transactions on Evolutionary Computation
, 2009
Abstract

Cited by 8 (2 self)
Abstract—In this paper, we examine how adding objectives to a given optimization problem affects the computational effort required to generate the set of Pareto-optimal solutions. Experimental studies show that additional objectives may change the running time behavior of an algorithm drastically. Often it is assumed that more objectives make a problem harder as the number of different tradeoffs may increase with the problem dimension. We show that additional objectives, however, may be both beneficial and obstructive depending on the chosen objective. Our results are obtained by rigorous running time analyses that show the different effects of adding objectives to a well-known plateau function. Additional experiments show that the theoretically shown behavior can be observed for problems with more than one objective.

Index Terms—Multi-objective optimization, running time analysis, theory.

I. MOTIVATION

In recent years, the number of publications on evolutionary multiobjective optimization has been growing rapidly; however, most of the studies investigate problems where the number of considered objectives is low, i.e., between two and four, while studies with many objectives are rare. There is some evidence in the literature that additional objectives can make a problem harder. This discussion indicates that a general statement on the effect of increasing the number of objectives is not possible. For some problems, a higher number of objectives makes it more difficult to generate the Pareto-optimal front; for other problems, it makes it easier. However, given the previous work, the question arises whether one and the same problem can be made both easier and harder depending on the added objective. This paper answers this question both experimentally and theoretically.
Objective reduction in evolutionary multiobjective optimization: Theory and applications
 EVOLUTIONARY COMPUTATION
, 2009
Abstract

Cited by 5 (0 self)
Many-objective problems represent a major challenge in the field of evolutionary multiobjective optimization—in terms of search efficiency, computational cost, decision making, visualization, and so on. This leads to various research questions, in particular whether certain objectives can be omitted in order to overcome or at least diminish the difficulties that arise when many, that is, more than three, objective functions are involved. This study addresses this question from different perspectives. First, we investigate how adding or omitting objectives affects the problem characteristics and propose a general notion of conflict between objective sets as a theoretical foundation for objective reduction. Second, we present both exact and heuristic algorithms to systematically reduce the number of objectives, while preserving as much as possible of the dominance structure of the underlying optimization problem. Third, we demonstrate the usefulness of the proposed objective reduction method in the context of both decision making and search, for a radar waveform application as well as for well-known test functions.
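The idea of dropping objectives while preserving the dominance structure can be sketched as follows. The criterion used here (the weak-dominance relation induced by the subset must equal that of the full objective set) is a simplified stand-in for the paper's conflict formalism, and the example vectors are illustrative:

```python
def weakly_dominates(a, b, objs):
    """a weakly dominates b on the given objective indices (minimisation)."""
    return all(a[i] <= b[i] for i in objs)

def preserves_dominance(points, subset, full):
    """True if the weak-dominance relation induced by `subset`
    equals the one induced by `full` over all pairs of points."""
    return all(
        weakly_dominates(p, q, subset) == weakly_dominates(p, q, full)
        for p in points for q in points
    )

# Objective 2 duplicates objective 0 on these points, so the subset
# {0, 1} induces exactly the same dominance relation and objective 2
# could be omitted without changing the problem structure.
pts = [(1, 3, 1), (2, 2, 2), (3, 1, 3)]
```

Dropping a non-redundant objective (e.g. keeping only objective 0 here) changes the relation, which is what the notion of conflict between objective sets captures.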
Analyzing Hypervolume Indicator Based Algorithms
Abstract

Cited by 4 (2 self)
Abstract—Indicator-based methods for tackling multiobjective problems have become popular recently, mainly because they allow user preferences to be incorporated into the search explicitly. Multiobjective Evolutionary Algorithms (MOEAs) using the hypervolume indicator in particular have shown better performance than classical MOEAs in experimental comparisons. In this paper, the use of indicator-based MOEAs is investigated for the first time from a theoretical point of view. We carry out running time analyses for an evolutionary algorithm with a (µ + 1)-selection scheme based on the hypervolume indicator, as it is used in most of the recently proposed MOEAs. Our analyses point out two important aspects of the search process. First, we examine how such algorithms can approach the Pareto front. Later on, we point out how they can achieve a good approximation for an exponentially large Pareto front.
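A (µ + 1)-selection step driven by the hypervolume indicator can be sketched in two dimensions. The exact-area routine, the reference point, and the tie-breaking are assumptions for illustration; this is not the analyzed algorithm itself:

```python
def hv_2d(points, ref):
    """Hypervolume (area) dominated by 2-D minimisation points w.r.t. ref."""
    pts = sorted(set(points))           # ascending in f1
    area, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                  # skip dominated points
            area += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return area

def mu_plus_one_select(points, ref):
    """(µ + 1)-selection sketch: discard the point whose removal
    loses the least hypervolume, keeping the other µ points."""
    worst = min(points, key=lambda p: hv_2d(points, ref)
                - hv_2d([q for q in points if q != p], ref))
    return [p for p in points if p != worst]
```

With points (1,3), (2,2), (3,1), (2.5,1.5) and reference (4,4), the last point contributes the least exclusive area and is the one discarded.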
Enhancing Diversity for Average Ranking Method in Evolutionary Many-Objective Optimization
Abstract

Cited by 2 (2 self)
Abstract. The average ranking (AR) method has been shown to be highly effective in providing sufficient selection pressure toward the Pareto-optimal set in many-objective optimization. However, due to the lack of a diversity maintenance mechanism, the obtained final set may concentrate in only a subregion of the Pareto front. In this paper, we propose a diversity maintenance strategy for AR to balance convergence and diversity during the evolution process. We employ a grid to define an adaptive neighborhood for each individual, whose size varies with the number of objectives. Moreover, a layering selection scheme integrates this neighborhood with AR to pick out well-converged individuals and to prohibit or postpone the archiving of adjacent individuals. An extensive comparative study with the original AR and two other diversity maintenance methods shows that the proposed method achieves a good balance among convergence, uniformity, and spread.
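The average-ranking score itself can be sketched minimally; ties and the grid-based neighborhood are left out, and the rank-summing convention is an assumption:

```python
def average_ranks(points):
    """Average-ranking sketch: rank each individual on every objective
    separately (minimisation, ties broken by index), then sum the ranks.
    Lower totals indicate stronger overall performance."""
    m = len(points[0])
    totals = [0] * len(points)
    for obj in range(m):
        order = sorted(range(len(points)), key=lambda i: points[i][obj])
        for rank, i in enumerate(order, start=1):
            totals[i] += rank
    return totals
```

On three individuals (1,1), (2,3), (3,2), the first is ranked best on both objectives and gets the lowest total; the other two tie, which is exactly the situation where a diversity mechanism must arbitrate.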
Robustness in hypervolume-based multiobjective search
, 2010
Abstract

Cited by 2 (0 self)
The use of quality indicators within the search has become a popular approach in the field of evolutionary multiobjective optimization. It relies on the concept of transforming the original multiobjective problem into a set problem that involves a single objective function only, namely a quality indicator reflecting the quality of a Pareto set approximation. The hypervolume indicator in particular has gained a lot of attention in this context, since it is the only set quality measure known that guarantees strict monotonicity. Accordingly, various hypervolume-based search algorithms for approximating the Pareto set have been proposed, including sampling-based methods that circumvent the problem that the hypervolume is in general hard to compute. Despite these advances, there are several open research issues in indicator-based multiobjective search when considering real-world applications—the issue of robustness is one of them. For instance, with mechanical manufacturing processes there exist unavoidable inaccuracies that prevent a desired solution from being realized with perfect precision; therefore, a solution in terms of a concrete decision vector is not associated
ETEA: A Euclidean Minimum Spanning Tree-Based Evolutionary Algorithm for Multi-Objective Optimization
Abstract

Cited by 2 (2 self)
Evolutionary Computation, corrected proof. doi:10.1162/EVCO_a_00106
A Fast Many-objective Hypervolume Algorithm using Iterated Incremental Calculations
Abstract
Hypervolume [4] is a popular metric for comparing the performance of multiobjective evolutionary algorithms (MOEAs). The hypervolume of a set of solutions measures the size of the portion of objective space that is dominated by those solutions collectively. Hypervolume captures in one scalar both the closeness of the solutions to the optimal set and the spread of the solutions across objective space. Hypervolume also has nicer mathematical properties than other metrics: it was the first unary metric that detects when a set of solutions X is not worse than another set X′. Three fast algorithms have been proposed for calculating hypervolume exactly. The Hypervolume by Slicing Objectives algorithm (HSO) [10]–[12] processes the objectives in a front, rather than the points. HSO divides the n-D hypervolume to be measured into separate (n−1)-D slices through the values in one of the objectives, then it calculates the hypervolume of each slice and sums these values to derive the total. HSO's worst-case complexity is exponential in the number of objectives, although heuristics for reordering objectives improve its typical performance. In addition, algorithms from the computational geometry field have recently been applied to hypervolume calculation. Beume and Rudolph adapt the Overmars and Yap algorithm, and Paquete et al. [16] use a geometry-inspired algorithm to calculate the maxima of a point set in 3-D, which has been shown to be optimal by Beume et al. Another recent development is the Incremental HSO algorithm [19] (IHSO). This is an adaptation of HSO to calculate the exclusive hypervolume contribution of a point to a front. IHSO is especially useful where hypervolume is used inline within a MOEA, either for diversity calculations [20], for archiving purposes [21], or in selection [22], [23]. However, as we will demonstrate, IHSO can also be applied iteratively to create a new method for hypervolume metric calculations.

This paper makes four principal contributions.
• We describe a new algorithm, IIHSO (Iterated IHSO), for calculating hypervolume exactly. IIHSO applies IHSO iteratively, starting with an empty set and adding one point at a time until the entire front has been processed. The idea of calculating hypervolume as a summation of exclusive hypervolumes was introduced by LebMeasure.
• We describe heuristics designed to optimise the typical performance of IIHSO, mainly for choosing a good order for adding the points to the set and a good order for processing the objectives.
• We show that while HOY has by far the best worst-case

(The authors are with the School of Computer Science & Software Engineering, The University of Western Australia, Western Australia 6009, Australia; e-mail: lucas@csse.uwa.edu.au; lyndon@csse.uwa.edu.au; luigi@csse.uwa.edu.au.)
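The iterated idea, totaling exclusive contributions as points are added one at a time, can be sketched in two dimensions. This toy version is not IIHSO's slicing machinery, only the summation identity it exploits:

```python
def hv_2d(points, ref):
    """Hypervolume (area) dominated by 2-D minimisation points w.r.t. ref."""
    pts = sorted(set(points))           # ascending in f1
    area, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                  # skip dominated points
            area += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return area

def exclusive_hv(point, front, ref):
    """Exclusive hypervolume contribution of `point` relative to `front`."""
    return hv_2d(front + [point], ref) - hv_2d(front, ref)

def iterated_hv(points, ref):
    """IIHSO-style total: start from an empty set and add one point at a
    time, summing the exclusive contributions."""
    front, total = [], 0.0
    for p in points:
        total += exclusive_hv(p, front, ref)
        front.append(p)
    return total
```

By construction the running sum of exclusive contributions equals the hypervolume of the whole front, regardless of the insertion order.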
Scalable Product Line Configuration: A Straw to Break the Camel’s Back
Abstract
Abstract—Software product lines are hard to configure. Techniques that work for medium-sized product lines fail for much larger product lines such as the Linux kernel with 6000+ features. This paper presents simple heuristics that help the Indicator-Based Evolutionary Algorithm (IBEA) in finding sound and optimum configurations of very large variability models in the presence of competing objectives. We employ a combination of static and evolutionary learning of model structure, in addition to utilizing a precomputed solution used as a “seed” in the midst of a randomly-generated initial population. The seed solution works like a single straw that is enough to break the camel’s back, given that it is a feature-rich seed. We show promising results where we can find 30 sound solutions for configuring upward of 6000 features within 30 minutes.

Index Terms—Variability models, automated configuration, multiobjective optimization, evolutionary algorithms, SMT solvers.
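The seeding idea can be sketched as follows; the bit-string encoding, the helper name, and the default RNG are hypothetical, and the seed stands in for a precomputed feature-rich configuration (e.g. one produced by an SMT solver):

```python
import random

def seeded_population(n_features, pop_size, seed_config, rng=None):
    """Build a random 0/1 configuration population and inject one
    precomputed seed. Names and encoding are illustrative only."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size - 1)]
    pop.append(list(seed_config))   # the "seed" joins the random population
    return pop
```

The evolutionary algorithm then runs unchanged; the single sound, feature-rich individual gives selection something valid to propagate from the first generation onward.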
Optimum Feature Selection in Software Product Lines: Let Your Model and Values Guide Your Search
Abstract
Metaheuristic search algorithms are utilized to find solutions to common software engineering problems. The algorithms are usually taken “off the shelf” and applied with trust, i.e., software engineers are not concerned with the inner workings of the algorithms, only with the results. While this may be sufficient in some domains, we argue against this approach, particularly where the complexity of the models and the variety of user preferences pose greater challenges to the metaheuristic search algorithms. We build on our previous investigation, which uncovered the power of the Indicator-Based Evolutionary Algorithm (IBEA) over traditionally used algorithms (such as NSGA-II), and in this work we scrutinize the time behavior of the user objectives subject to optimization. This analysis brings out the business perspective, previously veiled under Pareto-collective gauges such as Hypervolume and Spread. In addition, we show how slowing down the rates of crossover and mutation can help IBEA converge faster, as opposed to following the higher rates used in many other studies as “rules of thumb”.