Results 1 - 10 of 23
Worst case complexity of direct search, 2010
"... In this paper we prove that direct search of directional type shares the worst case complexity bound of steepest descent when sufficient decrease is imposed using a quadratic function of the step size parameter. This result is proved under smoothness of the objective function and using a framework o ..."
Abstract
-
Cited by 33 (4 self)
In this paper we prove that direct search of directional type shares the worst case complexity bound of steepest descent when sufficient decrease is imposed using a quadratic function of the step size parameter. This result is proved under smoothness of the objective function, using a framework of generating set search (GSS) type. We also discuss the worst case complexity of direct search when only simple decrease is imposed and when the objective function is nonsmooth.
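To make the mechanism concrete, here is a minimal Python sketch of a directional direct-search (GSS-type) iteration in which a trial point is accepted only under sufficient decrease measured by a quadratic function of the step size, c * alpha**2. The coordinate-direction polling set, the constants, and the expansion/contraction factors are assumptions made for this illustration, not details taken from the paper.

```python
# Toy directional direct search with a quadratic sufficient-decrease test.
import numpy as np

def direct_search(f, x0, alpha0=1.0, c=1e-4, tol=1e-8, max_iter=10000):
    x, alpha = np.asarray(x0, dtype=float), alpha0
    fx, n = f(x), len(x0)
    # Positive spanning set: +/- coordinate directions (a simple GSS choice).
    D = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        if alpha < tol:
            break
        improved = False
        for d in D:                       # poll step
            trial = x + alpha * d
            ft = f(trial)
            if ft < fx - c * alpha**2:    # sufficient decrease, quadratic in alpha
                x, fx, improved = trial, ft, True
                break
        alpha = alpha * 2.0 if improved else alpha * 0.5
    return x, fx

if __name__ == "__main__":
    rosenbrock = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
    print(direct_search(rosenbrock, [-1.2, 1.0]))
```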
Direct Multisearch for Multiobjective Optimization, 2010
"... In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free metho ..."
Abstract
-
Cited by 13 (0 self)
In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular when using the (here also) optional search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove, under the common assumptions used in direct search for single-objective optimization, that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front.
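The Pareto-dominance bookkeeping at the core of the description above can be sketched as follows; the polling directions, the step-size rule, and the use of a single poll center are simplifying assumptions for illustration and do not reproduce the DMS implementation.

```python
# Toy poll step maintaining a list (archive) of nondominated points.
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    return np.all(fa <= fb) and np.any(fa < fb)

def add_nondominated(archive, candidate):
    """Insert candidate (x, F(x)) unless some archive member dominates it.
    Returns the updated archive and a flag telling whether it was added."""
    x_c, f_c = candidate
    if any(dominates(fb, f_c) for _, fb in archive):
        return archive, False
    archive = [(x, fb) for x, fb in archive if not dominates(f_c, fb)]
    archive.append(candidate)
    return archive, True

def dms_poll(F, archive, alpha):
    """One poll step around the first point of the nondominated list."""
    x, _ = archive[0]
    n = len(x)
    D = np.vstack([np.eye(n), -np.eye(n)])   # simple positive spanning set
    success = False
    for d in D:
        t = x + alpha * d
        archive, added = add_nondominated(archive, (t, F(t)))
        success = success or added
    return archive, (alpha * 2.0 if success else alpha * 0.5)

if __name__ == "__main__":
    F = lambda x: np.array([x[0]**2 + x[1]**2, (x[0] - 1)**2 + x[1]**2])
    x0 = np.array([2.0, 2.0])
    archive, alpha = [(x0, F(x0))], 1.0
    for _ in range(50):
        archive, alpha = dms_poll(F, archive, alpha)
    print(len(archive), "nondominated points")
```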
Smoothing and Worst-Case Complexity for Direct-Search Methods in Nonsmooth Optimization, 2012
"... In the context of the derivative-free optimization of a smooth objective function, it has been shown that the worst case complexity of direct-search methods is of the same order as the one of steepest descent for derivative-based optimization, more precisely that the number of iterations needed to r ..."
Abstract
-
Cited by 8 (2 self)
In the context of the derivative-free optimization of a smooth objective function, it has been shown that the worst case complexity of direct-search methods is of the same order as the one of steepest descent for derivative-based optimization, more precisely that the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is proportional to the inverse of the threshold squared. Motivated by the lack of such a result in the nonsmooth case, we propose, analyze, and test a class of smoothing direct-search methods for the unconstrained optimization of nonsmooth functions. Given a parameterized family of smoothing functions for the nonsmooth objective function dependent on a smoothing parameter, this class of methods consists of applying a direct-search algorithm for a fixed value of the smoothing parameter until the step size is relatively small, after which the smoothing parameter is reduced and the process is repeated. One can show that the worst case complexity (or cost) of this procedure is roughly one order of magnitude worse than the one for direct search or steepest descent on smooth functions. The class of smoothing direct-search methods is also shown to enjoy asymptotic global convergence properties. Some preliminary numerical experiments indicate that this approach leads to better values of the objective function, in some cases pushing the optimization further, apparently without an additional cost in the number of function evaluations.
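A minimal sketch of the outer loop described above: run direct search on a smoothed surrogate until the step size is small relative to the smoothing parameter, then reduce the parameter and repeat. The sample-average Gaussian smoothing, the inner coordinate search, and all constants are assumptions for illustration, not the smoothing functions or the algorithm analyzed in the paper.

```python
# Toy smoothing direct-search loop: inner direct search on f_mu, then shrink mu.
import numpy as np

def smoothed(f, mu, n, samples=64, seed=0):
    """Deterministic sample-average surrogate of a Gaussian smoothing
    f_mu(x) ~ E[f(x + mu*u)]; the same fixed sample is reused for all x."""
    u = np.random.default_rng(seed).standard_normal((samples, n))
    return lambda x: float(np.mean([f(x + mu * ui) for ui in u]))

def direct_search(f, x, alpha, alpha_min, c=1e-4, max_iter=2000):
    """Coordinate direct search with sufficient decrease; stops once alpha < alpha_min."""
    n = len(x)
    D = np.vstack([np.eye(n), -np.eye(n)])
    fx = f(x)
    for _ in range(max_iter):
        if alpha < alpha_min:
            break
        for d in D:
            t = x + alpha * d
            ft = f(t)
            if ft < fx - c * alpha**2:
                x, fx = t, ft
                break
        else:
            alpha *= 0.5
    return x, alpha

def smoothing_direct_search(f, x0, mu=1.0, alpha=1.0, outer_iters=6):
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        # Run direct search on f_mu until the step size is small relative to mu.
        x, alpha = direct_search(smoothed(f, mu, len(x)), x, alpha, alpha_min=0.1 * mu)
        mu *= 0.1                      # reduce the smoothing parameter and repeat
        alpha = max(alpha, mu)
    return x

if __name__ == "__main__":
    nonsmooth = lambda z: abs(z[0]) + abs(z[1] - 1.0)   # simple nonsmooth test
    print(smoothing_direct_search(nonsmooth, [2.0, -2.0]))
```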
Convergence of trust-region methods based on probabilistic models, 2013
"... In this paper we consider the use of probabilistic or random models within a classical trustregion framework for optimization of deterministic smooth general nonlinear functions. Our method and setting differs from many stochastic optimization approaches in two principal ways. Firstly, we assume tha ..."
Abstract
-
Cited by 3 (1 self)
In this paper we consider the use of probabilistic or random models within a classical trust-region framework for the optimization of deterministic smooth general nonlinear functions. Our method and setting differ from many stochastic optimization approaches in two principal ways. Firstly, we assume that the value of the function itself can be computed without noise, in other words, that the function is deterministic. Secondly, we use random models of higher quality than those produced by the usual stochastic gradient methods. In particular, a first-order model based on a random approximation of the gradient is required to provide sufficient quality of approximation with probability greater than or equal to 1/2. This is in contrast with stochastic gradient approaches, where the model is assumed to be “correct” only in expectation. As a result of this particular setting, we are able to prove convergence, with probability one, of a trust-region method which is almost identical to the classical method. Moreover, the new method is simpler than its deterministic counterpart as it does not require a criticality step. Hence we show that a standard optimization framework can be used in cases when ...
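A toy sketch of the setting: exact function values, a random first-order model whose gradient is accurate with probability at least 1/2, and an otherwise standard trust-region loop with no criticality step. The corrupted-gradient construction and the Cauchy-type step are assumptions made for this example, not the model construction analyzed in the paper.

```python
# Toy trust-region loop driven by a probabilistically accurate random gradient.
import numpy as np

rng = np.random.default_rng(1)

def random_gradient(grad_f, x, delta):
    """Exact gradient with probability 1/2, otherwise corrupted by an O(delta) error."""
    g = grad_f(x)
    return g if rng.random() < 0.5 else g + delta * rng.standard_normal(len(x))

def trust_region(f, grad_f, x0, delta=1.0, eta=0.1, iters=200):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        g = random_gradient(grad_f, x, delta)
        ng = np.linalg.norm(g)
        if ng == 0:
            break
        s = -delta * g / ng                 # Cauchy-type step for the linear model
        pred = -(g @ s)                     # predicted decrease (= delta * ||g||)
        f_trial = f(x + s)                  # exact, noise-free function value
        ared = fx - f_trial                 # actual decrease
        if ared / pred >= eta:              # successful: accept the step, expand
            x, fx = x + s, f_trial
            delta *= 2.0
        else:                               # unsuccessful: shrink the radius
            delta *= 0.5
    return x, fx

if __name__ == "__main__":
    f = lambda z: (z[0] - 3)**2 + 10 * (z[1] + 1)**2
    grad = lambda z: np.array([2 * (z[0] - 3), 20 * (z[1] + 1)])
    print(trust_region(f, grad, [0.0, 0.0]))
```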
Globally convergent evolution strategies and CMA-ES, 2012
"... Abstract: In this paper we show how to modify a large class of evolution strategies (ES) to rigorously achieve a form of global convergence, meaning convergence to stationary points independently of the starting point. The type of ES under consideration recombine the parents by means of a weighted ..."
Abstract
-
Cited by 2 (0 self)
In this paper we show how to modify a large class of evolution strategies (ES) to rigorously achieve a form of global convergence, meaning convergence to stationary points independently of the starting point. The type of ES under consideration recombines the parents by means of a weighted sum, around which the offspring are randomly generated. One relevant instance of such an ES is CMA-ES. The modifications consist essentially of reducing the step size whenever a sufficient decrease condition on the function values is not satisfied. When such a condition is satisfied, the step size can be reset to the step size maintained by the ES itself, as long as the latter is sufficiently large. We suggest a number of ways of imposing sufficient decrease for which global convergence holds under reasonable assumptions, and we extend our theory to the constrained case. Given a limited budget of function evaluations, our numerical experiments have shown that the modified CMA-ES is capable of further progress in function values. Moreover, we have observed that such an improvement in efficiency comes without deteriorating the behavior of the underlying method in the presence of nonconvexity.
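A minimal sketch of the modification described above, applied to a plain weighted-recombination ES: the step size is reduced whenever the recombination point fails a sufficient decrease test, and kept large otherwise. The isotropic sampling, the forcing function c * sigma**2, and the step-size factors are assumptions; this is neither CMA-ES nor the authors' modified version.

```python
# Toy ES with weighted recombination and a forced sufficient-decrease step-size rule.
import numpy as np

rng = np.random.default_rng(2)

def modified_es(f, x0, sigma=1.0, lam=10, mu=5, c=1e-4, iters=300):
    n = len(x0)
    weights = np.arange(mu, 0, -1, dtype=float)
    weights /= weights.sum()                         # weighted recombination weights
    xbar = np.asarray(x0, dtype=float)
    fbest = f(xbar)
    for _ in range(iters):
        offspring = xbar + sigma * rng.standard_normal((lam, n))
        values = np.array([f(o) for o in offspring])
        parents = offspring[np.argsort(values)[:mu]]  # keep the mu best offspring
        trial = weights @ parents                     # weighted recombination point
        ft = f(trial)
        if ft < fbest - c * sigma**2:    # sufficient decrease: accept the new point
            xbar, fbest = trial, ft
            sigma *= 1.1                 # stand-in for "reset to the ES step size"
        else:                            # otherwise force a step-size reduction
            sigma *= 0.5
    return xbar, fbest

if __name__ == "__main__":
    f = lambda z: float(np.sum((z - 1.0)**2) + 0.1 * np.sum(np.cos(5 * z)))
    print(modified_es(f, np.zeros(4)))
```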
A Linesearch-Based Derivative-Free Approach for Nonsmooth Constrained Optimization
"... Abstract. In this paper, we propose new linesearch-based methods for nonsmooth constrained optimization problems when first-order information on the problem functions is not available. In the first part, we describe a general framework for bound-constrained problems and analyze its convergence towar ..."
Abstract
-
Cited by 2 (2 self)
In this paper, we propose new linesearch-based methods for nonsmooth constrained optimization problems when first-order information on the problem functions is not available. In the first part, we describe a general framework for bound-constrained problems and analyze its convergence toward stationary points, using the Clarke–Jahn directional derivative. In the second part, we consider inequality-constrained optimization problems where both the objective function and the constraints can be nonsmooth. In this case, we first split the constraints into two subsets: difficult general nonlinear constraints and simple bound constraints on the variables. Then, we use an exact penalty function to tackle the difficult constraints and we prove that the original problem can be reformulated as the bound-constrained minimization of the proposed exact penalty function. Finally, we use the framework developed for the bound-constrained case to solve the penalized problem. Moreover, we prove that, under standard assumptions on the search directions, every accumulation point of the generated sequence of iterates is a stationary point of the original constrained problem. In the last part of the paper, we report extended numerical results on both bound-constrained and nonlinearly constrained problems, showing that our approach is promising when compared to some state-of-the-art codes from the literature.
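The two ingredients can be sketched as follows: an l1-type exact penalty that moves the difficult constraints into the objective, and a derivative-free linesearch over the remaining bound constraints with projection and a sufficient decrease test. The penalty form, the coordinate search directions, and all constants are assumptions made for this example, not the method proposed in the paper.

```python
# Toy exact-penalty reformulation plus derivative-free linesearch on a box.
import numpy as np

def penalty(f, gs, eps):
    """Exact-penalty-style merit function f(x) + (1/eps) * sum_i max(0, g_i(x))."""
    return lambda x: f(x) + sum(max(0.0, g(x)) for g in gs) / eps

def project(x, lb, ub):
    return np.minimum(np.maximum(x, lb), ub)

def df_linesearch_bound(P, x0, lb, ub, alpha=1.0, gamma=1e-6, tol=1e-8, iters=1000):
    """Derivative-free linesearch along +/- coordinate directions with projection
    onto the bounds and a sufficient-decrease test gamma * step**2."""
    x = project(np.asarray(x0, dtype=float), lb, ub)
    Px, n = P(x), len(x)
    D = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(iters):
        if alpha < tol:
            break
        moved = False
        for d in D:
            step, t = alpha, project(x + alpha * d, lb, ub)
            Pt = P(t)
            if Pt < Px - gamma * step**2:
                while True:                       # extrapolation (linesearch) phase
                    t2 = project(x + 2.0 * step * d, lb, ub)
                    Pt2 = P(t2)
                    if Pt2 < Px - gamma * (2.0 * step)**2 and not np.allclose(t2, t):
                        step, t, Pt = 2.0 * step, t2, Pt2
                    else:
                        break
                x, Px, moved = t, Pt, True
        if not moved:
            alpha *= 0.5
    return x, Px

if __name__ == "__main__":
    f = lambda x: abs(x[0] - 2) + abs(x[1])       # nonsmooth objective
    g = [lambda x: x[0] + x[1] - 1.5]             # general constraint g(x) <= 0 (linear here, just for the demo)
    lb, ub = np.zeros(2), 3 * np.ones(2)
    P = penalty(f, g, eps=1e-2)
    print(df_linesearch_bound(P, np.array([3.0, 3.0]), lb, ub))
```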
Reducing the number of function evaluations in mesh adaptive direct search algorithms, SIAM J. Optim., 2014
"... Abstract: The Mesh Adaptive Direct Search (MADS) class of algorithms is designed for nonsmooth optimization, where the objective function and constraints are typically computed by launching a time-consuming computer simulation. Each iteration of a MADS algorithm attempts to improve the current best ..."
Abstract
-
Cited by 1 (0 self)
The Mesh Adaptive Direct Search (MADS) class of algorithms is designed for nonsmooth optimization, where the objective function and constraints are typically computed by launching a time-consuming computer simulation. Each iteration of a MADS algorithm attempts to improve the current best-known solution by launching the simulation at a finite number of trial points. Common implementations of MADS generate 2n trial points at each iteration, where n is the number of variables in the optimization problem. The objective of the present work is to reduce that number. We present an algorithmic framework that reduces the number of simulations to exactly n + 1, without impacting the theoretical guarantees from the convergence analysis. Numerical experiments are conducted for several different contexts; the results suggest that these strategies allow the new algorithms to reach a better solution with fewer function evaluations.
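A toy sketch of where the n + 1 evaluations per iteration come from: polling over a minimal positive basis (n directions plus the negative of their sum) instead of 2n directions. This ignores the MADS mesh and frame machinery; the direction construction and step-size updates are assumptions made for illustration.

```python
# Toy poll step using a minimal positive basis of n + 1 directions.
import numpy as np

rng = np.random.default_rng(3)

def minimal_positive_basis(n):
    """n + 1 directions that positively span R^n: a random orthonormal basis
    plus the negative of its sum."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return np.vstack([Q, -Q.sum(axis=0)])

def poll_n_plus_1(f, x, fx, alpha):
    """Opportunistic poll over n + 1 directions: at most n + 1 evaluations."""
    for d in minimal_positive_basis(len(x)):
        t = x + alpha * d
        ft = f(t)
        if ft < fx:                      # simple decrease acceptance
            return t, ft, True
    return x, fx, False

def reduced_poll_search(f, x0, alpha=1.0, tol=1e-9, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if alpha < tol:
            break
        x, fx, success = poll_n_plus_1(f, x, fx, alpha)
        alpha = alpha * 4.0 if success else alpha * 0.25
    return x, fx

if __name__ == "__main__":
    f = lambda z: float(np.sum(np.abs(z)) + np.max(np.abs(z)))
    print(reduced_poll_search(f, np.full(5, 2.0)))
```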
Constrained Derivative-Free Optimization on Thin Domains, 2011
"... Many derivative-free methods for constrained problems are not efficient for minimizing functions on “thin” domains. Other algorithms, like those based on Augmented Lagrangians, deal with thin constraints using penalty-like strategies. When the constraints are computationally inexpensive but highly n ..."
Abstract
-
Cited by 1 (1 self)
Many derivative-free methods for constrained problems are not efficient for minimizing functions on “thin” domains. Other algorithms, like those based on augmented Lagrangians, deal with thin constraints using penalty-like strategies. When the constraints are computationally inexpensive but highly nonlinear, these methods spend many potentially expensive objective function evaluations coping with the difficulties of improving feasibility. An algorithm that handles this case efficiently is proposed in this paper. The main iteration is split into two steps: restoration and minimization. In the restoration step the aim is to decrease infeasibility without evaluating the objective function. In the minimization step the objective function f is minimized on a relaxed feasible set. A global minimization result is proved and computational experiments showing the advantages of this approach are presented.
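A minimal sketch of the restoration/minimization split described above: the restoration phase reduces an infeasibility measure without ever evaluating f, and the minimization phase improves f only over a relaxed feasible set. The sampling-based phases, the infeasibility measure, and the relaxation schedule are assumptions for illustration, not the paper's algorithm.

```python
# Toy two-phase (restoration / minimization) iteration for a thin feasible set.
import numpy as np

rng = np.random.default_rng(4)

def infeasibility(gs, x):
    """Aggregate constraint violation for constraints g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in gs)

def restoration(gs, x, radius=0.5, tries=200):
    """Decrease infeasibility by local sampling; f is never evaluated here."""
    hx = infeasibility(gs, x)
    for _ in range(tries):
        t = x + radius * rng.standard_normal(len(x))
        ht = infeasibility(gs, t)
        if ht < hx:
            x, hx = t, ht
    return x, hx

def minimization(f, gs, x, relax, radius=0.25, tries=200):
    """Improve f over the relaxed set {infeasibility <= relax}; the objective
    is evaluated only at points inside that relaxed set."""
    fx = f(x)
    for _ in range(tries):
        t = x + radius * rng.standard_normal(len(x))
        if infeasibility(gs, t) <= relax:
            ft = f(t)
            if ft < fx:
                x, fx = t, ft
    return x, fx

def restoration_minimization(f, gs, x0, outer=20):
    x = np.asarray(x0, dtype=float)
    for k in range(outer):
        x, hx = restoration(gs, x)              # phase 1: feasibility only
        relax = max(hx, 10.0 ** (-k))           # shrinking relaxation of the feasible set
        x, fx = minimization(f, gs, x, relax)   # phase 2: objective on the relaxed set
    return x, fx, infeasibility(gs, x)

if __name__ == "__main__":
    f = lambda z: (z[0] - 2) ** 2 + (z[1] - 2) ** 2
    gs = [lambda z: abs(z[0] ** 2 + z[1] ** 2 - 1.0) - 0.01]  # "thin" set: narrow band around the unit circle
    print(restoration_minimization(f, gs, np.array([3.0, -3.0])))
```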
Globally convergent evolution strategies for constrained optimization, Comput. Optim. Appl., 2015
"... Abstract In this paper we propose, analyze, and test algorithms for linearly constrained optimization when no use of derivatives of the objective function is made. The proposed methodology is built upon the globally convergent evolution strategies previously introduced by the authors for unconstrai ..."
Abstract
-
Cited by 1 (1 self)
In this paper we propose, analyze, and test algorithms for linearly constrained optimization when no derivatives of the objective function are used. The proposed methodology is built upon the globally convergent evolution strategies previously introduced by the authors for unconstrained optimization. Two approaches are considered for handling the constraints. In the first approach, feasibility is first enforced by a barrier function and the objective function is then evaluated directly at the feasible generated points. The second approach first projects all the generated points onto the feasible domain before evaluating the objective function. The resulting algorithms enjoy favorable global convergence properties (convergence to stationarity from arbitrary starting points), regardless of the linearity of the constraints. The algorithmic implementation (i) includes a step where previously evaluated points are used to accelerate the search (by minimizing quadratic models) and (ii) addresses general linearly constrained optimization. Our solver is compared to others, and the numerical results confirm its competitiveness in terms of efficiency and robustness.
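A toy sketch of the two feasibility-handling approaches, shown for bound constraints (the simplest linearly constrained case): an extreme barrier that returns +inf at infeasible points, and a projection onto the feasible box before evaluating f, both wrapped around the same simple weighted-recombination ES as before. All details are assumptions made for this example, not the authors' algorithms.

```python
# Toy comparison of barrier-based and projection-based constraint handling in an ES.
import numpy as np

rng = np.random.default_rng(5)

def extreme_barrier(f, lb, ub):
    """Approach 1: evaluate f only at feasible points, +inf otherwise."""
    return lambda x: f(x) if np.all(x >= lb) and np.all(x <= ub) else np.inf

def projected(f, lb, ub):
    """Approach 2: project the point onto the box, then evaluate f."""
    return lambda x: f(np.clip(x, lb, ub))

def toy_es(fbar, x0, sigma=0.5, lam=10, mu=5, c=1e-4, iters=200):
    n = len(x0)
    w = np.arange(mu, 0, -1, dtype=float)
    w /= w.sum()
    xbar = np.asarray(x0, dtype=float)
    fbest = fbar(xbar)
    for _ in range(iters):
        off = xbar + sigma * rng.standard_normal((lam, n))
        vals = np.array([fbar(o) for o in off])
        trial = w @ off[np.argsort(vals)[:mu]]      # weighted recombination of best offspring
        ft = fbar(trial)
        if ft < fbest - c * sigma**2:
            xbar, fbest, sigma = trial, ft, sigma * 1.1
        else:
            sigma *= 0.5
    return xbar, fbest

if __name__ == "__main__":
    f = lambda z: float(np.sum((z - 2.0) ** 2))     # unconstrained minimizer lies outside the box
    lb, ub = np.zeros(3), np.ones(3)
    x0 = 0.5 * np.ones(3)
    print("barrier   :", toy_es(extreme_barrier(f, lb, ub), x0))
    # For the projection variant, the feasible iterate is np.clip(xbar, lb, ub).
    print("projection:", toy_es(projected(f, lb, ub), x0))
```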
Mesh adaptive direct search with second directional derivative-based Hessian update, 2014, DOI 10.1007/s10589-015-9753-5
"... Abstract The subject of this paper is inequality constrained black-box optimization with mesh adaptive direct search (MADS). The MADS search step can include addi-tional strategies for accelerating the convergence and improving the accuracy of the solution. The strategy proposed in this paper involv ..."
Abstract
- Add to MetaCart
(Show Context)
The subject of this paper is inequality constrained black-box optimization with mesh adaptive direct search (MADS). The MADS search step can include additional strategies for accelerating the convergence and improving the accuracy of the solution. The strategy proposed in this paper involves building a quadratic model of the function and linear models of the constraints. The quadratic model is built by means of a second directional derivative-based Hessian update. The linear terms are obtained by linear regression. The resulting quadratic programming (QP) problem is solved with a dedicated solver and the original functions are evaluated at the QP solution. The proposed search strategy is computationally less expensive than the quadratically constrained QP strategy in the state-of-the-art MADS implementation (NOMAD). The proposed MADS variant (QPMADS) and NOMAD are compared on four sets of test problems. QPMADS outperforms NOMAD on all four of them for all but the smallest computational budgets.
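A minimal sketch of the Hessian-update ingredient: estimate the second directional derivative of f along a (poll) direction by a central difference, then apply a rank-one curvature-matching update to the model Hessian. The particular update formula below is an illustrative choice, not necessarily the update used in the paper, and the surrounding MADS/QP machinery is omitted.

```python
# Toy second-directional-derivative estimate and curvature-matching Hessian update.
import numpy as np

def directional_curvature(f, x, d, h=1e-3):
    """Central-difference estimate of the curvature d^T Hess f(x) d."""
    return (f(x + h * d) - 2.0 * f(x) + f(x - h * d)) / h**2

def update_hessian(H, d, curv):
    """Rank-one update chosen so that d^T H_new d equals the estimated curvature."""
    dTd = float(d @ d)
    correction = (curv - d @ H @ d) / dTd**2
    return H + correction * np.outer(d, d)

if __name__ == "__main__":
    f = lambda z: z[0]**2 + 3 * z[0] * z[1] + 5 * z[1]**2   # true Hessian [[2, 3], [3, 10]]
    x, H = np.array([1.0, -1.0]), np.eye(2)
    d = np.array([1.0, 1.0])
    curv = directional_curvature(f, x, d)    # exact value along d is 18
    H = update_hessian(H, d, curv)
    print(curv, d @ H @ d)                   # the model Hessian now matches the curvature along d
```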