Convergence of mesh adaptive direct search to second-order stationary points

by M A Abramson, C Audet
Venue: SIAM J. Optim.
Results 1 - 10 of 20

Derivative-free optimization: A review of algorithms and comparison of software implementations

by Luis Miguel Rios, Nikolaos V. Sahinidis
Abstract - Cited by 32 (0 self)
Abstract not found

Analysis of Direct Searches for Discontinuous Functions

by L. N. Vicente, et al., 2010
Abstract - Cited by 23 (4 self)
It is known that the Clarke generalized directional derivative is nonnegative along the limit directions generated by directional direct-search methods at a limit point of certain subsequences of unsuccessful iterates, if the function being minimized is Lipschitz continuous near the limit point. In this paper we generalize this result for discontinuous functions using Rockafellar generalized directional derivatives (upper subderivatives). We show that Rockafellar derivatives are also nonnegative along the limit directions of those subsequences of unsuccessful iterates when the function values converge to the function value at the limit point. This result is obtained assuming that the function is directionally Lipschitz with respect to the limit direction. It is also possible under appropriate conditions to establish more insightful results by showing that the sequence of points generated by these methods eventually approaches the limit point along the locally best branch or step function (when the number of steps is equal to two). The results of this paper are presented for constrained optimization and illustrated numerically.
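
For reference, the Clarke generalized directional derivative invoked here has the standard definition below, and the cited nonnegativity result along refining directions can be written compactly (the symbols x_* and d are ours):

f^\circ(x_*; d) \;=\; \limsup_{y \to x_*,\; t \downarrow 0} \frac{f(y + t\,d) - f(y)}{t} \;\ge\; 0
\quad \text{for every refining direction } d \text{ at the limit point } x_* .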

Smoothing and Worst-Case Complexity for Direct-Search Methods in Nonsmooth Optimization

by R. Garmanjani, et al., 2012
Abstract - Cited by 8 (2 self)
In the context of the derivative-free optimization of a smooth objective function, it has been shown that the worst-case complexity of direct-search methods is of the same order as that of steepest descent for derivative-based optimization; more precisely, the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is proportional to the inverse of the threshold squared. Motivated by the lack of such a result in the nonsmooth case, we propose, analyze, and test a class of smoothing direct-search methods for the unconstrained optimization of nonsmooth functions. Given a parameterized family of smoothing functions for the nonsmooth objective function dependent on a smoothing parameter, this class of methods consists of applying a direct-search algorithm for a fixed value of the smoothing parameter until the step size is relatively small, after which the smoothing parameter is reduced and the process is repeated. One can show that the worst-case complexity (or cost) of this procedure is roughly one order of magnitude worse than the one for direct search or steepest descent on smooth functions. The class of smoothing direct-search methods is also shown to enjoy asymptotic global convergence properties. Some preliminary numerical experiments indicate that this approach leads to better values of the objective function, pushing in some cases the optimization further, apparently without an additional cost in the number of function evaluations.
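
The two-level loop described in this abstract can be sketched as follows; this is a minimal Python illustration in which the smoothing function, the coordinate poll set, and the update constants are our own placeholders, not the paper's:

import numpy as np

def smoothing_direct_search(smooth_f, x0, mu0=1.0, alpha0=2.0,
                            mu_min=1e-6, sigma=0.5):
    # Run a coordinate direct search on the smoothed objective smooth_f(x, mu)
    # until the step size alpha is small relative to mu, then shrink the
    # smoothing parameter mu and repeat (schematic two-level loop only).
    x = np.asarray(x0, dtype=float)
    mu, alpha = mu0, alpha0
    dirs = np.vstack([np.eye(x.size), -np.eye(x.size)])    # coordinate poll set
    while mu > mu_min:
        while alpha > mu:                                  # direct search at fixed mu
            better = next((x + alpha * d for d in dirs
                           if smooth_f(x + alpha * d, mu) < smooth_f(x, mu)), None)
            if better is None:
                alpha *= 0.5                               # unsuccessful poll: shrink step
            else:
                x = better
        mu *= sigma                                        # "step size relatively small": reduce mu
        alpha = max(alpha, 2.0 * mu)                       # re-expand the step for the new level
    return x

# Example: smooth the nonsmooth f(x) = sum_i |x_i| by sqrt(x_i^2 + mu^2)
smooth_f = lambda x, mu: np.sum(np.sqrt(x ** 2 + mu ** 2))
print(smoothing_direct_search(smooth_f, [2.0, -1.5]))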

DIRECT SEARCH ALGORITHMS OVER RIEMANNIAN MANIFOLDS ∗

by David W. Dreisigmeyer, 2006
Abstract - Cited by 8 (3 self)
We generalize the Nelder-Mead simplex and LTMADS algorithms, and the frame-based methods, for function minimization to Riemannian manifolds. Examples are given for functions defined on the special orthogonal Lie group SO(n) and the Grassmann manifold G(n, k). Our main examples are applying the generalized LTMADS algorithm to equality-constrained optimization problems and to the Whitney embedding problem for dimensionality reduction of data. A convergence analysis of the frame-based method is also given.
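
One way to picture the generalization is to poll along tangent directions that a retraction maps back onto the manifold. The sketch below does this for SO(3) using the matrix exponential of skew-symmetric directions; it is our own illustration of the idea under that assumption, not the paper's construction:

import numpy as np
from scipy.linalg import expm

def poll_on_son(f, X, step, directions):
    # One poll step on SO(n): move from the current rotation X along
    # skew-symmetric tangent directions via the matrix exponential and
    # accept the first improving trial point (schematic illustration).
    fX = f(X)
    for A in directions:                     # each A is skew-symmetric (A.T == -A)
        trial = X @ expm(step * A)           # the exponential keeps the iterate on SO(n)
        if f(trial) < fX:
            return trial, True
    return X, False

# Example: recover a target rotation in SO(3) by minimizing the Frobenius distance
target = expm(0.7 * np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]))
f = lambda X: np.linalg.norm(X - target)
basis = []
for i in range(3):
    for j in range(i + 1, 3):
        A = np.zeros((3, 3))
        A[i, j], A[j, i] = 1.0, -1.0
        basis.extend([A, -A])                # positive spanning set of skew directions
X, step = np.eye(3), 0.5
for _ in range(60):
    X, success = poll_on_son(f, X, step, basis)
    if not success:
        step *= 0.5                          # unsuccessful poll: refine the step
print(round(f(X), 3))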

Citation Context

...search step. The poll step will look around our current candidate solution for a better point on our mesh. Unlike the Nelder-Mead algorithm, there are general convergence results for the MADS algorithms [1, 3]. Since the convergence results for the MADS algorithms depend on the poll step, the choices for this step of the algorithms are much more restricted. A particular example of the MADS algorithms, the ...

Radial basis function algorithms for large-scale nonlinearly constrained black-box optimization

by Rommel G. Regis - Presented at the 20th International Symposium on Mathematical Programming (ISMP)
Abstract - Cited by 6 (1 self)
Abstract. This paper presents a new algorithm for derivative-free optimization of expensive black-box objective functions subject to expensive black-box inequality constraints. The proposed algorithm, called ConstrLMSRBF, uses radial basis function (RBF) surrogate models and is an extension of the Local Metric Stochastic RBF (LMSRBF) algorithm by Regis and Shoemaker (2007a) that can handle black-box inequality constraints. Previous algorithms for the optimization of expensive functions using surrogate models have mostly dealt with bound constrained problems where only the objective function is expensive, and so, the surrogate models are used to approximate the objective function only. In contrast, ConstrLMSRBF builds RBF surrogate models for the objective function and also for all the constraint functions in each iteration, and uses these RBF models to guide the selection of the next point where the objective and constraint functions will be evaluated. Computational results indicate that ConstrLMSRBF is better than alternative methods on 9 out of 14 test problems and on the MOPTA08 problem from the automotive industry (Jones 2008). The MOPTA08 problem has 124 decision variables and 68 inequality constraints and is considered a large-scale problem in the area of expensive black-box optimization. The alternative methods include a Mesh Adaptive Direct Search (MADS) algorithm (Abramson and Audet 2006, Audet and Dennis 2006) that uses a kriging-based surrogate model, the Multistart LMSRBF algorithm by Regis and Shoemaker
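
The surrogate-guided selection step this abstract describes can be illustrated roughly as below; the plain Gaussian RBF fit and the random-candidate rule are simplified stand-ins for the paper's LMSRBF machinery, and the function names are ours:

import numpy as np

def fit_rbf(X, y, eps=1.0):
    # Fit a plain Gaussian RBF interpolant to the data (X, y); returns a callable.
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-eps * r2) + 1e-10 * np.eye(len(X)), y)
    return lambda Z: np.exp(-eps * ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)) @ w

def next_evaluation_point(X, f_vals, g_vals, lb, ub, n_cand=500, rng=None):
    # Fit RBF surrogates of the objective and of every black-box constraint,
    # generate random candidates, keep those predicted feasible, and return the
    # candidate with the best predicted objective (schematic selection rule).
    rng = rng if rng is not None else np.random.default_rng()
    f_hat = fit_rbf(X, f_vals)
    g_hats = [fit_rbf(X, g_vals[:, j]) for j in range(g_vals.shape[1])]
    cand = rng.uniform(lb, ub, size=(n_cand, X.shape[1]))
    feas = np.all([g(cand) <= 0 for g in g_hats], axis=0)   # predicted feasibility
    pool = cand[feas] if feas.any() else cand                # fall back if none predicted feasible
    return pool[np.argmin(f_hat(pool))]

# Toy usage: 2-D box, objective sum (x - 0.2)^2, one constraint x0 + x1 - 1 <= 0
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(12, 2))
f_vals = ((X - 0.2) ** 2).sum(axis=1)
g_vals = (X.sum(axis=1) - 1.0).reshape(-1, 1)
print(next_evaluation_point(X, f_vals, g_vals, np.zeros(2), np.ones(2), rng=rng))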

Citation Context

...alternative methods: (1) NOMADm-DACE, which is a Mesh Adaptive Direct Search (MADS) algorithm (Abramson and Audet 2006, Audet and Dennis 2006) that uses a DACE surrogate model (Lophaven et al. 2002); (2) MLMSRBF-Penalty, which is the Multistart LMSRBF algorithm by Regis and Shoemaker (2007a) that has been modified to handle black-box constraints via a penalty approach; (3) a sequential quadratic prog...

Parallel Space Decomposition of the Mesh Adaptive Direct Search Algorithm

by Charles Audet, J. E. Dennis, Jr., Sébastien Le Digabel, 2007
Abstract - Cited by 4 (3 self)
This paper describes a Parallel Space Decomposition (PSD) technique for the Mesh Adaptive Direct Search (MADS) algorithm. MADS extends Generalized Pattern Search for constrained nonsmooth optimization problems. The objective here is to solve larger problems more efficiently. The new method (PSD-MADS) is an asynchronous parallel algorithm in which the processes solve problems over subsets of variables. The convergence analysis based on the Clarke calculus is essentially the same as for the MADS algorithm. A practical implementation is described and some numerical results on problems with up to 500 variables illustrate advantages and limitations of PSD-MADS.
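
A much-simplified, synchronous and serial illustration of the space-decomposition idea follows: each block of variables plays the role of one slave process and is polled with the remaining variables held fixed. The real PSD-MADS is asynchronous, parallel, and mesh-based; the names and constants below are ours:

import numpy as np

def subspace_sweep(f, x, step, blocks):
    # One sweep of subspace polls: for each block of variable indices (playing the
    # role of one slave process), try +/- step moves restricted to those coordinates
    # and keep any improvement. Synchronous, serial toy version of the idea.
    x = x.copy()
    for idx in blocks:
        for i in idx:
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                if f(trial) < f(x):
                    x = trial
    return x

# Example: a 500-variable problem with the variables split into 10 blocks
n = 500
f = lambda z: np.sum((z - 1.0) ** 2) + 0.1 * np.sum(np.abs(z))
blocks = np.array_split(np.arange(n), 10)
x, step = np.zeros(n), 1.0
for _ in range(40):
    x_new = subspace_sweep(f, x, step, blocks)
    if f(x_new) >= f(x):
        step *= 0.5                        # no improvement anywhere: refine the step
    x = x_new
print(round(f(x), 3))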

Citation Context

... ∆_0^user > 0, k ← 0
[1] POLL AND SEARCH STEPS
    SEARCH STEP (by other slaves, opportunistic): ask the cache server for x_s ∈ M(∆_k^M) ⊆ M(∆_k^1)
    SINGLE-POLL STEP: construct and evaluate P_k = {x_poll} ⊆ M(∆_k^1) [2]
UPDATES
    determine the type of success of iteration k
    ∆_{k+1}^1 ← τ^{ω_k} ∆_k^1 (cannot be larger than ∆_{k+1}^M)
    x_{k+1} ← (x_s or x_poll or x_k); k ← k + 1
    GOTO [1] if no stopping condition is verified
(Figure 6: Detail...)

Spent Potliner Treatment Process Optimization Using a MADS Algorithm

by Charles Audet, Vincent Béchard, Jamal Chaouki - Les Cahiers du GERAD, 2005
Abstract - Cited by 3 (2 self)
The texts published in the HEC research report series are the sole responsibility of their authors. The publication of these research reports is supported by a grant from the Fonds québécois de la recherche sur la nature et les technologies.

Citation Context

...generalizes that of Hooke and Jeeves [23]. MADS is designed to handle nonlinear and “yes-no” constraints. The convergence analysis of MADS ensures necessary optimality conditions of the first [6] and second [4] order under certain assumptions (this is detailed in Section 3.3). 3.1 Features of the MADS algorithm Most of the work done by this algorithm is devoted to planning, executing, and controlling displacements i...

LM-CMA: an Alternative to L-BFGS for Large Scale Black-box Optimization

by Ilya Loshchilov
Abstract - Cited by 1 (0 self)
The limited memory BFGS method (L-BFGS) of Liu and Nocedal (1989) is often considered to be the method of choice for continuous optimization when first- and/or second-order information is available. However, the use of L-BFGS can be complicated in a black-box scenario where gradient information is not available and therefore must be numerically estimated. The accuracy of this estimation, obtained by finite difference methods, is often problem-dependent and may lead to premature convergence of the algorithm. In this paper, we demonstrate an alternative to L-BFGS, the limited memory Covariance Matrix Adaptation Evolution Strategy (LM-CMA) proposed by Loshchilov (2014). The LM-CMA is a stochastic derivative-free algorithm for numerical optimization of nonlinear, non-convex optimization problems. Inspired by L-BFGS, the LM-CMA samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors allows the memory complexity to be reduced to O(mn), where n is the number of decision variables. The time complexity of sampling one candidate solution is also O(mn), but in practice amounts to only about 25 scalar-vector multiplications. The algorithm has an important invariance property with respect to strictly increasing transformations of the objective function: such transformations do not compromise its ability to approach the optimum. The LM-CMA outperforms the original CMA-ES and its large-scale versions on non-separable ill-conditioned problems by a factor that increases with problem dimension. The invariance properties of the algorithm do not prevent it from demonstrating performance comparable to L-BFGS on non-trivial large-scale smooth and nonsmooth optimization problems.
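
The memory-saving idea can be illustrated schematically: candidate solutions are drawn as x = m + σ·(A z), where A z is reconstructed on the fly from the m stored direction vectors in O(mn) time, so the n×n covariance matrix is never formed. The recursion below is a generic low-rank construction for illustration only, not LM-CMA's exact update rule:

import numpy as np

def apply_implicit_factor(z, V, a=0.9, b=0.4):
    # Apply an implicit Cholesky-like factor to z using only the m stored
    # direction vectors in V (shape m x n): x <- a*x + b * v * (v . x).
    # O(mn) work and memory; no n x n matrix is ever formed.
    # (Generic low-rank construction for illustration, not LM-CMA's exact rule.)
    x = z.copy()
    for v in V:
        x = a * x + b * v * np.dot(v, x)
    return x

rng = np.random.default_rng(1)
n, m, sigma = 1000, 5, 0.3
mean = np.zeros(n)
V = rng.standard_normal((m, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)        # m stored unit direction vectors
candidate = mean + sigma * apply_implicit_factor(rng.standard_normal(n), V)
print(candidate.shape)                               # (1000,) sampled with O(mn) storage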

Nonasymptotic densities for shape reconstruction. Abstract and Applied Analysis

by Sharif Ibrahim, Kevin Sonnanburg, Thomas J. Asaki, Kevin R. Vixie, 2014
Abstract - Cited by 1 (1 self)
In this work we study the problem of reconstructing shapes from simple nonasymptotic densities measured only along shape boundaries. The particular density we study is also known as the integral area invariant and corresponds to the area of a disk centered on the boundary that is also inside the shape. It is easy to show uniqueness when these densities are known for all radii in a neighborhood of r = 0, but much less straightforward when we assume we know it for (almost) only one r > 0. We present variations of uniqueness results for reconstruction of polygons and (a dense set of) smooth curves under certain regularity conditions.
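
As a concrete instance of the density in question: for a boundary point lying on a straight portion of the boundary, the disk of radius r centered there is half inside the shape, so the integral area invariant equals πr²/2. A quick Monte Carlo check of this (our own example, not from the paper):

import numpy as np

# Integral area invariant at a boundary point p: area(B_r(p) ∩ shape).
# Shape: the unit square [0, 1]^2; p = (0.5, 0) lies on a straight edge, so the
# exact value is pi * r^2 / 2 (exactly half of the disk lies inside the square).
rng = np.random.default_rng(0)
r, p = 0.2, np.array([0.5, 0.0])
pts = p + r * rng.uniform(-1.0, 1.0, size=(200_000, 2))      # sample the box around the disk
in_disk = ((pts - p) ** 2).sum(axis=1) <= r ** 2
in_shape = np.all((pts >= 0.0) & (pts <= 1.0), axis=1)
estimate = (2 * r) ** 2 * np.mean(in_disk & in_shape)        # box area x hit fraction
print(round(estimate, 4), round(np.pi * r ** 2 / 2, 4))      # both ~ 0.0628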

Citation Context

...to solve this problem. MADS-class algorithms do not require objective derivative information [2, 3] and converge to second-order stationary points under reasonable conditions on nonsmooth functions [1]. We implement our constraint using the extreme barrier method [4], in which the objective value is set to infinity whenever constraints are not satisfied. We utilize the standard implementation with p...
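
The extreme barrier mentioned in this excerpt amounts to replacing the objective by f_Ω(x) = f(x) when x is feasible and +∞ otherwise, so infeasible points never win a comparison; a one-function sketch (the constraint representation is our own):

import numpy as np

def extreme_barrier(f, constraints):
    # Return f_Omega: f(x) where every constraint c(x) <= 0 holds, +infinity
    # otherwise, so infeasible points are rejected by the usual comparisons.
    def f_omega(x):
        return f(x) if all(c(x) <= 0 for c in constraints) else np.inf
    return f_omega

# Example: restrict the objective to the unit disk ||x|| <= 1
f_omega = extreme_barrier(lambda x: (x[0] - 2.0) ** 2 + x[1] ** 2,
                          [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0])
print(f_omega(np.array([0.5, 0.0])), f_omega(np.array([2.0, 0.0])))   # 2.25 inf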

Mesh adaptive direct search with second directional derivative-based Hessian update (DOI 10.1007/s10589-015-9753-5)

by Jernej Olenšek, Tadej Tuma, Árpád Bűrmen, 2014
Abstract
Abstract The subject of this paper is inequality constrained black-box optimization with mesh adaptive direct search (MADS). The MADS search step can include additional strategies for accelerating the convergence and improving the accuracy of the solution. The strategy proposed in this paper involves building a quadratic model of the function and linear models of the constraints. The quadratic model is built by means of a second directional derivative-based Hessian update. The linear terms are obtained by linear regression. The resulting quadratic programming (QP) problem is solved with a dedicated solver and the original functions are evaluated at the QP solution. The proposed search strategy is computationally less expensive than the quadratically constrained QP strategy in the state-of-the-art MADS implementation (NOMAD). The proposed MADS variant (QPMADS) and NOMAD are compared on four sets of test problems. QPMADS outperforms NOMAD on all four of them for all but the smallest computational budgets.
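
The second directional derivative that drives the Hessian update can be estimated from three collinear function values along a direction d, since f(x + hd) − 2f(x) + f(x − hd) ≈ h² dᵀH d. The snippet below shows only that estimate on a known quadratic; the paper's actual update and model construction are more involved:

import numpy as np

def directional_curvature(f, x, d, h=1e-3):
    # Central-difference estimate of the second directional derivative
    # d^T H(x) d / ||d||^2 from three collinear evaluations; f(x) is typically
    # already available to a MADS-type method, so the extra cost is two values.
    d = np.asarray(d, dtype=float)
    c = (f(x + h * d) - 2.0 * f(x) + f(x - h * d)) / h ** 2
    return c / np.dot(d, d)

# Check against a known quadratic f(x) = 0.5 x^T H x
H = np.array([[4.0, 1.0], [1.0, 3.0]])
f = lambda x: 0.5 * x @ H @ x
d = np.array([1.0, 2.0])
print(directional_curvature(f, np.array([0.3, -0.7]), d), d @ H @ d / (d @ d))   # both ~ 4.0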

Citation Context

...optimization where points outside bounds cannot be evaluated due to the inherent limitations of the underlying simulator. MADS relies on two basic ingredients to deliver certain convergence properties [1, 6, 27] for nonsmooth functions in the Clarke sense [10] and in the Rockafellar sense [23]. The first one is the restriction of the visited points to a discrete set (mesh). This restriction guarantees the ex...
