Results 1–10 of 18
Benchmarking derivative-free optimization algorithms
Cited by 73 (6 self)
We propose data profiles as a tool for analyzing the performance of derivative-free optimization solvers when there are constraints on the computational budget. We use performance and data profiles, together with a convergence test that measures the decrease in function value, to analyze the performance of three solvers on sets of smooth, noisy, and piecewise-smooth problems. Our results provide estimates for the performance difference between these solvers, and show that on these problems the model-based solver tested performs better than the two direct-search solvers tested, even for noisy and piecewise-smooth problems.
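As a concrete illustration of the data-profile idea, the sketch below computes, for one solver, the fraction of problems whose convergence test is met within a budget of alpha * (n_p + 1) function evaluations (alpha "simplex gradients"); the function and variable names are ours, not the paper's.

```python
import numpy as np

def data_profile(evals_to_solve, dims, budgets):
    """Fraction of problems a solver 'solves' within a budget of
    alpha * (n_p + 1) function evaluations, for each alpha in budgets.

    evals_to_solve[p]: evaluations the solver needed before its
        convergence test held on problem p (np.inf if it never held).
    dims[p]: number of variables n_p of problem p.
    """
    evals = np.asarray(evals_to_solve, dtype=float)
    dims = np.asarray(dims, dtype=float)
    units = evals / (dims + 1.0)   # cost in simplex-gradient units
    return np.array([(units <= a).mean() for a in budgets])

# Hypothetical evaluation counts for one solver on four problems:
profile = data_profile([40, 120, np.inf, 15], [2, 5, 3, 2], [10, 50])
```

Plotting such profiles for several solvers over a range of budgets gives the comparison the abstract describes.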
Derivative-free optimization: A review of algorithms and comparison of software implementations
ORBIT: Optimization by radial basis function interpolation in trust-regions
SIAM Journal on Scientific Computing, 2008
Cited by 20 (4 self)
Abstract. We present a new derivative-free algorithm, ORBIT, for unconstrained local optimization of computationally expensive functions. A trust-region framework using interpolating Radial Basis Function (RBF) models is employed. The RBF models considered often allow ORBIT to interpolate nonlinear functions using fewer function evaluations than the polynomial models considered by present techniques. Approximation guarantees are obtained by ensuring that a subset of the interpolation points is sufficiently poised for linear interpolation. The RBF property of conditional positive definiteness yields a natural method for adding additional points. We present numerical results on test problems to motivate the use of ORBIT when only a relatively small number of expensive function evaluations are available. Results on two very different application problems, calibration of a watershed model and optimization of a PDE-based bioremediation plan, are also very encouraging and support ORBIT's effectiveness on black-box functions for which no special mathematical structure is known or available.
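To illustrate the kind of model an RBF trust-region method builds, here is a minimal cubic-RBF interpolant with a linear polynomial tail. This is a generic textbook construction under our own simplifications, not ORBIT's implementation.

```python
import numpy as np

def rbf_interpolant(X, f):
    """Cubic RBF interpolant with a linear polynomial tail.
    X: (m, n) interpolation points; f: (m,) function values.
    The cubic kernel phi(r) = r^3 is conditionally positive definite
    of order 2, so a linear tail makes the saddle system solvable
    whenever the points are poised for linear interpolation."""
    m, n = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = r ** 3
    P = np.hstack([np.ones((m, 1)), X])            # linear tail basis
    A = np.block([[Phi, P], [P.T, np.zeros((n + 1, n + 1))]])
    coef = np.linalg.solve(A, np.concatenate([f, np.zeros(n + 1)]))
    lam, c = coef[:m], coef[m:]

    def model(x):
        rx = np.linalg.norm(X - x, axis=1)
        return rx ** 3 @ lam + c[0] + x @ c[1:]
    return model
```

A trust-region method would minimize `model` inside the trust region and evaluate the expensive function only at the resulting trial point.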
Bayesian Guided Pattern Search for Robust Local Optimization, 2008
Cited by 16 (6 self)
Optimization for complex systems in engineering often involves the use of expensive computer simulation. By combining statistical emulation using treed Gaussian processes with pattern search optimization, we are able to perform robust local optimization more efficiently and effectively than using either method alone. Our approach is based on the augmentation of local search patterns with location sets generated through improvement prediction over the input space. We further develop a computational framework for asynchronous parallel implementation of the optimization algorithm. We demonstrate our methods on two standard test problems and our motivating example of calibrating a circuit device simulator. KEY WORDS: robust local optimization; improvement statistics; response surface methodology; treed Gaussian processes.
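The augmentation idea, adding surrogate-suggested locations to the local search pattern, can be sketched as follows; a separable least-squares quadratic stands in for the paper's treed Gaussian process emulator and improvement prediction (the simplification and all names are ours).

```python
import numpy as np

def surrogate_augmented_poll(f, x, step, history):
    """One poll step in which the 2n compass points are augmented with
    a location suggested by a surrogate fitted to past evaluations.
    The paper ranks candidates by improvement statistics under a treed
    Gaussian process; a separable quadratic fit stands in here,
    purely for illustration."""
    x = np.asarray(x, dtype=float)
    n = x.size
    cand = [x + step * d for d in np.vstack([np.eye(n), -np.eye(n)])]
    if len(history) >= 2 * n + 1:
        X = np.array([p for p, _ in history])
        y = np.array([v for _, v in history])
        # Fit y ~ a + b.x + c.x^2, then poll the surrogate's minimizer,
        # clipped to the current poll box.
        A = np.hstack([np.ones((len(X), 1)), X, X ** 2])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        b, c = coef[1:n + 1], coef[n + 1:]
        with np.errstate(divide="ignore", invalid="ignore"):
            s = np.where(c > 1e-12, -b / (2 * c), x)
        cand.append(np.clip(s, x - step, x + step))
    vals = [float(f(p)) for p in cand]
    history.extend(zip(cand, vals))
    best = int(np.argmin(vals))
    return cand[best], vals[best]
```

A driver would call this repeatedly, shrinking `step` on unsuccessful polls, exactly as in plain pattern search.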
The Optimization Test Environment
Cited by 4 (4 self)
Testing is a crucial part of software development in general, and hence also in mathematical programming. Unfortunately, it is often a time-consuming and not particularly exciting activity. This naturally motivated us to increase the efficiency of testing solvers for optimization problems and to automate as much of the procedure as possible. Keywords: test environment, optimization, solver benchmarking, solver comparison. The testing procedure typically consists of three basic tasks: a) organize test problem sets, also called test libraries; b) solve selected test problems with selected solvers; c) analyze, check, and compare the results. The Test Environment is a graphical user interface (GUI) that enables the user to manage tasks a) and b) interactively, and task c) automatically. The Test Environment is particularly designed for users who seek to 1. adjust solver parameters, 2. compare solvers on single problems, or 3. evaluate solvers on suitable test sets.
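The three tasks a)-c) can be mimicked in a few lines. The problem set, the toy coordinate-descent "solver", and the result table below are all illustrative stand-ins, not the Test Environment's actual interface.

```python
# a) organize a test library: name -> (objective, starting point)
library = {
    "sphere":  (lambda x: sum(v * v for v in x), [2.0, -1.0]),
    "abs_sum": (lambda x: sum(abs(v) for v in x), [1.5, 3.0]),
}

# b) a toy "solver": fixed-step coordinate descent (stand-in only)
def coord_descent(f, x, step, iters=500):
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            for s in (step, -step):
                trial = x[:i] + [x[i] + s] + x[i + 1:]
                if f(trial) < f(x):
                    x = trial
                    break
    return f(x)

solvers = {
    "cd_small": lambda f, x0: coord_descent(f, x0, 0.05),
    "cd_big":   lambda f, x0: coord_descent(f, x0, 0.2),
}

# c) compare: final objective value per (problem, solver) pair
table = {(p, s): solve(f, x0)
         for p, (f, x0) in library.items()
         for s, solve in solvers.items()}
```

A real test environment adds checking (feasibility, claimed optimality) and reporting on top of this loop.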
Convergence analysis of sampling methods for perturbed Lipschitz functions
Pacific Journal of Optimization
Cited by 3 (3 self)
Abstract. In this short note we observe that results of Dennis and Audet extend naturally to a wide variety of deterministic sampling methods. For bound-constrained problems, we show that any method based on coordinate search which includes a sufficiently rich set of directions, for example random directions at each stage of the sampling, will, when applied to Lipschitz continuous problems, have cluster points that satisfy generalized necessary conditions for optimality. The results also apply to the case of more general constraints, including so-called "hidden" or "yes/no" constraints.
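A minimal sketch of the sampling scheme analyzed here, coordinate search enriched with random directions at each stage (parameter names and defaults are ours, not the note's):

```python
import numpy as np

def coordinate_search(f, x0, step=1.0, tol=1e-6, extra=2,
                      budget=10_000, seed=0):
    """Coordinate search whose poll set is enriched with `extra`
    random unit directions each iteration. A successful poll moves
    to the best trial point; an unsuccessful poll halves the step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, n, used = f(x), x.size, 0
    while step > tol and used < budget:
        D = np.vstack([np.eye(n), -np.eye(n),
                       rng.standard_normal((extra, n))])
        D /= np.linalg.norm(D, axis=1, keepdims=True)
        vals = [f(x + step * d) for d in D]
        used += len(D)
        i = int(np.argmin(vals))
        if vals[i] < fx:                 # success: accept the move
            x, fx = x + step * D[i], vals[i]
        else:                            # failure: refine the step
            step /= 2.0
    return x, fx
```

On smooth problems an unsuccessful poll at step `s` certifies that every coordinate of the gradient is small relative to `s`, which is the mechanism behind the cluster-point results.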
Tribes, C.: Reducing the number of function evaluations in mesh adaptive direct search algorithms
SIAM Journal on Optimization, 2014
Cited by 1 (0 self)
Abstract: The Mesh Adaptive Direct Search (MADS) class of algorithms is designed for nonsmooth optimization, where the objective function and constraints are typically computed by launching a time-consuming computer simulation. Each iteration of a MADS algorithm attempts to improve the current best-known solution by launching the simulation at a finite number of trial points. Common implementations of MADS generate 2n trial points at each iteration, where n is the number of variables in the optimization problem. The objective of the present work is to reduce that number. We present an algorithmic framework that reduces the number of simulations to exactly n + 1, without impacting the theoretical guarantees of the convergence analysis. Numerical experiments are conducted for several different contexts; the results suggest that these strategies allow the new algorithms to reach a better solution with fewer function evaluations.
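A generic way to obtain n + 1 poll directions is a minimal positive basis: n linearly independent directions plus the negative of their sum, so that every nonzero vector has positive inner product with at least one direction. The sketch below shows this construction only; it is not NOMAD's mesh-based implementation.

```python
import numpy as np

def minimal_positive_basis(n, seed=0):
    """n + 1 directions forming a minimal positive basis: n linearly
    independent (here random Gaussian, independent almost surely)
    directions plus the negative of their sum. Every nonzero v then
    satisfies max_i d_i . v > 0, so the poll set positively spans
    the space with n + 1 points instead of 2n."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((n, n))
    return np.vstack([B, -B.sum(axis=0)])

D = minimal_positive_basis(3)   # 4 poll directions instead of 2n = 6
```

Positive spanning is exactly what the MADS convergence analysis needs from the poll set, which is why shrinking it to n + 1 points can preserve the guarantees.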
Sobolev seminorm of quadratic functions with applications to derivative-free optimization (submitted)
Institute of Computational Mathematics and Scientific/Engineering Computation, Chinese Academy of Sciences, 2011
Cited by 1 (1 self)
Abstract This paper studies the H^1 Sobolev seminorm of quadratic functions. The research is motivated by the least-norm interpolation that is widely used in derivative-free optimization. We express the H^1 seminorm of a quadratic function explicitly in terms of the Hessian and the gradient when the underlying domain is a ball. The seminorm gives new insights into least-norm interpolation. It clarifies the analytical and geometrical meaning of the objective function in least-norm interpolation. We employ the seminorm to study the extended symmetric Broyden update proposed by Powell. Numerical results show that the new theory helps improve the performance of the update. Apart from the theoretical results, we propose a new method of comparing derivative-free solvers, which is more convincing than merely counting the number of function evaluations. Keywords: Sobolev seminorm · Least-norm interpolation · Derivative-free optimization · Extended symmetric Broyden update. Mathematics Subject Classification (2010): 90C56 · 90C30 · 65K05
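For intuition, an explicit expression of the kind the abstract describes can be recovered by direct integration in the simplest setting, a ball of radius r centered at the origin (our normalization; the paper should be consulted for its exact statement):

```latex
% H^1 seminorm of q(x) = c + g^T x + (1/2) x^T H x on the ball B_r:
|q|_{H^1(\mathcal{B}_r)}^2
  = \int_{\mathcal{B}_r} \|\nabla q(x)\|^2 \,dx
  = \int_{\mathcal{B}_r} \|g + Hx\|^2 \,dx
  = \operatorname{vol}(\mathcal{B}_r)
    \left( \|g\|^2 + \frac{r^2}{n+2}\,\|H\|_F^2 \right),
% using \int_{B_r} x\,dx = 0 and
% \int_{B_r} xx^T dx = \tfrac{r^2}{n+2}\operatorname{vol}(B_r)\, I.
```

So the seminorm combines the gradient and the (Frobenius norm of the) Hessian of the model into a single quantity, which is what connects it to least-norm interpolation.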
Active-set strategy in Powell's method for optimization without derivatives
Abstract. In this article we present an algorithm for solving bound constrained optimization problems without derivatives, based on Powell's method [38] for derivative-free optimization. First we consider the unconstrained optimization problem. At each iteration a quadratic interpolation model of the objective function is constructed around the current iterate, and this model is minimized to obtain a new trial point. The whole process is embedded within a trust-region framework. Our algorithm uses the infinity norm instead of the Euclidean norm, and we solve a box-constrained quadratic subproblem using an active-set strategy to explore the faces of the box. Therefore, the algorithm extends easily to bound constrained optimization. We compare our implementation with NEWUOA and BOBYQA, Powell's algorithms for unconstrained and bound constrained derivative-free optimization, respectively. Numerical experiments show that, in general, our algorithm requires fewer function evaluations than Powell's algorithms.
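The geometric observation behind this design is that an infinity-norm trust region intersected with bound constraints is again a box. A sketch of that observation follows, with projected gradient descent standing in for the authors' active-set subproblem solver (the names and the solver choice are ours).

```python
import numpy as np

def box_tr_bounds(x, delta, lo, hi):
    """With an infinity-norm trust region, the feasible set
    {y : lo <= y <= hi, max_i |y_i - x_i| <= delta} is itself a box,
    so a box-constrained quadratic solver handles the trust-region
    subproblem directly."""
    return np.maximum(lo, x - delta), np.minimum(hi, x + delta)

def box_quadratic_min(g, H, l, u, s0, iters=500):
    """Minimize g.s + 0.5 s'Hs over the box [l, u] by projected
    gradient descent, a simple stand-in for an active-set strategy."""
    s = np.clip(np.asarray(s0, dtype=float), l, u)
    step = 1.0 / (np.linalg.norm(H, 2) + 1e-12)  # 1/L step, convex H
    for _ in range(iters):
        s = np.clip(s - step * (g + H @ s), l, u)
    return s
```

An active-set method would instead fix the variables at the box faces predicted to be active and solve the reduced problem exactly, but the feasible region it works over is the same box.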