Results 1–10 of 63
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
 SIAM Review, Vol. 45, No. 3, pp. 385–482
, 2003
Abstract

Cited by 237 (15 self)
Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
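The simplest member of this family, compass (coordinate) search, illustrates the basic pattern: poll a fixed set of directions around the current point and shrink the step when no poll point improves. A minimal Python sketch of that idea (an illustration only, not the unifying framework of the paper; the quadratic test function is hypothetical):

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal compass (coordinate) search: poll +/- each coordinate
    direction; halve the step when no poll point improves."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = len(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])  # positive spanning set
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for d in directions:
            trial = x + step * d
            ft = f(trial)
            if ft < fx:          # accept simple decrease
                x, fx = trial, ft
                improved = True
                break
        if not improved:
            step *= 0.5          # unsuccessful poll: refine the step
    return x, fx

# Usage: minimize a smooth quadratic without any derivative information.
xmin, fmin = compass_search(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2, [5.0, 5.0])
```

Note that the method uses only function comparisons, never gradients, which is exactly the property that makes direct search attractive for the problems the review describes.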
A Rigorous Framework for Optimization of Expensive Functions by Surrogates
, 1998
Abstract

Cited by 204 (15 self)
The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which direct application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is to obtain convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example.
Mesh adaptive direct search algorithms for constrained optimization
 SIAM J. Optim.
, 2004
Abstract

Cited by 146 (15 self)
This paper introduces the Mesh Adaptive Direct Search (MADS) class of algorithms for nonlinear optimization. MADS extends the Generalized Pattern Search (GPS) class by allowing local exploration, called polling, in a dense set of directions in the space of optimization variables. This means that under certain hypotheses, including a weak constraint qualification due to Rockafellar, MADS can treat constraints by the extreme barrier approach of setting the objective to infinity for infeasible points and treating the problem as unconstrained. The main GPS convergence result is to identify limit points where the Clarke generalized derivatives are nonnegative in a finite set of directions, called refining directions. Although in the unconstrained case, nonnegative combinations of these directions span the whole space, the fact that there can only be finitely many GPS refining directions limits rigorous justification of the barrier approach to finitely many constraints for GPS. The MADS class of algorithms extends this result; the set of refining directions may even be dense in R^n, although we give an example where it is not. We present an implementable instance of MADS, and we illustrate and compare it with GPS on some test problems. We also illustrate the limitation of our results with examples.
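The extreme barrier approach mentioned in the abstract is simple to sketch: infeasible points are assigned the value +∞, so an unconstrained direct search simply never accepts them. A minimal Python illustration (the objective and the unit-box constraint here are hypothetical, chosen only to show the wrapper):

```python
import math

def extreme_barrier(f, feasible):
    """Wrap an objective so infeasible points evaluate to +inf,
    letting an unconstrained direct search handle the constraints."""
    def f_barrier(x):
        return f(x) if feasible(x) else math.inf
    return f_barrier

# Usage: constrain the search to the unit box [0, 1]^n.
f = lambda x: (x[0] - 2)**2
fb = extreme_barrier(f, lambda x: all(0.0 <= xi <= 1.0 for xi in x))
fb([0.5])  # 2.25 (feasible: the true objective value)
fb([3.0])  # inf  (infeasible: rejected by any descent-based search)
```

Because the barrier only compares function values, it needs no constraint derivatives, which is why it fits naturally inside pattern search methods.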
Pattern Search Methods for Linearly Constrained Minimization
, 2000
Abstract

Cited by 75 (6 self)
We extend pattern search methods to linearly constrained minimization. We develop a general class of feasible point pattern search algorithms and prove global convergence to a Karush–Kuhn–Tucker point. As in the case of unconstrained minimization, pattern search methods for linearly constrained problems accomplish this without explicit recourse to the gradient or the directional derivative of the objective. Key to the analysis of the algorithms is the way in which the local search patterns conform to the geometry of the boundary of the feasible region.
Posing Polygonal Objects in the Plane by Pushing
 In IEEE International Conference on Robotics and Automation
, 1992
Abstract

Cited by 70 (5 self)
This paper studies the use of pushing actions with a fence to orient and translate objects in the plane. It describes a planner which is guaranteed to construct a sequence of pushing actions to move any polygonal object from any initial configuration to any final configuration. This planner, which utilizes an analysis of the mechanics of pushing an object, generates open-loop plans which do not require feedback sensing. These plans are guaranteed to succeed provided certain physical assumptions are met. We present results of experiments conducted to demonstrate the generated plans.

1 Introduction

The manipulation of objects restricted to motions in the plane is important in cases where the object cannot be grasped or it is more efficient to move the object in the plane. An example is planar parts transfer, where parts are to be moved from one position to another in the plane, often with a change in orientation. In this paper, we develop a method to find open-loop plans, which do not r...
Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems
 SIAM Journal on Optimization
, 2004
Abstract

Cited by 55 (6 self)
A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. This class combines and extends the Audet–Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints. In generalizing existing algorithms, new theoretical convergence results are presented that reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are required to apply the algorithm, a hierarchy of theoretical convergence results based on the Clarke calculus is given, in which local smoothness dictates what can be proved about certain limit points generated by the algorithm. To demonstrate its usefulness, the algorithm is applied to the design of a load-bearing thermal insulation system. We believe this is the first algorithm with provable convergence results to directly target this class of problems.
A particle swarm pattern search method for bound constrained nonlinear optimization
, 2006
Rank ordering and positive bases in pattern search algorithms
 Institute for Computer
, 1996
Abstract

Cited by 39 (11 self)
We present two new classes of pattern search algorithms for unconstrained minimization: the rank ordered and the positive basis pattern search methods. These algorithms can nearly halve the worst case cost of an iteration compared to the classical pattern search algorithms. The rank ordered pattern search methods are based on a heuristic for approximating the direction of steepest descent, while the positive basis pattern search methods are motivated by a generalization of the geometry characteristic of the patterns of the classical methods. We describe the new classes of algorithms and present the attendant global convergence analysis.
Using Sampling and Simplex Derivatives in Pattern Search Methods (Complete Numerical Results)
Abstract

Cited by 32 (4 self)
In this paper, we introduce a number of ways of making pattern search more efficient by reusing previous evaluations of the objective function, based on the computation of simplex derivatives (e.g., simplex gradients). At each iteration, one can attempt to compute an accurate simplex gradient by identifying a sampling set of previously evaluated points with good geometrical properties. This can be done using only past successful iterates or by considering all past function evaluations. The simplex gradient can then be used to reorder the evaluations of the objective function associated with the directions used in the poll step or to update the mesh size parameter according to a sufficient decrease criterion, neither of which requires new function evaluations. A search step can also be tried along the negative simplex gradient at the beginning of the current pattern search iteration. We present these procedures in detail and apply them to a set of problems from the CUTEr collection. Numerical results show that these procedures can significantly enhance the practical performance of pattern search methods.
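The simplex gradient idea above can be sketched in a few lines: fit a gradient estimate to previously evaluated points by least squares, then poll the most descent-like directions first. A Python illustration (a simplified sketch under stated assumptions, not the paper's exact procedure; the quadratic test function is hypothetical):

```python
import numpy as np

def simplex_gradient(points, fvals):
    """Least-squares simplex gradient from previously evaluated points.
    points: (m+1, n) array of sample points; fvals: f at each point."""
    S = points[1:] - points[0]      # displacements from the base point
    df = fvals[1:] - fvals[0]       # corresponding function differences
    g, *_ = np.linalg.lstsq(S, df, rcond=None)
    return g

def reorder_poll(directions, g):
    """Poll promising directions first: sort by ascending d . g, so
    approximate descent directions are evaluated before ascent ones."""
    return sorted(directions, key=lambda d: float(np.dot(d, g)))

# Usage on f(x) = x0^2 + 2*x1^2 near (1, 1), where grad f = (2*x0, 4*x1).
pts = np.array([[1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])
fv = np.array([p[0]**2 + 2 * p[1]**2 for p in pts])
g = simplex_gradient(pts, fv)       # close to (2, 4)
dirs = [np.array(d, dtype=float) for d in ([1, 0], [-1, 0], [0, 1], [0, -1])]
ordered = reorder_poll(dirs, g)     # -e2 (steepest estimated descent) first
```

No new function evaluations are needed for the reordering itself, which is the source of the efficiency gain the abstract describes.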