Results 1–8 of 8
Worst case complexity of direct search, 2010
Abstract

Cited by 33 (4 self)
In this paper we prove that direct search of directional type shares the worst case complexity bound of steepest descent when sufficient decrease is imposed using a quadratic function of the step size parameter. This result is proved under smoothness of the objective function and using a framework of the type of GSS (generating set search). We also discuss the worst case complexity of direct search when only simple decrease is imposed and when the objective function is nonsmooth.
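The sufficient-decrease mechanism described in this abstract — accepting a step only if it improves the objective by at least a quadratic function of the step size — can be sketched in a few lines. The following is a minimal illustration in the GSS spirit, not the authors' exact method: the plus/minus coordinate directions, the forcing constant `c`, and the expansion/contraction factors are all illustrative choices.

```python
import numpy as np

def direct_search(f, x0, alpha0=1.0, c=1e-4, tol=1e-6, max_iter=10_000):
    """Directional direct search with a quadratic sufficient-decrease
    condition: a trial step is accepted only if it improves f by at
    least rho(alpha) = c * alpha**2. Illustrative sketch only; the
    directions and update factors are assumptions, not the paper's.
    """
    x = np.asarray(x0, dtype=float)
    alpha = alpha0
    n = x.size
    # Plus/minus coordinate directions form a positive spanning set.
    D = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        if alpha < tol:
            break
        fx = f(x)
        for d in D:
            trial = x + alpha * d
            if f(trial) < fx - c * alpha**2:  # sufficient decrease
                x = trial
                alpha *= 2.0   # successful poll: expand the step size
                break
        else:
            alpha *= 0.5       # unsuccessful poll: contract
    return x

# Usage: minimize a smooth convex function without any derivatives.
x_star = direct_search(lambda x: (x[0] - 1)**2 + 2 * x[1]**2, np.zeros(2))
```

The quadratic forcing function is exactly what ties the method to the steepest-descent complexity bound discussed above: simple decrease (`f(trial) < fx`) alone would not suffice.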
Penalty Methods with Stochastic Approximation for Stochastic Nonlinear Programming, 2013
A Derivative-Free CoMirror Algorithm for Convex Optimization, 2014
Abstract
We consider the minimization of a nonsmooth convex function over a compact convex set subject to a nonsmooth convex constraint. We work in the setting of derivative-free optimization (DFO), assuming that the objective and constraint functions are available through a black-box that provides function values for a lower-C2 representation of the functions. Our approach is based on a DFO adaptation of the CoMirror algorithm [6]. Algorithmic convergence hinges on the ability to accurately approximate subgradients of lower-C2 functions, which we prove is possible through linear interpolation. We show that, if the sampling radii for linear interpolation are properly selected, then the new algorithm has the same convergence rate as the original gradient-based algorithm. This provides a novel global rate-of-convergence result for nonsmooth convex DFO with nonsmooth convex constraints. We conclude with numerical testing that demonstrates the practical feasibility of the algorithm and some directions for further research.
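The gradient of a linear interpolant over sampled points — a simplex gradient — is the standard device behind the subgradient approximation mentioned in this abstract. The sketch below, assuming a forward-difference simplex over coordinate directions, is illustrative only; the function name `simplex_gradient` and the radius value are not from the paper.

```python
import numpy as np

def simplex_gradient(f, x, radius=1e-5):
    """Estimate a (sub)gradient of f at x from function values only, by
    linear interpolation over the points x and x + radius * e_i. For a
    smooth f this reduces to forward differences; the sampling radius
    governs the accuracy, echoing the radius-selection issue above.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    fx = f(x)
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = radius
        g[i] = (f(x + e) - fx) / radius
    return g

# Usage: at (2, 0) the true gradient of x^2 + 3y is (4, 3).
g = simplex_gradient(lambda x: x[0]**2 + 3 * x[1], np.array([2.0, 0.0]))
```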
Worst Case Complexity of Direct Search under Convexity, 2013
Abstract
In this paper we prove that the broad class of direct-search methods of directional type, based on imposing sufficient decrease to accept new iterates, exhibits the same global rate or worst case complexity bound as the gradient method for the unconstrained minimization of a convex and smooth function. More precisely, it will be shown that the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is at most proportional to the inverse of the threshold. Our result is slightly less general than Nesterov’s for the gradient method, in the sense that we require more than just convexity of the objective function and boundedness of the distance from the initial iterate to the solution set. Our additional condition can, however, be satisfied in several scenarios, such as strong or uniform convexity, boundedness of the initial level set, or boundedness of the distance from the initial contour set to the solution set. It is a mild price to pay for deriving such a global rate for zero-order methods.
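The bound stated above — iterations proportional to the inverse of the gradient threshold — can be observed empirically with a plain fixed-step gradient method. The example objective and step size below are illustrative assumptions, not taken from the paper: on f(x) = x⁴/4 (convex but not strongly convex) the iteration count grows as the threshold shrinks.

```python
import numpy as np

def iterations_to_gradient_tol(grad, x0, step, eps, max_iter=100_000):
    """Count fixed-step gradient-descent iterations until the gradient
    norm drops below eps. Used only to illustrate the sublinear
    worst-case behaviour discussed in the abstract above.
    """
    x = np.asarray(x0, dtype=float)
    k = 0
    while np.linalg.norm(grad(x)) >= eps and k < max_iter:
        x = x - step * grad(x)
        k += 1
    return k

# f(x) = x^4 / 4 is convex and smooth; its gradient is x^3.
grad = lambda x: np.array([x[0]**3])
k1 = iterations_to_gradient_tol(grad, [1.0], 0.1, 1e-2)
k2 = iterations_to_gradient_tol(grad, [1.0], 0.1, 5e-3)  # tighter threshold, more iterations
```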
Pré-Publicações do Departamento de Matemática, Universidade de Coimbra, Preprint Number 15-27: A Second-Order Globally Convergent Direct-Search Method and its Worst-Case Complexity
Abstract
Direct-search algorithms form one of the main classes of algorithms for smooth unconstrained derivative-free optimization, due to their simplicity and their well-established convergence results. They proceed by iteratively looking for improvement along some vectors or directions. In the presence of smoothness, first-order global convergence comes from the ability of the vectors to approximate the steepest descent direction, which can be quantified by a first-order criticality (cosine) measure. The use of a set of vectors with a positive cosine measure together with the imposition of a sufficient decrease condition to accept new iterates leads to a convergence result as well as a worst-case complexity bound. In this paper, we present a second-order study of a general class of direct-search methods. We start by proving a weak second-order convergence result related to a criticality measure defined along the directions used throughout the iterations. Extensions of this result to obtain a true second-order optimality one are discussed, one possibility being a method using approximate Hessian eigenvectors as directions (which is proved to be truly second-order globally convergent). Numerically guaranteeing such convergence can be rather expensive, as indicated by the worst-case complexity analysis provided in this paper, but turns out to be appropriate for some pathological examples.
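One concrete way to realize the "approximate Hessian eigenvectors as directions" idea mentioned above is to build a finite-difference Hessian and poll along the eigenvector of its smallest eigenvalue, which captures negative curvature at a saddle point. The sketch below is a hypothetical illustration of that ingredient, not the paper's algorithm; the function name and differencing step are assumptions.

```python
import numpy as np

def min_curvature_direction(f, x, h=1e-4):
    """Approximate the Hessian of f at x by finite differences and
    return the eigenvector of its smallest eigenvalue, i.e. the most
    negative curvature direction. Illustrative sketch only.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.empty((n, n))
    fx = f(x)
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + fx) / h**2
    H = 0.5 * (H + H.T)          # symmetrize against rounding noise
    w, V = np.linalg.eigh(H)     # eigenvalues in ascending order
    return V[:, 0]               # eigenvector of the smallest eigenvalue

# At the saddle of f(x, y) = x^2 - y^2 the negative-curvature
# direction is along the y axis.
d = min_curvature_direction(lambda x: x[0]**2 - x[1]**2, np.zeros(2))
```

Polling along both `d` and `-d` is what lets a direct-search method escape such a saddle, at the cost of the extra function evaluations counted in the worst-case analysis.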
Worst Case Complexity of Direct Search under Convexity, 2014
Abstract
In this paper we prove that the broad class of direct-search methods of directional type, based on imposing sufficient decrease to accept new iterates, exhibits the same worst case complexity bound and global rate as the gradient method for the unconstrained minimization of a convex and smooth function. More precisely, it will be shown that the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is at most proportional to the inverse of the threshold. It will also be shown that the absolute error in the function values decays at a sublinear rate proportional to the inverse of the iteration counter. In addition, we prove that the sequence of absolute errors of function values and iterates converges r-linearly in the strongly convex case.
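The r-linear (geometric) error decay claimed for the strongly convex case can be seen on a one-dimensional quadratic: each gradient step shrinks the error by a constant factor. The numbers below are purely illustrative, not from the paper.

```python
# Gradient descent on the strongly convex f(x) = x^2 / 2 (gradient x)
# with step size 0.5: the absolute error |x_k| decays geometrically,
# i.e. r-linearly, with constant ratio 0.5. Illustrative example only.
x = 1.0
errors = []
for _ in range(20):
    errors.append(abs(x))
    x -= 0.5 * x  # gradient step: x - step * grad f(x)
ratios = [errors[k + 1] / errors[k] for k in range(19)]
```

Contrast this with the merely sublinear O(1/k) decay guaranteed in the general convex case described above.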