Results 1–10 of 16
REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS
, 2011
Abstract

Cited by 16 (5 self)
We present the formulation and analysis of a new sequential quadratic programming (SQP) method for general nonlinearly constrained optimization. The method pairs a primal-dual generalized augmented Lagrangian merit function with a flexible line search to obtain a sequence of improving estimates of the solution. This function is a primal-dual variant of the augmented Lagrangian proposed by Hestenes and Powell in the early 1970s. A crucial feature of the method is that the QP subproblems are convex, but formed from the exact second derivatives of the original problem. This is in contrast to methods that use a less accurate quasi-Newton approximation. Additional benefits of this approach include the following: (i) each QP subproblem is regularized; (ii) the QP subproblem always has a known feasible point; and (iii) a projected gradient method may be used to identify the QP active set when far from the solution.
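For equality constraints $c(x)=0$, the classical Hestenes–Powell augmented Lagrangian and one common form of its primal-dual variant can be sketched as follows (the scaling $\nu$ and the grouping of terms vary by author; this is an illustrative form, not necessarily the exact one used in the paper):

```latex
L_A(x;\lambda_e,\mu) \;=\; f(x) - c(x)^{T}\lambda_e + \frac{1}{2\mu}\,\|c(x)\|^{2},
\qquad
M_\nu(x,\lambda;\lambda_e,\mu) \;=\; L_A(x;\lambda_e,\mu)
  + \frac{\nu}{2\mu}\,\bigl\|c(x) + \mu(\lambda - \lambda_e)\bigr\|^{2}.
```

The extra term penalizes inconsistency between the dual estimate $\lambda$ and the multiplier implied by the constraint residual, which is what makes the merit function primal-dual rather than purely primal.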
SHARP PRIMAL SUPERLINEAR CONVERGENCE RESULTS FOR SOME NEWTONIAN METHODS FOR CONSTRAINED OPTIMIZATION
, 2009
Abstract

Cited by 9 (8 self)
As is well known, superlinear or quadratic convergence of the primal-dual sequence generated by an optimization algorithm does not, in general, imply superlinear convergence of the primal part. Primal convergence, however, is often of particular interest. For the sequential quadratic programming (SQP) algorithm, local primal-dual quadratic convergence can be established under the assumptions of uniqueness of the Lagrange multiplier associated with the solution and the second-order sufficient condition. At the same time, previous primal superlinear convergence results for SQP required strengthening the first assumption to the linear independence constraint qualification. In this paper, we show that this strengthening of assumptions is actually not necessary. Specifically, we show that once primal-dual convergence is assumed or already established, for a primal superlinear rate one only needs a certain error bound estimate. This error bound holds, for example, under the second-order sufficient condition, which is needed for primal-dual local analysis in any case. Moreover, in some situations even second-order sufficiency can be relaxed to the weaker assumption that the multiplier in question is noncritical. Our study is performed for a rather general perturbed SQP framework, which covers, in addition to SQP and quasi-Newton SQP, some other algorithms as well. For example, as a byproduct, …
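For an equality-constrained problem $\min f(x)$ subject to $h(x)=0$, with multiplier set $\mathcal M(\bar x)$ at a solution $\bar x$, the error bound in question takes the following generic form (a sketch of the standard statement, not the paper's exact formulation):

```latex
\|x-\bar x\| + \operatorname{dist}\bigl(\lambda,\,\mathcal M(\bar x)\bigr)
\;\le\; \kappa \left\| \begin{pmatrix}
\nabla f(x) + \nabla h(x)^{T}\lambda \\[2pt] h(x)
\end{pmatrix} \right\|
```

for all $(x,\lambda)$ near the primal-dual solution set: the distance to the solution set is controlled by the natural residual of the KKT system.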
A NOTE ON UPPER LIPSCHITZ STABILITY, ERROR BOUNDS, AND CRITICAL MULTIPLIERS FOR LIPSCHITZ-CONTINUOUS KKT SYSTEMS
, 2012
Abstract

Cited by 8 (5 self)
We prove a new local upper Lipschitz stability result and the associated local error bound for solutions of parametric Karush–Kuhn–Tucker systems corresponding to variational problems with Lipschitzian base mappings and constraints possessing Lipschitzian derivatives, without any constraint qualifications. This property is equivalent to the notion of noncriticality of the Lagrange multiplier associated with the primal solution, appropriately extended to this nonsmooth setting; noncriticality is weaker than second-order sufficiency. All this extends several results previously known only for optimization problems with twice differentiable data, or under some constraint qualifications. In addition, our results are obtained in the more general variational setting.
GLOBAL CONVERGENCE OF AUGMENTED LAGRANGIAN METHODS APPLIED TO OPTIMIZATION PROBLEMS WITH DEGENERATE CONSTRAINTS, INCLUDING PROBLEMS WITH COMPLEMENTARITY CONSTRAINTS
, 2012
Abstract

Cited by 7 (2 self)
We consider global convergence properties of augmented Lagrangian methods applied to problems with degenerate constraints, with a special emphasis on mathematical programs with complementarity constraints (MPCC). In the general case, we show convergence to stationary points of the problem under an error bound condition for the feasible set (which is weaker than constraint qualifications), assuming that the iterates have some modest features of approximate local minimizers of the augmented Lagrangian. For MPCC, we first argue that even weak forms of general constraint qualifications that are suitable for convergence of augmented Lagrangian methods, such as the recently proposed relaxed positive linear dependence condition, should not be expected to hold, and thus special analysis is needed. We next obtain a rather complete picture, showing that under the MPCC-linear independence constraint qualification (the usual assumption in this context) accumulation points of the iterates are guaranteed to be C-stationary for MPCC (better than weakly stationary), but in general need not be M-stationary (hence, not strongly stationary either). However, strong stationarity is guaranteed if the generated dual sequence is bounded, which we show to be the typical …
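The core method-of-multipliers iteration that these convergence results concern can be sketched as follows for a single equality constraint. This is a generic textbook loop, not the paper's safeguarded variant for degenerate constraints; the toy problem, penalty value, and step sizes are illustrative choices:

```python
def augmented_lagrangian(f_grad, c, c_grad, x0, lam0=0.0, rho=10.0,
                         outer_iters=50, inner_iters=2000, step=1e-2):
    """Basic augmented Lagrangian (method of multipliers) loop for a single
    equality constraint c(x) = 0, using the Hestenes-Powell multiplier update
    lam <- lam - rho * c(x).  The inner minimization of the augmented
    Lagrangian f(x) - lam*c(x) + (rho/2)*c(x)**2 is plain gradient descent."""
    x, lam = list(x0), lam0
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            cv = c(x)
            # gradient of the augmented Lagrangian in x:
            # grad f - (lam - rho*c(x)) * grad c
            g = [fg - (lam - rho * cv) * cg
                 for fg, cg in zip(f_grad(x), c_grad(x))]
            x = [xi - step * gi for xi, gi in zip(x, g)]
        lam = lam - rho * c(x)  # multiplier update
    return x, lam

# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0,
# with solution (0.5, 0.5) and multiplier lam* = 1.
x, lam = augmented_lagrangian(
    f_grad=lambda x: [2 * x[0], 2 * x[1]],
    c=lambda x: x[0] + x[1] - 1.0,
    c_grad=lambda x: [1.0, 1.0],
    x0=[0.0, 0.0],
)
```

For this convex toy problem the iteration drives both the constraint violation and the dual error to zero; the abstract's point is precisely that for degenerate constraints such clean behavior requires a more delicate analysis.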
Stabilized SQP revisited
 MATH. PROGRAM., SER. A
, 2010
Abstract

Cited by 2 (1 self)
The stabilized version of the sequential quadratic programming algorithm (sSQP) was developed in order to achieve superlinear convergence in situations when the Lagrange multipliers associated with a solution are not unique. Within the framework of Fischer (Math Program 94:91–124, 2002), the key to local superlinear convergence of sSQP lies in the following two properties: upper Lipschitzian behavior of solutions of the Karush–Kuhn–Tucker (KKT) system under canonical perturbations, and local solvability of sSQP subproblems with the associated primal-dual step being of the order of the distance from the current iterate to the solution set of the unperturbed KKT system. According to Fernández and Solodov (Math Program 125:47–73, 2010), both of these properties are ensured by the second-order sufficient optimality condition (SOSC) without any constraint qualification assumptions. In this paper, we state precise relationships between the upper Lipschitzian property of solutions of KKT systems, error bounds for KKT systems, the notion of critical Lagrange multipliers (a subclass of multipliers that violate SOSC in a very special way), the second-order necessary condition for optimality, and solvability of sSQP subproblems. Moreover, for the problem with equality constraints only, we prove superlinear convergence of sSQP under the assumption that the dual starting point is close to a noncritical multiplier.
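For equality constraints $h(x)=0$, the sSQP subproblem at $(x_k,\lambda_k)$ with stabilization parameter $\sigma_k>0$ is commonly written in min-max form (an illustrative textbook statement; $H_k$ denotes the Hessian of the Lagrangian or an approximation of it):

```latex
\min_{d}\ \max_{\lambda}\;
\nabla f(x_k)^{T}d + \tfrac12\, d^{T}H_k\, d
+ \lambda^{T}\bigl(h(x_k)+\nabla h(x_k)\,d\bigr)
- \frac{\sigma_k}{2}\,\|\lambda-\lambda_k\|^{2}
```

The proximal term in $\lambda$ keeps the subproblem well posed even when multipliers are nonunique; letting $\sigma_k \to 0$ recovers the classical SQP subproblem.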
A REGULARIZED SQP METHOD WITH CONVERGENCE TO SECOND-ORDER OPTIMAL POINTS
, 2013
Abstract

Cited by 1 (0 self)
Regularized and stabilized sequential quadratic programming methods are two classes of sequential quadratic programming (SQP) methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that provides a strong connection between augmented Lagrangian methods and stabilized SQP methods. The method is formulated as a regularized SQP method with an implicit safeguarding strategy based on minimizing a bound-constrained primal-dual augmented Lagrangian. Each iteration involves the solution of a regularized quadratic program (QP) that is equivalent to a strictly convex bound-constrained QP based on minimizing a quadratic model of the augmented Lagrangian. The solution of the QP subproblem defines a descent direction for a flexible line search that provides a sufficient decrease in a primal-dual augmented Lagrangian merit function. Under certain conditions, the method is guaranteed to converge to a point satisfying the first-order Karush–Kuhn–Tucker (KKT) conditions. In this paper, the regularized SQP method is extended to allow convergence to points satisfying certain second-order KKT conditions.
Adaptive Augmented Lagrangian Methods for Large-Scale Equality Constrained Optimization
, 2012
Abstract
We propose an augmented Lagrangian algorithm for solving large-scale equality constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its attractive local convergence behavior and its ability to be implemented matrix-free. This latter strength is particularly important due to interest in employing augmented Lagrangian algorithms for solving large-scale optimization problems. We focus on a trust region algorithm, but also propose a line search algorithm that employs the same adaptive penalty parameter updating scheme. We provide theoretical results related to the global convergence behavior of our algorithms and illustrate by a set of numerical experiments that they outperform traditional augmented Lagrangian methods in terms of critical performance measures.
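The conventional safeguarded penalty update that such adaptive schemes refine can be sketched as follows; the threshold and growth factors are illustrative defaults, not the paper's steering rule:

```python
def update_penalty(rho, viol, viol_prev, target=0.25, growth=10.0):
    """Classical safeguard: if the constraint violation did not shrink to the
    target fraction of its previous value, increase the penalty parameter;
    otherwise leave it unchanged.  (A simplified stand-in for the adaptive
    steering rule described in the abstract.)"""
    if viol > target * viol_prev:
        return growth * rho
    return rho

rho = update_penalty(rho=1.0, viol=0.9, viol_prev=1.0)  # insufficient progress
```

The classical rule only reacts after an outer iteration has finished; the adaptive schemes referred to above instead steer the penalty parameter during the iteration to avoid wasted inner solves.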
Local convergence of the method of multipliers for variational and optimization problems under the noncriticality assumption
, 2013
Abstract
Local convergence of the method of multipliers for variational and optimization problems under the noncriticality assumption
SOME COMPOSITE-STEP CONSTRAINED OPTIMIZATION METHODS INTERPRETED VIA THE PERTURBED SEQUENTIAL QUADRATIC PROGRAMMING FRAMEWORK
, 2013
Abstract
We consider the inexact restoration and the composite-step sequential quadratic programming (SQP) methods, and relate them to the so-called perturbed SQP framework. In particular, iterations of the methods in question are interpreted as certain structured perturbations of the basic SQP iterations. This gives a different insight into the local behaviour of those algorithms, as well as improved or different local convergence and rate-of-convergence results.
Key words: sequential quadratic programming; inexact restoration; perturbed SQP; composite-step SQP; superlinear convergence.
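Schematically, in the perturbed SQP framework (for equality constraints $h(x)=0$) one step of the method under study is written as the exact SQP/Newton system for the KKT conditions plus structured perturbation terms (a sketch with illustrative notation, not the paper's exact statement):

```latex
\begin{aligned}
\nabla f(x_k) + H_k\, d_k + \nabla h(x_k)^{T}\lambda_{k+1} &= \omega_k,\\
h(x_k) + \nabla h(x_k)\, d_k &= \rho_k.
\end{aligned}
```

The superlinear rate of the underlying SQP iteration is preserved when the perturbations $(\omega_k,\rho_k)$ vanish fast enough, i.e., are of order $o$ of the distance from $(x_k,\lambda_k)$ to the primal-dual solution set.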