Results 1–10 of 22
STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING FOR OPTIMIZATION AND A STABILIZED NEWTON-TYPE METHOD FOR VARIATIONAL PROBLEMS WITHOUT CONSTRAINT QUALIFICATIONS
2007
Cited by 24 (14 self)

Abstract:
The stabilized version of the sequential quadratic programming algorithm (sSQP) was developed to achieve fast convergence despite possible degeneracy of the constraints of optimization problems, when the Lagrange multipliers associated with a solution are not unique. Superlinear convergence of sSQP had previously been established under the second-order sufficient condition for optimality (SOSC) and the Mangasarian-Fromovitz constraint qualification, or under the strong second-order sufficient condition for optimality (in that case, without constraint qualification assumptions). We prove a stronger superlinear convergence result, assuming SOSC only. In addition, our analysis is carried out in the more general setting of variational problems, for which we introduce a natural extension of sSQP techniques. In the process, we also obtain a new error bound for Karush-Kuhn-Tucker systems for variational problems.
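To make the sSQP mechanism concrete, here is a minimal numerical sketch for an equality-constrained toy instance, in our own notation; the toy problem, the residual-based choice of the stabilization parameter, and all names below are illustrative assumptions, not taken from the paper. The problem is min ½x² subject to x = 0 stated twice, so the constraint Jacobian is rank-deficient, the multiplier set {(λ1, λ2) : λ1 + λ2 = 0} is not a singleton, and yet SOSC holds.

```python
# Stabilized SQP (sSQP) sketch on: minimize (1/2)*x**2  s.t.  x = 0, x = 0.
# The duplicated constraint makes the Jacobian J = [1, 1]^T rank-deficient,
# so the Lagrange multiplier is not unique. Each iteration solves the
# regularized KKT system
#   [ H    J^T ] [ d  ]   [ -grad_L ]
#   [ J  -s*I  ] [ dl ] = [ -h      ]
# with stabilization parameter s > 0 set to the current KKT residual.

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ssqp(x, l1, l2, iters=30):
    for _ in range(iters):
        grad_L = x + l1 + l2                  # gradient of the Lagrangian
        sigma = abs(grad_L) + 2.0 * abs(x)    # KKT residual = stabilization
        if sigma < 1e-14:
            break
        A = [[1.0, 1.0, 1.0],                 # [H, J^T; J, -sigma*I]
             [1.0, -sigma, 0.0],
             [1.0, 0.0, -sigma]]
        b = [-grad_L, -x, -x]
        d, dl1, dl2 = solve3(A, b)
        x, l1, l2 = x + d, l1 + dl1, l2 + dl2
    return x, l1, l2

x_fin, l1_fin, l2_fin = ssqp(1.0, 1.0, 1.0)
```

Starting from x = 1 with multipliers (1, 1), the primal iterate contracts roughly quadratically to 0 and λ1 + λ2 approaches 0, even though no constraint qualification holds at the solution.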
On attraction of Newton-type iterates to multipliers violating second-order sufficiency conditions
2009
Cited by 20 (15 self)

Abstract:
Assuming that the primal part of the sequence generated by a Newton-type (e.g., SQP) method applied to an equality-constrained problem converges to a solution where the constraints are degenerate, we investigate whether the dual part of the sequence is attracted to those Lagrange multipliers which satisfy the second-order sufficient condition (SOSC) for optimality, or to those multipliers which violate it. This question is relevant for at least two reasons: one is the speed of convergence of standard methods; the other is the applicability of some recently proposed approaches for handling degenerate constraints. We show that for the class of damped Newton methods, convergence of the dual sequence to multipliers satisfying SOSC is unlikely to occur. We support our findings by numerical experiments. We also suggest a simple auxiliary procedure for computing multiplier estimates, which does not have this …
NEWTON-TYPE METHODS FOR OPTIMIZATION PROBLEMS WITHOUT CONSTRAINT QUALIFICATIONS
SIAM J. OPTIMIZATION
2004
Cited by 17 (13 self)

Abstract:
We consider equality-constrained optimization problems where a given solution may not satisfy any constraint qualification but satisfies the standard second-order sufficient condition for optimality. Based on local identification of the rank of the constraints' degeneracy via the singular-value decomposition, we derive a modified primal-dual optimality system whose solution is locally unique and nondegenerate, and which can therefore be found by standard Newton-type techniques. Using identification of active constraints, we further extend our approach to mixed equality- and inequality-constrained problems, and to mathematical programs with complementarity constraints (MPCC). In particular, for MPCC we obtain a local algorithm with quadratic convergence under the second-order sufficient condition only, without any constraint qualifications, not even the special MPCC constraint qualifications.
A PRIMAL-DUAL AUGMENTED LAGRANGIAN
2008
Cited by 16 (2 self)

Abstract:
Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1LCL) method.
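For orientation, the classical Hestenes-Powell augmented Lagrangian and one common form of its primal-dual generalization can be sketched as follows; the notation is ours, and the exact function and scaling used in the paper may differ.

```latex
% For min f(x) s.t. c(x) = 0, with multiplier estimate \lambda_e and
% penalty parameter \mu > 0, the Hestenes--Powell augmented Lagrangian is
L_A(x;\lambda_e,\mu) \;=\; f(x) \;-\; \lambda_e^{T} c(x)
  \;+\; \frac{1}{2\mu}\,\|c(x)\|^{2}.
% A primal--dual generalization treats \lambda as an independent variable
% and is minimized jointly in (x,\lambda):
M(x,\lambda;\lambda_e,\mu) \;=\; f(x) \;-\; \lambda_e^{T} c(x)
  \;+\; \frac{1}{2\mu}\,\|c(x)\|^{2}
  \;+\; \frac{1}{2\mu}\,\|c(x) + \mu(\lambda-\lambda_e)\|^{2}.
```

Setting the λ-gradient of M to zero gives λ = λ_e − c(x)/μ, which indicates how minimizing in the dual variables lets the method monitor multiplier quality explicitly.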
Examples of dual behaviour of Newton-type methods on optimization problems with degenerate constraints
Computational Optimization and Applications
Cited by 16 (10 self)

Abstract:
We discuss possible scenarios of behaviour of the dual part of sequences generated by primal-dual Newton-type methods when applied to optimization problems with non-unique multipliers associated with a solution. These scenarios are: (a) failure of convergence of the dual sequence; (b) convergence to a so-called critical multiplier (which, in particular, violates some second-order sufficient conditions for optimality), the latter appearing to be the typical scenario when critical multipliers exist; (c) convergence to a noncritical multiplier. The case of mathematical programs with complementarity constraints is also discussed. We illustrate these scenarios with examples and discuss the consequences for the speed of convergence. We also put together a collection of examples of optimization problems with constraints violating some standard constraint qualifications, intended for preliminary testing of existing algorithms on degenerate problems, or for developing special new algorithms designed to deal with degenerate constraints.

Keywords: Degenerate constraints · Second-order sufficiency · Newton method · SQP
A NOTE ON UPPER LIPSCHITZ STABILITY, ERROR BOUNDS, AND CRITICAL MULTIPLIERS FOR LIPSCHITZ-CONTINUOUS KKT SYSTEMS
2012
Cited by 8 (5 self)

Abstract:
We prove a new local upper Lipschitz stability result and the associated local error bound for solutions of parametric Karush-Kuhn-Tucker (KKT) systems corresponding to variational problems with Lipschitzian base mappings and constraints possessing Lipschitzian derivatives, without any constraint qualifications. This property is equivalent to the notion of noncriticality of the Lagrange multiplier associated with the primal solution, appropriately extended to this nonsmooth setting; noncriticality is weaker than second-order sufficiency. All of this extends several results previously known only for optimization problems with twice differentiable data, or under constraint qualifications. In addition, our results are obtained in the more general variational setting.
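For equality-constrained problems with smooth data, the kind of error bound in question can be written as follows; this is our notation in a simplified smooth setting, while the paper works in a more general nonsmooth variational framework.

```latex
% KKT system for min f(x) s.t. h(x) = 0:
%   \nabla_x L(x,\lambda) = \nabla f(x) + h'(x)^{T}\lambda = 0, \qquad h(x)=0.
% Local error bound near a primal solution \bar{x} with multiplier set
% \Lambda(\bar{x}): for (x,\lambda) close to \{\bar{x}\}\times\Lambda(\bar{x}),
\|x-\bar{x}\| + \operatorname{dist}\bigl(\lambda,\Lambda(\bar{x})\bigr)
  \;\le\; \kappa \,\bigl\|\bigl(\nabla_x L(x,\lambda),\, h(x)\bigr)\bigr\|.
```

Roughly speaking, the right-hand side is the natural residual of the KKT system, and the abstract's equivalence says that such a bound holds precisely when the relevant Lagrange multiplier is noncritical.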
GLOBAL CONVERGENCE OF AUGMENTED LAGRANGIAN METHODS APPLIED TO OPTIMIZATION PROBLEMS WITH DEGENERATE CONSTRAINTS, INCLUDING PROBLEMS WITH COMPLEMENTARITY CONSTRAINTS
2012
Cited by 7 (2 self)

Abstract:
We consider global convergence properties of augmented Lagrangian methods on problems with degenerate constraints, with special emphasis on mathematical programs with complementarity constraints (MPCC). In the general case, we show convergence to stationary points of the problem under an error bound condition for the feasible set (which is weaker than constraint qualifications), assuming that the iterates have some modest features of approximate local minimizers of the augmented Lagrangian. For MPCC, we first argue that even weak forms of general constraint qualifications suitable for convergence of augmented Lagrangian methods, such as the recently proposed relaxed positive linear dependence condition, should not be expected to hold, and thus special analysis is needed. We then obtain a rather complete picture, showing that under the MPCC-linear independence constraint qualification, which is usual in this context, accumulation points of the iterates are guaranteed to be C-stationary for MPCC (better than weakly stationary), but in general need not be M-stationary (hence, not strongly stationary either). However, strong stationarity is guaranteed if the generated dual sequence is bounded, which we show to be the typical …
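A minimal augmented Lagrangian loop on a degenerate toy instance illustrates the setting; the problem, the inner Newton solver, and all tolerances are illustrative assumptions, not the paper's algorithm. We minimize x² subject to x² = 0: the constraint gradient vanishes at the solution x* = 0, so standard constraint qualifications fail there and every real number is a Lagrange multiplier.

```python
# Augmented Lagrangian sketch for: minimize x**2  subject to  c(x) = x**2 = 0.
# At the solution x* = 0 we have c'(0) = 0, so constraint qualifications fail
# and the multiplier set is all of R (degenerate constraints).

def al_method(x=1.0, lmbda=0.0, rho=10.0, outer=5, inner=10):
    for _ in range(outer):
        # Approximately minimize the augmented Lagrangian
        #   A(x) = x**2 + lmbda*x**2 + (rho/2)*x**4
        # with a few Newton steps on A'(x) = 2*(1+lmbda)*x + 2*rho*x**3.
        for _ in range(inner):
            g = 2.0 * (1.0 + lmbda) * x + 2.0 * rho * x**3
            H = 2.0 * (1.0 + lmbda) + 6.0 * rho * x**2
            x -= g / H
        lmbda += rho * x**2      # first-order multiplier update
    return x, lmbda

x_al, lmbda_al = al_method()
```

On this instance the primal iterate converges to the solution and the dual sequence stays bounded, consistent with the benign scenario described in the abstract; on harder MPCC instances the dual sequence need not stay bounded.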
Stabilized SQP revisited
MATH. PROGRAM., SER. A
2010
Cited by 3 (2 self)

Abstract:
The stabilized version of the sequential quadratic programming algorithm (sSQP) was developed to achieve superlinear convergence in situations when the Lagrange multipliers associated with a solution are not unique. Within the framework of Fischer (Math Program 94:91–124, 2002), the key to local superlinear convergence of sSQP is the following pair of properties: upper Lipschitzian behavior of solutions of the Karush-Kuhn-Tucker (KKT) system under canonical perturbations, and local solvability of sSQP subproblems with the associated primal-dual step being of the order of the distance from the current iterate to the solution set of the unperturbed KKT system. According to Fernández and Solodov (Math Program 125:47–73, 2010), both of these properties are ensured by the second-order sufficient optimality condition (SOSC) without any constraint qualification assumptions. In this paper, we state precise relationships between the upper Lipschitzian property of solutions of KKT systems, error bounds for KKT systems, the notion of critical Lagrange multipliers (a subclass of multipliers that violate SOSC in a very special way), the second-order necessary condition for optimality, and solvability of sSQP subproblems. Moreover, …
A REGULARIZED SQP METHOD WITH CONVERGENCE TO SECOND-ORDER OPTIMAL POINTS
2013
Cited by 1 (0 self)

Abstract:
Regularized and stabilized sequential quadratic programming methods are two classes of sequential quadratic programming (SQP) methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that provides a strong connection between augmented Lagrangian methods and stabilized SQP methods. The method is formulated as a regularized SQP method with an implicit safeguarding strategy based on minimizing a bound-constrained primal-dual augmented Lagrangian. Each iteration involves the solution of a regularized quadratic program (QP) that is equivalent to a strictly convex bound-constrained QP based on minimizing a quadratic model of the augmented Lagrangian. The solution of the QP subproblem defines a descent direction for a flexible line search that provides a sufficient decrease in a primal-dual augmented Lagrangian merit function. Under certain conditions, the method is guaranteed to converge to a point satisfying the first-order Karush-Kuhn-Tucker (KKT) conditions. In this paper, the regularized SQP method is extended to allow convergence to points satisfying certain second-order KKT …
Computable primal error bounds based on the augmented Lagrangian and Lagrangian relaxation algorithms
2006
Cited by 1 (1 self)

Abstract:
For a given iterate generated by the augmented Lagrangian method or a Lagrangian relaxation based method, we derive estimates of the distance to the primal solution of the underlying optimization problem. The estimates are obtained using some recent contributions to sensitivity theory, under appropriate first- or second-order sufficient optimality conditions. The given estimates hold in situations where known (algorithm-independent) error bounds may not apply. Examples are provided which show that the estimates are sharp.