Results 1 – 5 of 5
GLOBAL CONVERGENCE OF AUGMENTED LAGRANGIAN METHODS APPLIED TO OPTIMIZATION PROBLEMS WITH DEGENERATE CONSTRAINTS, INCLUDING PROBLEMS WITH COMPLEMENTARITY CONSTRAINTS
, 2012
Cited by 7 (2 self)

Abstract:
We consider global convergence properties of augmented Lagrangian methods on problems with degenerate constraints, with special emphasis on mathematical programs with complementarity constraints (MPCC). In the general case, we show convergence to stationary points of the problem under an error bound condition for the feasible set (which is weaker than constraint qualifications), assuming that the iterates have some modest features of approximate local minimizers of the augmented Lagrangian. For MPCC, we first argue that even weak forms of general constraint qualifications suitable for convergence of augmented Lagrangian methods, such as the recently proposed relaxed positive linear dependence condition, should not be expected to hold, and thus special analysis is needed. We then obtain a rather complete picture, showing that under the MPCC linear independence constraint qualification, which is the usual assumption in this context, accumulation points of the iterates are guaranteed to be C-stationary for MPCC (better than weakly stationary), but in general need not be M-stationary (hence not strongly stationary either). However, strong stationarity is guaranteed if the generated dual sequence is bounded, which we show to be the typical
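For orientation, the stationarity concepts named in this abstract are commonly defined for the standard MPCC form below; this is a summary from the general MPCC literature, not taken from the paper itself:

```latex
% Standard MPCC form:
\min_{x}\; f(x) \quad \text{s.t.} \quad
G(x) \ge 0,\;\; H(x) \ge 0,\;\; G(x)^{\top} H(x) = 0 .
% At a feasible x^* with multipliers \nu, \sigma for G, H, and
% biactive set I_0 = \{ i : G_i(x^*) = H_i(x^*) = 0 \}, one says x^* is:
%   weakly stationary   -- no sign condition on (\nu_i, \sigma_i), i \in I_0;
%   C-stationary        -- \nu_i \sigma_i \ge 0 for all i \in I_0;
%   M-stationary        -- \nu_i > 0, \sigma_i > 0, or \nu_i \sigma_i = 0, i \in I_0;
%   strongly stationary -- \nu_i \ge 0 and \sigma_i \ge 0 for all i \in I_0.
```

Each concept strictly implies the one above it, which is why C-stationarity is "better than weakly stationary" yet weaker than M- and strong stationarity, as the abstract indicates.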
Second-order negative-curvature methods for box-constrained and general constrained optimization
, 2009
Cited by 6 (0 self)

Abstract:
A nonlinear programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an augmented Lagrangian algorithm of PHR (Powell–Hestenes–Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples are exhibited showing that the new method converges to second-order stationary points in situations in which first-order methods fail.
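The PHR augmented Lagrangian scheme this paper builds on alternates an (approximate) minimization of the augmented Lagrangian with a multiplier update. The sketch below illustrates the iteration on a toy equality-constrained problem chosen so the inner minimization has a closed form; the problem, penalty value, and exact inner solve are assumptions of this illustration, not the paper's algorithm:

```python
# PHR-type augmented Lagrangian sketch on an assumed toy problem:
#   minimize f(x) = x^2   subject to  h(x) = x - 1 = 0.
# Augmented Lagrangian: L(x) = f(x) + lam*h(x) + (rho/2)*h(x)^2.

def augmented_lagrangian(rho=10.0, iters=30):
    lam = 0.0  # multiplier estimate
    x = 0.0
    for _ in range(iters):
        # Inner step: argmin_x  x^2 + lam*(x - 1) + (rho/2)*(x - 1)^2.
        # Stationarity: 2x + lam + rho*(x - 1) = 0, solved in closed form.
        x = (rho - lam) / (2.0 + rho)
        # PHR multiplier update: lam <- lam + rho * h(x).
        lam = lam + rho * (x - 1.0)
    return x, lam

x_star, lam_star = augmented_lagrangian()
print(x_star, lam_star)  # converges to the KKT pair x* = 1, lam* = -2
```

On this problem the dual error contracts by the factor 2/(2 + rho) per outer iteration, which is why a moderate penalty already gives fast convergence; the paper's contribution is the negative-curvature inner solver that makes such outer iterations reach second-order (not merely first-order) stationary points.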
On approximate KKT condition and its extension to continuous variational inequalities
 Journal of Optimization Theory and Applications
Cited by 1 (1 self)

Abstract:
In this work we introduce a natural necessary sequential Approximate Karush–Kuhn–Tucker (AKKT) condition for a point to be a solution of a continuous variational inequality problem without constraint qualifications, and we prove its relation with the Approximate Gradient Projection (AGP) condition of Gárciga Otero and Svaiter. We also prove that a slight variation of the AKKT condition is sufficient on convex problems, and we establish sufficiency results for the AKKT condition on convex optimization problems. Sequential necessary conditions are better suited to iterative methods than the usual pointwise conditions, which rely on constraint qualifications.
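For the optimization case (the paper's variational-inequality version replaces the objective gradient by the VI operator), the AKKT condition is commonly stated as follows; this is a standard formulation from the sequential-optimality literature, given here for orientation:

```latex
% AKKT at x^* for  min f(x)  s.t.  h(x) = 0,  g(x) <= 0:
% there exist sequences x^k \to x^*, \lambda^k \in \mathbb{R}^m,
% \mu^k \ge 0 such that
\nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla h_i(x^k)
             + \sum_{j=1}^{p} \mu_j^k \nabla g_j(x^k) \;\to\; 0,
\qquad
\min\{\, -g_j(x^k),\; \mu_j^k \,\} \;\to\; 0 \;\; (j = 1, \dots, p).
```

The point is that every local minimizer satisfies AKKT with no constraint qualification at all, which is what makes it a natural stopping criterion for iterative methods.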
A REGULARIZED SQP METHOD WITH CONVERGENCE TO SECOND-ORDER OPTIMAL POINTS
, 2013
Cited by 1 (0 self)

Abstract:
Regularized and stabilized sequential quadratic programming methods are two classes of sequential quadratic programming (SQP) methods designed to resolve the numerical and theoretical difficulties associated with ill-posed or degenerate nonlinear optimization problems. Recently, a regularized SQP method has been proposed that provides a strong connection between augmented Lagrangian methods and stabilized SQP methods. The method is formulated as a regularized SQP method with an implicit safeguarding strategy based on minimizing a bound-constrained primal-dual augmented Lagrangian. Each iteration involves the solution of a regularized quadratic program (QP) that is equivalent to a strictly convex bound-constrained QP based on minimizing a quadratic model of the augmented Lagrangian. The solution of the QP subproblem defines a descent direction for a flexible line search that provides a sufficient decrease in a primal-dual augmented Lagrangian merit function. Under certain conditions, the method is guaranteed to converge to a point satisfying the first-order Karush–Kuhn–Tucker (KKT) conditions. In this paper, the regularized SQP method is extended to allow convergence to points satisfying certain second-order KKT
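One common form of the primal-dual augmented Lagrangian underlying methods of this kind is shown below, for equality constraints c(x) = 0, multiplier estimate y_e, and penalty parameter mu > 0; this is a sketch from the general literature and the paper's exact function may differ:

```latex
M(x, y;\, y_e, \mu) \;=\; f(x) \;-\; c(x)^{\top} y_e
  \;+\; \frac{1}{2\mu}\,\lVert c(x)\rVert^{2}
  \;+\; \frac{1}{2\mu}\,\lVert c(x) + \mu\,(y - y_e)\rVert^{2} .
```

Treating the multipliers y as additional variables and minimizing a quadratic model of M subject to bounds yields the strictly convex bound-constrained QP subproblem the abstract refers to, and M itself serves as the merit function for the flexible line search.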