Results 1–10 of 33
STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING FOR OPTIMIZATION AND A STABILIZED NEWTON-TYPE METHOD FOR VARIATIONAL PROBLEMS WITHOUT CONSTRAINT QUALIFICATIONS
, 2007
Abstract

Cited by 24 (14 self)
The stabilized version of the sequential quadratic programming algorithm (sSQP) had been developed in order to achieve fast convergence despite possible degeneracy of constraints of optimization problems, when the Lagrange multipliers associated to a solution are not unique. Superlinear convergence of sSQP had been previously established under the second-order sufficient condition for optimality (SOSC) and the Mangasarian-Fromovitz constraint qualification, or under the strong second-order sufficient condition for optimality (in that case, without constraint qualification assumptions). We prove a stronger superlinear convergence result than the above, assuming SOSC only. In addition, our analysis is carried out in the more general setting of variational problems, for which we introduce a natural extension of sSQP techniques. In the process, we also obtain a new error bound for Karush-Kuhn-Tucker systems for variational problems.
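The stabilization mechanism described above can be illustrated on a toy degenerate problem. This sketch is not from the paper: the example problem, the exact-Hessian step rule, and the residual-based choice of the stabilization parameter sigma are illustrative assumptions.

```python
import numpy as np

# Toy degenerate problem:  min x^2  s.t.  x = 0 stated twice, so the
# constraint Jacobian [1; 1] is rank-deficient, LICQ fails, and the
# multipliers (l1, l2) with l1 + l2 = 0 at the solution are non-unique
# (SOSC still holds, since the critical cone is trivial).
# sSQP replaces the (here singular) KKT Newton system by the regularized
# system [H A^T; A -sigma*I], with sigma tied to the KKT residual,
# which stays nonsingular along the iteration.
def ssqp_toy(x=0.5, lam=(0.1, -0.3), iters=10):
    lam = np.array(lam, dtype=float)
    for _ in range(iters):
        grad_L = 2.0 * x + lam.sum()          # gradient of the Lagrangian
        h = np.array([x, x])                  # duplicated constraint values
        sigma = np.hypot(grad_L, np.linalg.norm(h))  # stabilization parameter
        if sigma < 1e-14:                     # KKT residual numerically zero
            break
        K = np.array([[2.0,  1.0,    1.0],
                      [1.0, -sigma,  0.0],
                      [1.0,  0.0,   -sigma]])
        step = np.linalg.solve(K, -np.array([grad_L, h[0], h[1]]))
        x += step[0]
        lam += step[1:]
    return x, lam

x_star, lam_star = ssqp_toy()
```

On this instance the primal iterates contract roughly quadratically toward x* = 0, even though the unregularized KKT Jacobian is singular everywhere on the multiplier set.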
On attraction of linearly constrained Lagrangian methods and of stabilized and quasi-Newton SQP methods to critical multipliers
 MATHEMATICAL PROGRAMMING
, 2009
LOCAL CONVERGENCE OF EXACT AND INEXACT AUGMENTED LAGRANGIAN METHODS UNDER THE SECOND-ORDER SUFFICIENT OPTIMALITY CONDITION
, 2012
Abstract

Cited by 16 (5 self)
We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition. In particular, no constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either the strict complementarity assumption or a stronger version of the second-order sufficient condition. That said, the classical results allow the initial multiplier estimate to be far from the optimal one, at the expense of proportionally increasing the threshold value for the penalty parameters. Although our primary goal is to avoid constraint qualifications, if the stronger assumptions are introduced, then starting points far from the optimal multiplier are allowed within our analysis as well. Using only the second-order sufficient optimality condition, for penalty parameters large enough we prove primal-dual Q-linear convergence rate, which becomes superlinear if the parameters are allowed to go to infinity. Both exact and inexact solutions of subproblems are considered. In the exact case, we further show that the primal convergence rate is of the same Q-order as the primal-dual rate. Previous assertions for the primal sequence all had to do with the weaker R-rate of convergence and required the stronger assumptions cited above. Finally, we show that under our assumptions one of the popular rules of controlling the penalty parameters ensures their boundedness.
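The Q-linear rate in the penalty parameter, which becomes superlinear as the parameter grows, can be observed on a one-dimensional toy problem. This is a hedged sketch: the problem and the closed-form inner minimization are illustrative choices, not the paper's setting.

```python
# Classical augmented Lagrangian iteration on  min x^2  s.t.  x = 1
# (solution x* = 1 with multiplier lam* = -2).  The augmented Lagrangian
# L_c(x, lam) = x^2 + lam*(x - 1) + (c/2)*(x - 1)^2  is minimized in
# closed form, then the first-order multiplier update is applied.
def aug_lag(lam=0.0, c=10.0, iters=25):
    for _ in range(iters):
        x = (c - lam) / (2.0 + c)   # argmin_x L_c(x, lam)
        lam += c * (x - 1.0)        # multiplier update: lam <- lam + c*h(x)
    return x, lam

x_star, lam_star = aug_lag()
```

Here the multiplier error contracts by the factor 2/(2 + c) per iteration: Q-linear for fixed c, and superlinear if c is driven to infinity, consistent with the rates stated in the abstract.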
INEXACT JOSEPHY–NEWTON FRAMEWORK FOR GENERALIZED EQUATIONS AND ITS APPLICATIONS TO LOCAL ANALYSIS OF NEWTONIAN METHODS FOR CONSTRAINED OPTIMIZATION
, 2008
Abstract

Cited by 13 (8 self)
We propose and analyze a perturbed version of the classical Josephy-Newton method for solving generalized equations. This perturbed framework is convenient for treating in a unified way standard sequential quadratic programming, its stabilized version, sequential quadratically constrained quadratic programming, and linearly constrained Lagrangian methods. For the linearly constrained Lagrangian methods, in particular, we obtain superlinear convergence under the second-order sufficient optimality condition and the strict Mangasarian–Fromovitz constraint qualification, while previous results in the literature assume (in addition to second-order sufficiency) the stronger linear independence constraint qualification as well as the strict complementarity condition. For the sequential quadratically constrained quadratic programming methods, we prove primal-dual superlinear/quadratic convergence under the same assumptions as above, which also gives a new result.
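For intuition, the Josephy-Newton idea of linearizing the single-valued part of a generalized equation while keeping the multivalued part intact can be sketched for a one-dimensional complementarity problem. This is an illustrative reduction, not code from the paper; with F' > 0 the linearized subproblem has the closed-form solution used below.

```python
import math

# 1-D generalized equation  0 in F(x) + N_{[0,inf)}(x), i.e. the
# complementarity problem  x >= 0, F(x) >= 0, x*F(x) = 0.
# Each Josephy-Newton step solves the *linearized* problem
#   y >= 0,  F(x) + F'(x)(y - x) >= 0,  y*(F(x) + F'(x)(y - x)) = 0,
# whose solution, when F'(x) > 0, is the Newton point projected onto [0, inf).
def josephy_newton(F, dF, x, iters=25):
    for _ in range(iters):
        x = max(0.0, x - F(x) / dF(x))
    return x

# Interior solution: F(x) = exp(x) - 2 vanishes at x = ln 2 > 0.
x_int = josephy_newton(lambda x: math.exp(x) - 2.0, math.exp, 1.0)
# Boundary solution: F(x) = x + 1 is positive at x = 0, so x* = 0.
x_bnd = josephy_newton(lambda x: x + 1.0, lambda x: 1.0, 1.0)
```

In the interior case the iteration reduces to plain Newton on F; in the boundary case the projection onto the cone is what the multivalued normal-cone term contributes.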
A class of active-set Newton methods for mixed complementarity problems
 SIAM J. OPTIM
, 2004
Abstract

Cited by 12 (8 self)
Based on the identification of indices active at a solution of the mixed complementarity problem (MCP), we propose a class of Newton methods for which local superlinear convergence holds under extremely mild assumptions. In particular, the error bound condition needed for the identification procedure and the nondegeneracy condition needed for the convergence of the resulting Newton method are individually and collectively strictly weaker than the property of semistability of a solution. Thus the local superlinear convergence conditions of the presented method are weaker than conditions required for the semismooth (generalized) Newton methods applied to MCP reformulations. Moreover, they are also weaker than convergence conditions of the linearization (Josephy–Newton) method. For the special case of optimality systems with primal-dual structure, we further consider the question of superlinear convergence of primal variables. We illustrate our theoretical results with numerical experiments on some specially constructed MCPs whose solutions do not satisfy the usual regularity assumptions.
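The identification-then-Newton idea can be sketched for a small complementarity problem with linear data. This is a hedged sketch: the residual-based identification rule with exponent 1/2 and the toy instance are illustrative choices, not the paper's method.

```python
import numpy as np

# NCP: x >= 0, F(x) >= 0, x_i * F_i(x) = 0, with linear F(x) = A x + b.
# The solution of this instance is x* = (1, 0), where F(x*) = (0, 1).
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])
b = np.array([-1.0, 0.0])

def active_set_newton_step(x):
    F = A @ x + b
    r = np.linalg.norm(np.minimum(x, F))   # natural residual of the NCP
    active = x <= np.sqrt(r)               # estimate of the active index set
    # Newton step on the square system:  x_i = 0 on the estimated active
    # set, F_i(x) = 0 elsewhere.  For linear F, one step solves the
    # problem exactly once the active set is identified correctly.
    M = np.where(active[:, None], np.eye(2), A)
    rhs = np.where(active, 0.0, -b)
    return np.linalg.solve(M, rhs)

x_new = active_set_newton_step(np.array([0.9, 0.1]))
```

Starting near the solution, the residual-based rule classifies index 2 as active and index 1 as inactive, and the single reduced Newton step lands on x* exactly.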
SHARP PRIMAL SUPERLINEAR CONVERGENCE RESULTS FOR SOME NEWTONIAN METHODS FOR CONSTRAINED OPTIMIZATION
, 2009
Abstract

Cited by 9 (8 self)
As is well known, superlinear or quadratic convergence of the primal-dual sequence generated by an optimization algorithm does not, in general, imply superlinear convergence of the primal part. Primal convergence, however, is often of particular interest. For the sequential quadratic programming (SQP) algorithm, local primal-dual quadratic convergence can be established under the assumptions of uniqueness of the Lagrange multiplier associated to the solution and the second-order sufficient condition. At the same time, previous primal superlinear convergence results for SQP required strengthening the first assumption to the linear independence constraint qualification. In this paper, we show that this strengthening of assumptions is actually not necessary. Specifically, we show that once primal-dual convergence is assumed or already established, for the primal superlinear rate one only needs a certain error bound estimate. This error bound holds, for example, under the second-order sufficient condition, which is needed for primal-dual local analysis in any case. Moreover, in some situations even second-order sufficiency can be relaxed to the weaker assumption that the multiplier in question is noncritical. Our study is performed for a rather general perturbed SQP framework, which covers, in addition to SQP and quasi-Newton SQP, some other algorithms as well. For example, as a byproduct,
A NOTE ON UPPER LIPSCHITZ STABILITY, ERROR BOUNDS, AND CRITICAL MULTIPLIERS FOR LIPSCHITZ-CONTINUOUS KKT SYSTEMS
, 2012
Abstract

Cited by 8 (5 self)
We prove a new local upper Lipschitz stability result and the associated local error bound for solutions of parametric Karush–Kuhn–Tucker systems corresponding to variational problems with Lipschitzian base mappings and constraints possessing Lipschitzian derivatives, and without any constraint qualifications. This property is equivalent to the notion of noncriticality of the Lagrange multiplier associated to the primal solution, appropriately extended to this nonsmooth setting, which is weaker than second-order sufficiency. All this extends several results previously known only for optimization problems with twice differentiable data, or assuming some constraint qualifications. In addition, our results are obtained in the more general variational setting.
GLOBAL CONVERGENCE OF AUGMENTED LAGRANGIAN METHODS APPLIED TO OPTIMIZATION PROBLEMS WITH DEGENERATE CONSTRAINTS, INCLUDING PROBLEMS WITH COMPLEMENTARITY CONSTRAINTS
, 2012
Abstract

Cited by 7 (2 self)
We consider global convergence properties of the augmented Lagrangian methods on problems with degenerate constraints, with a special emphasis on mathematical programs with complementarity constraints (MPCC). In the general case, we show convergence to stationary points of the problem under an error bound condition for the feasible set (which is weaker than constraint qualifications), assuming that the iterates have some modest features of approximate local minimizers of the augmented Lagrangian. For MPCC, we first argue that even weak forms of general constraint qualifications that are suitable for convergence of the augmented Lagrangian methods, such as the recently proposed relaxed positive linear dependence condition, should not be expected to hold, and thus special analysis is needed. We next obtain a rather complete picture, showing that under the MPCC linear independence constraint qualification, which is usual in this context, accumulation points of the iterates are guaranteed to be C-stationary for MPCC (better than weakly stationary), but in general need not be M-stationary (hence, not strongly stationary either). However, strong stationarity is guaranteed if the generated dual sequence is bounded, which we show to be the typical
A survey of GNE computation methods: theory and algorithms
, 2013
Abstract

Cited by 4 (1 self)
This paper deals with optimization methods solving the generalized Nash equilibrium problem (GNEP), which extends the standard Nash problem by allowing constraints. Two cases are considered: general GNEPs where constraint functions are individualized and jointly convex GNEPs where there is a common constraint function. Most recent methods are benchmarked against new methods. Numerical illustrations are proposed with the same software for a fair benchmark.
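As a minimal illustration of the problem class (a sketch under illustrative assumptions; the two-player game and the best-response iteration are not taken from the survey): consider two players who each minimize (x_i - 1)^2 subject to the shared constraint x1 + x2 <= 1, so each player's feasible set depends on the other's decision.

```python
# Gauss-Seidel best-response iteration for a jointly convex two-player
# GNEP: player i solves  min (x_i - 1)^2  s.t.  x1 + x2 <= 1,
# so the best response is the projected unconstrained minimizer
# min(1, 1 - x_other).
def best_response_gnep(x1=0.0, x2=0.0, iters=10):
    for _ in range(iters):
        x1 = min(1.0, 1.0 - x2)   # player 1's best response given x2
        x2 = min(1.0, 1.0 - x1)   # player 2's best response given x1
    return x1, x2

x1, x2 = best_response_gnep()
```

The iteration settles on the generalized Nash equilibrium (1, 0); note that this game has a continuum of equilibria (any point on x1 + x2 = 1 with both coordinates in [0, 1]), which is the typical non-uniqueness that GNEP computation methods must contend with.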
Stabilized SQP revisited
 MATH. PROGRAM., SER. A
, 2010
Abstract

Cited by 3 (2 self)
The stabilized version of the sequential quadratic programming algorithm (sSQP) had been developed in order to achieve superlinear convergence in situations when the Lagrange multipliers associated to a solution are not unique. Within the framework of Fischer (Math Program 94:91–124, 2002), the key to local superlinear convergence of sSQP is the following two properties: upper Lipschitzian behavior of solutions of the Karush-Kuhn-Tucker (KKT) system under canonical perturbations, and local solvability of sSQP subproblems with the associated primal-dual step being of the order of the distance from the current iterate to the solution set of the unperturbed KKT system. According to Fernández and Solodov (Math Program 125:47–73, 2010), both of these properties are ensured by the second-order sufficient optimality condition (SOSC) without any constraint qualification assumptions. In this paper, we state precise relationships between the upper Lipschitzian property of solutions of KKT systems, error bounds for KKT systems, the notion of critical Lagrange multipliers (a subclass of multipliers that violate SOSC in a very special way), the second-order necessary condition for optimality, and solvability of sSQP subproblems. Moreover,