Results 1–10 of 15
REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS, 2011
Cited by 16 (5 self)
Abstract: We present the formulation and analysis of a new sequential quadratic programming (SQP) method for general nonlinearly constrained optimization. The method pairs a primal-dual generalized augmented Lagrangian merit function with a flexible line search to obtain a sequence of improving estimates of the solution. This function is a primal-dual variant of the augmented Lagrangian proposed by Hestenes and Powell in the early 1970s. A crucial feature of the method is that the QP subproblems are convex, yet formed from the exact second derivatives of the original problem, in contrast to methods that use a less accurate quasi-Newton approximation. Additional benefits of this approach include the following: (i) each QP subproblem is regularized; (ii) the QP subproblem always has a known feasible point; and (iii) a projected gradient method may be used to identify the QP active set when far from the solution.
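For reference, the classical Hestenes–Powell augmented Lagrangian for the equality-constrained problem (of which the merit function above is a primal-dual variant) has the familiar form below; sign conventions for the multiplier term vary across the literature, and this sketch uses one common choice:

```latex
\min_{x}\; f(x) \quad \text{s.t.} \quad c(x) = 0,
\qquad
L_{\rho}(x,\lambda) \;=\; f(x) \;-\; \lambda^{T} c(x) \;+\; \tfrac{\rho}{2}\,\|c(x)\|^{2}.
```

Minimizing $L_\rho(\cdot,\lambda)$ for fixed $\lambda$ and then updating $\lambda$ is the method-of-multipliers template that several of the papers in this listing build on.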
A GLOBALLY CONVERGENT STABILIZED SQP METHOD, 2013
Cited by 9 (1 self)
Abstract: Sequential quadratic programming (SQP) methods are a popular class of methods for nonlinearly constrained optimization. They are particularly effective for solving a sequence of related problems, such as those arising in mixed-integer nonlinear programming and the optimization of functions subject to differential equation constraints. Recently, there has been considerable interest in the formulation of stabilized SQP methods, which are specifically designed to handle degenerate optimization problems. Existing stabilized SQP methods are essentially local, in the sense that both the formulation and the analysis focus on the properties of the methods in a neighborhood of a solution. A new SQP method is proposed that has favorable global convergence properties yet, under suitable assumptions, is equivalent to a variant of the conventional stabilized SQP method in the neighborhood of a solution. The method combines a primal-dual generalized augmented Lagrangian function with a flexible line search to obtain a sequence …
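To fix ideas, one common formulation of the stabilized SQP subproblem for an equality-constrained problem $\min f(x)$ s.t. $c(x)=0$ is the min-max quadratic below (this is the textbook form going back to Wright's work; the exact details differ between authors, and the paper above uses its own variant):

```latex
\min_{d}\ \max_{\lambda}\ \
\nabla f(x_k)^{T} d
\;+\; \tfrac{1}{2}\, d^{T} H_k\, d
\;+\; \lambda^{T}\bigl(c(x_k) + c'(x_k)\, d\bigr)
\;-\; \tfrac{\sigma_k}{2}\,\|\lambda - \lambda_k\|^{2},
\qquad
H_k \approx \nabla^{2}_{xx} L(x_k,\lambda_k).
```

The stabilization parameter $\sigma_k > 0$ penalizes large multiplier changes, which is what makes the subproblem well behaved on degenerate problems where the conventional SQP subproblem may be ill-posed.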
A NOTE ON UPPER LIPSCHITZ STABILITY, ERROR BOUNDS, AND CRITICAL MULTIPLIERS FOR LIPSCHITZ-CONTINUOUS KKT SYSTEMS, 2012
Cited by 8 (5 self)
Abstract: We prove a new local upper Lipschitz stability result and the associated local error bound for solutions of parametric Karush–Kuhn–Tucker systems corresponding to variational problems with Lipschitzian base mappings and constraints possessing Lipschitzian derivatives, without any constraint qualifications. This property is equivalent to the notion of noncriticality of the Lagrange multiplier associated with the primal solution, appropriately extended to this nonsmooth setting, which is weaker than second-order sufficiency. All this extends several results previously known only for optimization problems with twice differentiable data, or under some constraint qualifications. In addition, our results are obtained in the more general variational setting.
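In the classical smooth equality-constrained special case, the local error bound in question takes the shape sketched below (a rough statement only; the precise parametric and nonsmooth versions are what the paper actually proves): for all $(x,\lambda)$ near $(\bar x,\bar\lambda)$, with $\mathcal{M}(\bar x)$ the multiplier set associated with $\bar x$,

```latex
\operatorname{dist}\bigl((x,\lambda),\ \{\bar x\}\times\mathcal{M}(\bar x)\bigr)
\;\le\;
\kappa\,\Bigl(\,\bigl\|\nabla_{x} L(x,\lambda)\bigr\| + \bigl\|c(x)\bigr\|\,\Bigr),
```

and such a bound holds precisely when $\bar\lambda$ is a noncritical multiplier, which is how the error bound and noncriticality become equivalent.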
GLOBAL CONVERGENCE OF AUGMENTED LAGRANGIAN METHODS APPLIED TO OPTIMIZATION PROBLEMS WITH DEGENERATE CONSTRAINTS, INCLUDING PROBLEMS WITH COMPLEMENTARITY CONSTRAINTS, 2012
Cited by 7 (2 self)
Abstract: We consider global convergence properties of augmented Lagrangian methods on problems with degenerate constraints, with a special emphasis on mathematical programs with complementarity constraints (MPCC). In the general case, we show convergence to stationary points of the problem under an error bound condition for the feasible set (which is weaker than constraint qualifications), assuming that the iterates have some modest features of approximate local minimizers of the augmented Lagrangian. For MPCC, we first argue that even weak forms of general constraint qualifications suitable for convergence of augmented Lagrangian methods, such as the recently proposed relaxed positive linear dependence condition, should not be expected to hold, and thus special analysis is needed. We next obtain a rather complete picture, showing that under the MPCC-linear independence constraint qualification (the usual assumption in this context) accumulation points of the iterates are guaranteed to be C-stationary for MPCC (better than weakly stationary), but in general need not be M-stationary (hence, not strongly stationary either). However, strong stationarity is guaranteed if the generated dual sequence is bounded, which we show to be the typical …
Local convergence of the method of multipliers for variational and optimization problems under the sole noncriticality assumption, August 2013. Available at http://pages.cs.wisc.edu/~solodov/solodov.html
Cited by 3 (2 self)
Abstract: We present a local convergence analysis of the method of multipliers for equality-constrained variational problems (in the special case of optimization, also called the augmented Lagrangian method) under the sole assumption that the dual starting point is close to a noncritical Lagrange multiplier (which is weaker than second-order sufficiency). Local superlinear convergence is established under appropriate control of the penalty parameter values. For optimization problems, we additionally demonstrate local linear convergence for sufficiently large fixed penalty parameters. Both exact and inexact versions of the method are considered. Contributions with respect to previous state-of-the-art analyses for equality-constrained problems consist in the extension to the variational setting, the use of the weaker noncriticality assumption instead of the usual second-order sufficient optimality condition, and relaxed smoothness requirements on the problem data. In the context of optimization problems, this gives the first local convergence results for the augmented Lagrangian method under assumptions that do not include any constraint qualifications and are weaker than the second-order sufficient optimality condition. We also show that the analysis under the noncriticality assumption cannot be extended to the case with inequality constraints, unless the strict complementarity condition is added (this, however, still gives a new result).
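As a concrete reference point, the basic exact method of multipliers alternates an inner minimization of the augmented Lagrangian with a first-order multiplier update. The sketch below illustrates this on a toy problem (minimize x₁² + x₂² subject to x₁ + x₂ = 1, whose solution is (0.5, 0.5) with multiplier −1); the inner solver (plain gradient descent), step size, and penalty value are illustrative choices, not the inexact variant analyzed in the paper.

```python
# Method of multipliers (augmented Lagrangian method) on a toy problem:
#   minimize x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0.
# Augmented Lagrangian: L_A(x, lam; rho) = f(x) + lam*c(x) + (rho/2)*c(x)^2.
# Exact solution: x* = (0.5, 0.5), multiplier lam* = -1.

def f_grad(x):
    return [2 * x[0], 2 * x[1]]

def c(x):
    return x[0] + x[1] - 1.0

def aug_lag_grad(x, lam, rho):
    # grad L_A = grad f + (lam + rho*c(x)) * grad c, with grad c = (1, 1)
    g = f_grad(x)
    t = lam + rho * c(x)
    return [g[0] + t, g[1] + t]

def method_of_multipliers(x, lam, rho=10.0, outer=10, inner=2000, step=0.05):
    for _ in range(outer):
        # Inner loop: approximately minimize L_A(., lam; rho) by gradient descent.
        for _ in range(inner):
            g = aug_lag_grad(x, lam, rho)
            x = [x[0] - step * g[0], x[1] - step * g[1]]
        # First-order multiplier update.
        lam = lam + rho * c(x)
    return x, lam

x_sol, lam_sol = method_of_multipliers([0.0, 0.0], 0.0)
print(x_sol, lam_sol)  # near (0.5, 0.5) and -1
```

With a fixed penalty ρ the dual error contracts linearly per outer iteration (here by roughly a factor 1/(1+ρ)), which is the local linear convergence behavior the abstract refers to for fixed penalty parameters.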
On tailored model predictive control for low-cost embedded systems with memory and computational power constraints, technical report, 2012
Cited by 1 (1 self)
Abstract: Even though many efficient formulations and implementations exist by now, predictive control on low-cost embedded systems with constrained memory and computing power is still challenging. We present an algorithm combining Nesterov's gradient method and the method of multipliers for linear model predictive control, which can exploit the problem structure and does not need slack variables. Moreover, we discuss implementation issues, focusing on embedded systems. We examine the performance using a benchmark example and illustrate the suitability for embedded systems using a simple mechatronic system with state and input constraints and a low-cost microcontroller.
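The Nesterov building block mentioned above can be sketched as an accelerated projected-gradient iteration on a box-constrained QP, the kind of subproblem that arises in linear MPC. The problem data below are purely illustrative, and this is a generic textbook accelerated scheme, not the paper's tailored implementation:

```python
import math

# Accelerated (Nesterov) projected gradient for a box-constrained QP:
#   minimize 0.5*x'Qx + q'x  subject to  lo <= x <= hi.
# Illustrative data: Q = 2*I, q = (-2, -2); the unconstrained minimizer is
# (1, 1), so the box [0, 0.5]^2 makes the solution (0.5, 0.5).

Q = [[2.0, 0.0], [0.0, 2.0]]
q = [-2.0, -2.0]
lo, hi = 0.0, 0.5
L = 2.0  # Lipschitz constant of the gradient = largest eigenvalue of Q

def grad(x):
    return [Q[0][0] * x[0] + Q[0][1] * x[1] + q[0],
            Q[1][0] * x[0] + Q[1][1] * x[1] + q[1]]

def proj(x):
    # Projection onto the box is a cheap clamp -- attractive on embedded targets.
    return [min(max(v, lo), hi) for v in x]

def nesterov_pg(x0, iters=200):
    x_prev, y, t = x0[:], x0[:], 1.0
    for _ in range(iters):
        g = grad(y)
        x = proj([y[i] - g[i] / L for i in range(2)])
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # Momentum (extrapolation) step.
        y = [x[i] + ((t - 1.0) / t_next) * (x[i] - x_prev[i]) for i in range(2)]
        x_prev, t = x, t_next
    return x_prev

x_opt = nesterov_pg([0.0, 0.0])
print(x_opt)  # close to [0.5, 0.5]
```

The appeal for embedded MPC is that each iteration needs only a matrix-vector product and a clamp: no matrix factorizations and a small, fixed memory footprint.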
Adaptive Augmented Lagrangian Methods for Large-Scale Equality Constrained Optimization, 2012
Abstract: We propose an augmented Lagrangian algorithm for solving large-scale equality constrained optimization problems. The novel feature of the algorithm is an adaptive update for the penalty parameter, motivated by recently proposed techniques for exact penalty methods. This adaptive updating scheme greatly improves the overall performance of the algorithm without sacrificing the strengths of the core augmented Lagrangian framework, such as its attractive local convergence behavior and its ability to be implemented matrix-free. The latter strength is particularly important given the interest in employing augmented Lagrangian algorithms for solving large-scale optimization problems. We focus on a trust-region algorithm, but also propose a line-search algorithm that employs the same adaptive penalty parameter updating scheme. We provide theoretical results related to the global convergence behavior of our algorithms and illustrate, through a set of numerical experiments, that they outperform traditional augmented Lagrangian methods in terms of critical performance measures.
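For contrast, the traditional penalty update that such adaptive schemes aim to improve on increases the penalty only after a subproblem solve fails to reduce the constraint violation sufficiently. The sketch below shows that conventional rule (the threshold and growth factors are conventional illustrative values, not the paper's adaptive scheme):

```python
# Traditional penalty update in augmented Lagrangian frameworks: increase rho
# only when the constraint violation fails to decrease by a fixed factor.

def update_penalty(viol_new, viol_old, rho, theta=0.5, gamma=10.0):
    """Return the next penalty parameter.

    viol_new / viol_old: ||c(x)|| after / before the latest subproblem solve.
    theta: required reduction factor; gamma: penalty growth factor
    (both are conventional illustrative values).
    """
    if viol_new > theta * viol_old:
        return gamma * rho   # insufficient progress toward feasibility
    return rho               # keep rho; rely on the multiplier update

print(update_penalty(0.9, 1.0, 1.0))  # 10.0 (violation barely decreased)
print(update_penalty(0.1, 1.0, 1.0))  # 1.0  (good progress)
```

Because this rule reacts only after full subproblem solves, the penalty can stay too small for many expensive iterations; the adaptive update described in the abstract is designed to steer the penalty parameter more promptly.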
CRITICAL LAGRANGE MULTIPLIERS: WHAT WE CURRENTLY KNOW ABOUT THEM, HOW THEY SPOIL OUR LIFE, AND WHAT WE CAN DO ABOUT IT (invited "Discussion Paper" for TOP), 2014
Abstract: We discuss a certain special subset of Lagrange multipliers, called critical, which usually exist when the multipliers associated with a given solution are not unique. This kind of multiplier appears to be important for a number of reasons, some understood better, some (currently) not fully. What is clear is that Newton and Newton-related methods have an amazingly strong tendency to generate sequences whose dual components converge to critical multipliers. This is quite striking because, typically, the set of critical multipliers is "thin" (the set of noncritical ones is relatively open and dense, meaning that its closure is the whole set). Apart from the mathematical curiosity of understanding the phenomenon for something as classical as the Newton method, the attraction to critical multipliers is relevant computationally: convergence to such multipliers is the reason for slow convergence of the Newton method in degenerate cases, whereas convergence to noncritical limits (if it were to happen) would give the superlinear rate. Moreover, the attraction phenomenon shows up not only for the basic Newton method, but also for other related techniques (for example, quasi-Newton methods and the linearly constrained augmented Lagrangian method). In spite of clear computational …
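A minimal example of a critical multiplier, standard in this literature (stated here from memory, so conventions may differ slightly from the paper's):

```latex
\min_{x \in \mathbb{R}} \; x^{2} \quad \text{s.t.} \quad x^{2} = 0,
\qquad
L(x,\lambda) = x^{2} + \lambda x^{2},
\qquad
\nabla^{2}_{xx} L(0,\lambda) = 2(1+\lambda).
```

The unique solution is $\bar x = 0$, and stationarity $\nabla_x L(0,\lambda) = 0$ holds for every $\lambda \in \mathbb{R}$, so the multiplier set is the whole real line. Only $\lambda = -1$ makes the Hessian of the Lagrangian singular on the critical cone (here all of $\mathbb{R}$), so $\lambda = -1$ is the lone critical multiplier, while the noncritical ones form an open dense set, matching the "thin" description above.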
COMBINING STABILIZED SQP WITH THE AUGMENTED LAGRANGIAN ALGORITHM, 2014
Abstract: For an optimization problem with general equality and inequality constraints, we propose an algorithm which uses subproblems of the stabilized SQP (sSQP) type for approximately solving subproblems of the augmented Lagrangian method. The motivation is to take advantage of the well-known robust behavior of the augmented Lagrangian algorithm, including on problems with degenerate constraints, while at the same time trying to reduce the overall algorithm locally to sSQP (which gives a fast local convergence rate under weak assumptions). Specifically, the algorithm first verifies whether the primal-dual sSQP step (with unit step size) makes good progress toward decreasing the violation of the optimality conditions for the original problem, and if so, takes this step. Otherwise, the primal part of the sSQP direction is used for a line search that decreases the augmented Lagrangian, keeping the multiplier estimate fixed for the time being. The overall algorithm has reasonable global convergence guarantees and inherits the strong convergence rate properties of sSQP under the same weak assumptions. Numerical results on degenerate problems and comparisons with some alternatives are reported.
Key words: stabilized sequential quadratic programming; augmented Lagrangian; superlinear convergence; global convergence.
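The switching logic described above can be sketched structurally as follows. All helper names here are hypothetical stand-ins for the algorithm's components (sSQP subproblem solver, KKT-residual measure, Armijo line search on the augmented Lagrangian); the actual acceptance tests and safeguards in the paper are more involved.

```python
# Structural sketch of the hybrid step: try the full primal-dual sSQP step,
# and fall back to an augmented Lagrangian line search on its primal part
# if the KKT residual does not decrease enough. Helper names are hypothetical.

def hybrid_step(x, lam, sSQP_step, kkt_residual, armijo_linesearch, eps=0.5):
    d, lam_new = sSQP_step(x, lam)            # primal-dual sSQP direction
    r_old = kkt_residual(x, lam)
    x_trial = [xi + di for xi, di in zip(x, d)]
    if kkt_residual(x_trial, lam_new) <= eps * r_old:
        return x_trial, lam_new               # accept the unit sSQP step
    # Fallback: line search on the augmented Lagrangian, multipliers frozen.
    alpha = armijo_linesearch(x, d, lam)
    return [xi + alpha * di for xi, di in zip(x, d)], lam

# Toy usage with stand-in components (for illustration only):
def toy_sSQP(x, lam):
    return [-x[0]], lam                       # a direction that zeroes x

x_new, lam_new = hybrid_step([1.0], 0.0, toy_sSQP,
                             lambda x, lam: abs(x[0]),   # toy KKT residual
                             lambda x, d, lam: 0.5)      # toy step size
print(x_new, lam_new)  # [0.0] 0.0
```

The point of the structure is that near a solution the sSQP branch is taken with unit steps, recovering the fast local rate, while the augmented Lagrangian branch supplies the global safeguard far from it.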