Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities
Mathematics of Computation, 1998
Cited by 54 (19 self)
Abstract. The smoothing Newton method for solving a system of nonsmooth equations F(x) = 0, which may arise from the nonlinear complementarity problem, the variational inequality problem, or other problems, can be regarded as a variant of the smoothing method. At the kth step, the nonsmooth function F is approximated by a smooth function f(·, ε_k), and the derivative of f(·, ε_k) at x^k is used as the Newton iteration matrix. The merits of smoothing methods and smoothing Newton methods are global convergence and convenience in handling. In this paper, we show that the smoothing Newton method is also superlinearly convergent if F is semismooth at the solution and f satisfies a Jacobian consistency property. We show that most common smoothing functions, such as the Gabriel–Moré function, have this property. As an application, we show that for box constrained variational inequalities, if the involved function is a uniform P-function, the iteration sequence generated by the smoothing Newton method converges to the unique solution of the problem globally and superlinearly (quadratically).
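The kth step described in this abstract can be sketched on a small linear complementarity problem, x ≥ 0, Mx + q ≥ 0, xᵀ(Mx + q) = 0, using the Chen–Harker–Kanzow–Smale smoothing of min(a, b) as an illustrative smoothing function (the particular M, q, and the ε-reduction schedule below are assumptions for the sketch, not from the paper):

```python
import numpy as np

def phi(a, b, eps):
    """CHKS smoothing of min(a, b): phi -> min(a, b) as eps -> 0."""
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4 * eps ** 2))

def smoothing_newton_lcp(M, q, eps=1.0, tol=1e-10, max_iter=50):
    """Newton steps on the smoothed system phi_eps(x_i, (Mx+q)_i) = 0,
    shrinking the smoothing parameter eps after each step."""
    x = np.ones(len(q))
    for _ in range(max_iter):
        F = M @ x + q
        H = phi(x, F, eps)
        if np.linalg.norm(H) < tol and eps < tol:
            break
        r = np.sqrt((x - F) ** 2 + 4 * eps ** 2)
        # Jacobian of the smoothed map: diag(da) + diag(db) @ M
        da = 0.5 * (1 - (x - F) / r)
        db = 0.5 * (1 + (x - F) / r)
        J = np.diag(da) + db[:, None] * M
        x = x - np.linalg.solve(J, H)
        eps *= 0.1  # drive the smoothing parameter to zero
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -1.0])
x = smoothing_newton_lcp(M, q)
print(x)  # approx [1.5, 0.0]
```

Here M is positive definite (a P-matrix), so the LCP has the unique solution (1.5, 0), and the iterates settle on it as ε shrinks.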
On the constant positive linear dependence condition and its application to SQP methods
SIAM Journal on Optimization, 2000
Cited by 50 (3 self)
Abstract. In this paper, we introduce a constant positive linear dependence condition (CPLD), which is weaker than the Mangasarian–Fromovitz constraint qualification (MFCQ) and the constant rank constraint qualification (CRCQ). We show that a limit point of a sequence of approximating Karush–Kuhn–Tucker (KKT) points is a KKT point if the CPLD holds there. We show that a KKT point satisfying the CPLD and the strong second-order sufficiency conditions (SSOSC) is an isolated KKT point. We then establish convergence of a general sequential quadratic programming (SQP) method under the CPLD and the SSOSC. Finally, we apply these results to analyze the feasible SQP method proposed by Panier and Tits in 1993 for inequality constrained optimization problems. We establish its global convergence under the SSOSC and a condition slightly weaker than the Mangasarian–Fromovitz constraint qualification, and we prove superlinear convergence of a modified version of this algorithm under the SSOSC and a condition slightly weaker than the linear independence constraint qualification.
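For orientation, a standard statement of the two qualifications being compared (the NLP notation below — minimize f(x) subject to h(x) = 0, g(x) ≤ 0, with active set A(x*) — is assumed, not taken from this abstract):

```latex
% MFCQ at x^*: equality gradients linearly independent, and a strictly
% feasible direction exists:
\nabla h_1(x^*),\dots,\nabla h_m(x^*)\ \text{linearly independent},\qquad
\exists\, d:\ \nabla h_i(x^*)^{\top} d = 0\ \forall i,\quad
\nabla g_j(x^*)^{\top} d < 0\ \forall j \in A(x^*).

% CPLD at x^*: for every I \subseteq \{1,\dots,m\} and J \subseteq A(x^*),
% if the gradients are positively linearly dependent at x^*, i.e.
\sum_{i \in I} \lambda_i \nabla h_i(x^*)
  + \sum_{j \in J} \mu_j \nabla g_j(x^*) = 0,
\qquad \mu_j \ge 0,\quad (\lambda,\mu) \neq 0,
% then the same gradient family remains linearly dependent for all x in
% a neighborhood of x^*.
```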
On Solving Mathematical Programs With Complementarity Constraints As Nonlinear Programs
2002
Cited by 41 (2 self)
Abstract. We investigate the possibility of solving mathematical programs with complementarity constraints (MPCCs) using classical algorithms and procedures from nonlinear programming. Although MPCCs do not satisfy a constraint qualification, we establish sufficient conditions for their Lagrange multiplier set to be nonempty in two different formulations. MPCCs that have nonempty Lagrange multiplier sets and that satisfy the quadratic growth condition can be approached by the elastic mode with a bounded penalty parameter. This transforms the MPCC into a nonlinear program with additional variables that has an isolated stationary point and local minimum at the solution of the original problem, which in turn makes it approachable by a sequential quadratic programming algorithm. The robustness of the elastic mode when applied to MPCCs is demonstrated by several numerical examples.
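The elastic-mode reformulation can be sketched on a toy MPCC: minimize (x1−1)² + (x2−1)² subject to x1, x2 ≥ 0 and x1·x2 = 0. The complementarity constraint is relaxed to x1·x2 ≤ s with an elastic variable s ≥ 0 penalized in the objective, and the resulting smooth NLP is handed to an off-the-shelf SQP-type solver (the specific penalty value, starting point, and use of SciPy's SLSQP are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from scipy.optimize import minimize

rho = 100.0  # penalty on the elastic variable (assumed large enough)

def obj(z):
    x1, x2, s = z
    # original objective plus the elastic penalty term
    return (x1 - 1) ** 2 + (x2 - 1) ** 2 + rho * s

cons = [
    # relaxed complementarity: x1 * x2 <= s
    {"type": "ineq", "fun": lambda z: z[2] - z[0] * z[1]},
]
bounds = [(0, None), (0, None), (0, None)]  # x1, x2, s all nonnegative

res = minimize(obj, x0=[0.8, 0.2, 0.1], bounds=bounds,
               constraints=cons, method="SLSQP")
x1, x2, s = res.x
print(x1, x2, s)
```

From this start the solver lands on one of the two local minima, (1, 0) or (0, 1), with the elastic variable driven to zero, so the original complementarity x1·x2 = 0 is recovered at the solution.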
Modifying SQP for degenerate problems
Preprint ANL/MCS-P699-1097, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill., 1997
Cited by 38 (5 self)
Abstract. Most local convergence analyses of the sequential quadratic programming (SQP) algorithm for nonlinear programming make strong assumptions about the solution, namely, that the active constraint gradients are linearly independent and that there are no weakly active constraints. In this paper, we establish a framework for variants of SQP that retain the characteristic superlinear convergence rate even when these assumptions are relaxed, proving general convergence results and placing some recently proposed SQP variants in this framework. We discuss the reasons why implementations of SQP often continue to exhibit good local convergence behavior even when the assumptions commonly made in the analysis are violated. Finally, we describe a new algorithm that formalizes and extends standard SQP implementation techniques, and we prove convergence results for this method also. AMS subject classifications. 90C33, 90C30, 49M45
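The kind of degeneracy this abstract targets is easy to exhibit numerically: at a solution where two active constraints have parallel gradients, the active-constraint Jacobian is rank deficient, so the linear-independence assumption (LICQ) fails. A minimal sketch (the constraints g1, g2 below are hypothetical, chosen only to produce dependent gradients at the origin):

```python
import numpy as np

# Hypothetical constraints g1(x) = x1 >= 0 and g2(x) = x1 + x2**2 >= 0.
# Both are active at x* = (0, 0), and their gradients coincide there.
def grad_g1(x):
    return np.array([1.0, 0.0])

def grad_g2(x):
    return np.array([1.0, 2.0 * x[1]])

x_star = np.array([0.0, 0.0])
A = np.vstack([grad_g1(x_star), grad_g2(x_star)])  # active-constraint Jacobian
rank = np.linalg.matrix_rank(A)
print(rank)  # 1, but there are 2 active constraints: LICQ fails at x*
```

With a rank-deficient active Jacobian the KKT multipliers are non-unique, which is precisely the regime where the SQP variants analyzed in this paper are designed to keep superlinear convergence.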
Local behavior of an iterative framework for generalized equations with nonisolated solutions
Math. Program., Ser. A, 2002
Modified Wilson's Method For Nonlinear Programs With Nonunique Multipliers
1999
Cited by 22 (3 self)
In this paper, we deal with arbitrary nonlinear constraint functions. We first present a general framework for obtaining superlinear convergence of Newton-type methods for generalized equations with compact solution sets. Our main aim is then to show how this framework can be applied to the Karush–Kuhn–Tucker system and to derive conditions that imply local Q-quadratic convergence of a modified Wilson method but do not imply uniqueness of the multiplier vector. This rate of convergence is shown for the distances of the iterates to the set of KKT points.
Constraint identification and algorithm stabilization for degenerate nonlinear programs
Mathematical Programming, 2003
Cited by 20 (1 self)
Abstract. In the vicinity of a solution of a nonlinear programming problem at which both strict complementarity and linear independence of the active constraint gradients may fail to hold, we describe a technique for distinguishing weakly active from strongly active constraints. We show that this information can be used to modify the sequential quadratic programming algorithm so that it exhibits superlinear convergence to the solution under assumptions weaker than those made in previous analyses.
Degenerate Nonlinear Programming with a Quadratic Growth Condition
Preprint ANL/MCS-P761-0699, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill.
Cited by 18 (5 self)
Abstract. We show that the quadratic growth condition and the Mangasarian–Fromovitz constraint qualification imply that local minima of nonlinear programs are isolated stationary points. As a result, when started sufficiently close to such points, an L1 exact penalty sequential quadratic programming algorithm will induce at least R-linear convergence of the iterates to such a local minimum. We construct an example of a degenerate nonlinear program with a unique local minimum satisfying the quadratic growth condition and the Mangasarian–Fromovitz constraint qualification but for which no positive semidefinite augmented Lagrangian exists. We present numerical results obtained using several nonlinear programming packages on this example and discuss its implications for some algorithms.
A Superlinearly Convergent Sequential Quadratically Constrained Quadratic Programming Algorithm For Degenerate Nonlinear Programming
 SIAM Journal on Optimization
Cited by 18 (2 self)
Abstract. We present an algorithm that achieves superlinear convergence for nonlinear programs satisfying the Mangasarian–Fromovitz constraint qualification and the quadratic growth condition. This convergence result is obtained despite the potential lack of a locally convex augmented Lagrangian. The algorithm solves a succession of subproblems that have quadratic objectives and quadratic constraints, both possibly nonconvex. The use of a trust-region constraint guarantees that any stationary point of the subproblem induces superlinear convergence, which avoids the need to compute a global minimum of the subproblem.