Results 1–10 of 14
Interior methods for nonlinear optimization
 SIAM REVIEW
, 2002
Cited by 127 (6 self)
Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
R.J.: Interior-point methods for nonconvex nonlinear programming: orderings and higher-order methods
 Mathematical Programming Ser. B
, 2000
Cited by 117 (8 self)
In this paper, we present the formulation and solution of optimization problems with complementarity constraints using an interior-point method for nonconvex nonlinear programming. We identify possible difficulties that could arise, such as unbounded faces of dual variables, linear dependence of constraint gradients, and initialization issues, and we suggest remedies. We include encouraging numerical results on the MacMPEC test suite of problems.
A PRIMAL-DUAL TRUST REGION ALGORITHM FOR NONLINEAR OPTIMIZATION
, 2003
Cited by 21 (3 self)
This paper concerns general (nonconvex) nonlinear optimization when first and second derivatives of the objective and constraint functions are available. The proposed method is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved using a second-derivative Newton-type method that employs a combined trust-region and line-search strategy to ensure global convergence. It is shown that the trust-region step can be computed by factorizing a sequence of systems with diagonally modified primal-dual structure, where the inertia of these systems can be determined without recourse to a special factorization method. This has the benefit that off-the-shelf linear system software can be used at all times, allowing the straightforward extension to large-scale problems. Numerical results are given for problems in the COPS test collection.
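The inertia claim in this abstract lends itself to a small illustration. The sketch below (toy data of our own, not from the paper) builds a symmetric primal-dual KKT-type matrix with a positive-definite Hessian block and checks that it has the "correct" inertia (n, m, 0) for n variables and m constraints; for illustration the inertia is read from the eigenvalues, whereas in practice an LDLᵀ factorization from off-the-shelf software yields it as a by-product.

```python
import numpy as np

# Toy primal-dual system: n = 2 primal variables, m = 1 constraint.
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])           # positive-definite Hessian block
A = np.array([[1.0, 1.0]])          # constraint Jacobian
D = np.array([[0.1]])               # positive diagonal (barrier) block

K = np.block([[H, A.T],
              [A, -D]])             # symmetric indefinite KKT-type matrix

# Inertia = (number of positive, negative, zero eigenvalues).
eigs = np.linalg.eigvalsh(K)
inertia = (int(np.sum(eigs > 1e-10)),
           int(np.sum(eigs < -1e-10)),
           int(np.sum(np.abs(eigs) <= 1e-10)))
print(inertia)  # (2, 1, 0): the "correct" inertia (n, m, 0)
```

Because H is positive definite and D is positive, the Schur complement argument guarantees inertia (n, m, 0) here; a subproblem solver would treat any other inertia as a signal to modify the system.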
Iterative solution of augmented systems arising in interior methods
 SIAM JOURNAL ON OPTIMIZATION
, 2007
Cited by 20 (1 self)
Iterative methods are proposed for certain augmented systems of linear equations that arise in interior methods for general nonlinear optimization. Interior methods define a sequence of KKT equations that represent the symmetrized (but indefinite) equations associated with Newton’s method for a point satisfying the perturbed optimality conditions. These equations involve both the primal and dual variables and become increasingly ill-conditioned as the optimization proceeds. In this context, an iterative linear solver must not only handle the ill-conditioning but also detect the occurrence of KKT matrices with the wrong matrix inertia. A one-parameter family of equivalent linear equations is formulated that includes the KKT system as a special case. The discussion focuses on a particular system from this family, known as the “doubly augmented system,” that is positive definite with respect to both the primal and dual variables. This property means that a standard preconditioned conjugate-gradient method involving both primal and dual variables will either terminate successfully or detect if the KKT matrix has the wrong inertia. Constraint preconditioning is a well-known technique for preconditioning the conjugate-gradient method on augmented systems. A family of constraint preconditioners is proposed that provably eliminates the inherent ill-conditioning in the augmented system. A considerable benefit of combining constraint preconditioning with the doubly augmented system is that the preconditioner need not be applied exactly. Two particular “active-set” constraint preconditioners are formulated that involve only a subset of the rows of the augmented system and thereby may be applied with considerably less work. Finally, some numerical experiments illustrate the numerical performance of the proposed preconditioners and highlight some theoretical properties of the preconditioned matrices.
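The "doubly augmented" construction can be demonstrated on a toy system. The sketch below assumes one common formulation, with leading block H + 2AᵀD⁻¹A, which may differ from the paper's exact sign and scaling conventions; it checks that the doubly augmented matrix is positive definite (hence amenable to conjugate gradients) and that its solution reproduces the original augmented system's solution, with the dual variables negated.

```python
import numpy as np

# Toy augmented (KKT) system  [H  A^T; A  -D] [x; y] = [b1; b2],
# with H, D symmetric positive definite (illustrative data only).
H = np.array([[4.0, 1.0], [1.0, 3.0]])
A = np.array([[1.0, 1.0]])
D = np.array([[0.5]])
b1 = np.array([1.0, 2.0])
b2 = np.array([0.3])

K = np.block([[H, A.T], [A, -D]])          # symmetric but indefinite
sol = np.linalg.solve(K, np.concatenate([b1, b2]))
x, y = sol[:2], sol[2:]

# Doubly augmented counterpart (one common form; conventions vary):
#   [H + 2 A^T D^{-1} A,  A^T;  A,  D] [x; y~] = [b1 + 2 A^T D^{-1} b2; b2]
# with y~ = -y.  It is positive definite when H + A^T D^{-1} A is.
Dinv = np.linalg.inv(D)
K2 = np.block([[H + 2 * A.T @ Dinv @ A, A.T], [A, D]])
rhs2 = np.concatenate([b1 + 2 * A.T @ Dinv @ b2, b2])
sol2 = np.linalg.solve(K2, rhs2)

print(np.all(np.linalg.eigvalsh(K2) > 0))            # True
print(np.allclose(sol2[:2], x), np.allclose(sol2[2:], -y))  # True True
```

The equivalence follows by adding 2AᵀD⁻¹ times the second block row to the first and flipping the sign of y, so no information is lost in passing to the positive-definite form.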
A PRIMAL-DUAL AUGMENTED LAGRANGIAN
, 2008
Cited by 16 (2 self)
Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1-LCL) method.
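One published form of such a function (stated here as an assumption about the general construction, not necessarily the exact function of this paper) is M(x, y) = f(x) − c(x)ᵀy_e + ‖c(x)‖²/(2μ) + ‖c(x) + μ(y − y_e)‖²/(2μ), minimized jointly in (x, y). The toy check below verifies that a KKT pair of a small equality-constrained problem is a stationary point of M in the joint primal-dual variables.

```python
import numpy as np

# Toy problem:  min 1/2 ||x||^2  s.t.  x1 + x2 = 1.
# KKT solution: x* = (1/2, 1/2), y* = 1/2.
a = np.array([1.0, 1.0])
x_star, y_star = np.array([0.5, 0.5]), 0.5

def f(x): return 0.5 * x @ x
def c(x): return np.array([a @ x - 1.0])

mu, y_e = 0.1, np.array([y_star])   # penalty parameter, multiplier estimate

def M(x, y):
    # Hypothetical primal-dual augmented Lagrangian (one common form).
    cv = c(x)
    r = cv + mu * (y - y_e)
    return f(x) - cv @ y_e + cv @ cv / (2 * mu) + r @ r / (2 * mu)

def grad(v, h=1e-6):
    # Central finite-difference gradient in the joint variables (x, y).
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v); e[i] = h
        g[i] = (M((v + e)[:2], (v + e)[2:]) - M((v - e)[:2], (v - e)[2:])) / (2 * h)
    return g

v_star = np.concatenate([x_star, [y_star]])
print(np.max(np.abs(grad(v_star))))   # ~0: (x*, y*) is stationary in (x, y)
```

Stationarity in y forces c(x) + μ(y − y_e) toward zero, which is how this construction keeps the dual variables under explicit control during the subproblem solve.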
REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS
, 2011
Cited by 15 (3 self)
We present the formulation and analysis of a new sequential quadratic programming (SQP) method for general nonlinearly constrained optimization. The method pairs a primal-dual generalized augmented Lagrangian merit function with a flexible line search to obtain a sequence of improving estimates of the solution. This function is a primal-dual variant of the augmented Lagrangian proposed by Hestenes and Powell in the early 1970s. A crucial feature of the method is that the QP subproblems are convex, but formed from the exact second derivatives of the original problem. This is in contrast to methods that use a less accurate quasi-Newton approximation. Additional benefits of this approach include the following: (i) each QP subproblem is regularized; (ii) the QP subproblem always has a known feasible point; and (iii) a projected gradient method may be used to identify the QP active set when far from the solution.
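The idea of a convex QP subproblem built from exact second derivatives can be sketched as follows; the convexification used here (a uniform eigenvalue shift of the exact Hessian) is our own illustrative choice, not necessarily the paper's scheme, and all data are made up.

```python
import numpy as np

# Toy data for an equality-constrained QP subproblem.
H = np.array([[1.0, 2.0], [2.0, -3.0]])   # exact (indefinite) Hessian
A = np.array([[1.0, 1.0]])                # constraint Jacobian
g = np.array([1.0, -1.0])                 # objective gradient
b = np.array([0.2])                       # constraint residual

# Convexify: shift the spectrum of H so that the QP
#   min g^T p + 1/2 p^T Hbar p   s.t.  A p + b = 0
# is convex (one simple choice of regularization).
lam_min = np.linalg.eigvalsh(H).min()
delta = max(0.0, 1e-2 - lam_min)
H_bar = H + delta * np.eye(2)

# Solve the (now convex) QP via its KKT system.
K = np.block([[H_bar, A.T], [A, np.zeros((1, 1))]])
p = np.linalg.solve(K, np.concatenate([-g, -b]))[:2]
print(np.allclose(A @ p + b, 0.0))   # step satisfies the linearized constraint
```

Convexity makes the subproblem well-posed regardless of where the iterate sits, while the exact curvature is retained up to the shift delta.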
CONVERGENCE ANALYSIS OF AN INTERIOR-POINT METHOD FOR NONCONVEX NONLINEAR PROGRAMMING
, 2007
Cited by 4 (2 self)
In this paper, we present global and local convergence results for an interior-point method for nonlinear programming. The algorithm uses an ℓ1 penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb’s strictly feasible interior-point method [6]. The global convergence of the algorithm is proved under mild assumptions, and local analysis shows that it converges Q-quadratically. The proposed approach improves on existing results in several ways: (1) the convergence analysis does not assume boundedness of dual iterates, (2) local convergence does not require the Linear Independence Constraint Qualification, (3) the solution of the penalty problem is shown to locally converge to optima that may not satisfy the Karush-Kuhn-Tucker conditions, and (4) the algorithm is applicable to mathematical programs with equilibrium constraints.
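The ℓ1 penalty relaxation is easy to visualize in one dimension. The toy example below (ours, not from the paper) shows the exactness of the ℓ1 penalty: for min x² subject to x ≥ 1, the penalized objective x² + ρ·max(0, 1 − x) has its minimizer at the true solution x* = 1 once ρ ≥ 2, while a too-small ρ leaves the constraint violated.

```python
import numpy as np

# Toy problem: min x^2 subject to x >= 1  (solution x* = 1).
def penalty(x, rho):
    # l1 penalty: constraint violation enters the objective linearly.
    return x**2 + rho * np.maximum(0.0, 1.0 - x)

xs = np.linspace(-2.0, 3.0, 200001)   # fine grid for a brute-force argmin

for rho in (0.5, 2.0, 10.0):
    x_min = xs[np.argmin(penalty(xs, rho))]
    print(rho, round(x_min, 3))
# rho = 0.5  -> minimizer x = rho/2 = 0.25 (constraint violated)
# rho >= 2   -> the penalty is exact: the minimizer sits at x* = 1
```

Because the penalty is finite everywhere, it also bounds the multipliers, which is the regularization role it plays in the algorithm described above.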
Mathematical programs with complementarity constraints: Convergence properties of a smoothing method
 Mathematics of Operations Research
, 2007
Cited by 1 (0 self)
In the present paper, optimization problems P with complementarity constraints are considered. Characterizations of local minimizers x of P of order one and two are presented. We analyze a parametric smoothing approach for solving these programs in which P is replaced by a perturbed problem P_τ depending on a (small) parameter τ. We are interested in the convergence behavior of the feasible set F_τ and the convergence of the solutions x_τ of P_τ for τ → 0. In particular, it is shown that under generic assumptions the solutions x_τ are unique and converge to a solution x of P with a rate O(τ). Moreover, the convergence for the Hausdorff distance d(F_τ, F) between the feasible sets of P_τ and P is of order O(τ).
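A standard concrete instance of such a smoothing (our illustration; the paper's particular perturbation may differ) is the smoothed Fischer-Burmeister function φ_τ(a, b) = a + b − √(a² + b² + 2τ²). For a, b ≥ 0, the equation φ_τ(a, b) = 0 is algebraically equivalent to the relaxed complementarity a·b = τ², so its roots approach an exactly complementary pair as τ → 0.

```python
import numpy as np

def phi(a, b, tau):
    # Smoothed Fischer-Burmeister function: for a, b >= 0 its zeros
    # satisfy a*b = tau^2 (square the identity a + b = sqrt(...)).
    return a + b - np.sqrt(a * a + b * b + 2.0 * tau * tau)

a = 2.0
for tau in (1.0, 0.1, 0.01):
    b = tau**2 / a                  # root of phi(a, ., tau) for fixed a
    print(tau, phi(a, b, tau), a * b)
# As tau -> 0 the perturbed solutions converge to the complementary
# pair (a, b) = (2, 0) satisfying a * b = 0.
```

This makes the perturbed feasible set a smooth manifold for τ > 0, which is what allows standard nonlinear-programming machinery to be applied to the perturbed problems.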
A primal-dual augmented Lagrangian
 Computational Optimization and Applications
Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we consider the formulation of subproblems in which the objective function is a generalization of the Hestenes-Powell augmented Lagrangian function. The main feature of the generalized function is that it is minimized with respect to both the primal and the dual variables simultaneously. The benefits of this approach include: (i) the ability to control the quality of the dual variables during the solution of the subproblem; (ii) the availability of improved dual estimates on early termination of the subproblem; and (iii) the ability to regularize the subproblem by imposing explicit bounds on the dual variables. We propose two primal-dual variants of conventional primal methods: a primal-dual bound-constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1-LCL) method. Finally, a new sequential quadratic programming (pdSQP) method is proposed that uses the primal-dual augmented Lagrangian as a merit function.
A Note on Error Estimates for some Interior Penalty Methods
We consider the interior penalty methods based on the logarithmic and inverse barriers. Under the Mangasarian-Fromovitz constraint qualification and appropriate growth conditions on the objective function, we derive computable estimates for the distance from the subproblem solution to the solution of the original problem. Some of those estimates are shown to be sharp.
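A one-dimensional example of such a computable estimate (ours, for illustration): for min x subject to x ≥ 1, the logarithmic-barrier subproblem minimizes x − μ ln(x − 1) over x > 1, whose stationary point is x(μ) = 1 + μ, so the distance from the subproblem solution to the true solution x* = 1 is exactly μ.

```python
import numpy as np

# Toy problem: min x subject to x >= 1  (solution x* = 1).
# Log-barrier subproblem:  minimize  B(x) = x - mu*ln(x - 1)  over x > 1.
# Stationarity: 1 - mu/(x - 1) = 0  =>  x(mu) = 1 + mu.
def barrier_min(mu):
    xs = 1.0 + np.linspace(1e-6, 2.0, 200001)   # grid over x > 1
    B = xs - mu * np.log(xs - 1.0)
    return xs[np.argmin(B)]

for mu in (0.5, 0.05, 0.005):
    print(mu, barrier_min(mu) - 1.0)   # distance to x* is ~ mu, i.e. O(mu)
```

The O(μ) behavior visible here is the kind of distance estimate the note derives in general, under constraint qualification and growth conditions rather than by closed-form solution.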