Results 1–7 of 7
Using interior-point methods within an outer approximation framework for mixed-integer nonlinear programming
IMA MINLP Issue
Abstract

Cited by 2 (0 self)
Abstract. Interior-point methods for nonlinear programming have been demonstrated to be quite efficient, especially for large-scale problems, and, as such, they are ideal candidates for solving the nonlinear subproblems that arise in the solution of mixed-integer nonlinear programming problems via outer approximation. However, traditionally, infeasible primal-dual interior-point methods have had two main perceived deficiencies: (1) lack of infeasibility detection capabilities, and (2) poor performance after a warm-start. In this paper, we propose the exact primal-dual penalty approach as a means to overcome these deficiencies. The generality of this approach to handle any change to the problem makes it suitable for the outer approximation framework, where each nonlinear subproblem can differ from the others in the sequence in a variety of ways. Additionally, we examine cases where the nonlinear subproblems take on special forms, namely those of second-order cone programming problems and semidefinite programming problems. Encouraging numerical results are provided.
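The exactness property that makes penalty approaches like this attractive can be illustrated on a one-dimensional toy problem. This is a minimal sketch, not the paper's primal-dual method: for the problem min x² subject to x ≥ 1, the optimal multiplier is 2, so an ℓ1 penalty weight above 2 recovers the constrained solution exactly, while a smaller weight does not.

```python
def l1_penalty(f, viol, rho):
    """phi(x) = f(x) + rho * (constraint violation at x)."""
    return lambda x: f(x) + rho * viol(x)

def minimize_1d(phi, lo, hi, tol=1e-8):
    """Golden-section search for a unimodal function on [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2
    while hi - lo > tol:
        x1, x2 = hi - g * (hi - lo), lo + g * (hi - lo)
        if phi(x1) < phi(x2):
            hi = x2
        else:
            lo = x1
    return (lo + hi) / 2

# min x^2  s.t.  x >= 1; the optimal multiplier is 2, so any rho > 2 is exact
f = lambda x: x * x
viol = lambda x: max(0.0, 1.0 - x)

x_weak = minimize_1d(l1_penalty(f, viol, 1.0), -2.0, 3.0)   # rho too small
x_exact = minimize_1d(l1_penalty(f, viol, 4.0), -2.0, 3.0)  # rho large enough
```

With rho = 1 the penalized minimizer sits at x = 0.5, violating the constraint; with rho = 4 it lands exactly on x = 1, the true constrained solution, without rho having to grow to infinity.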
A MODIFIED FILTER SQP METHOD AS A TOOL FOR OPTIMAL CONTROL OF NONLINEAR SYSTEMS WITH SPATIO–TEMPORAL DYNAMICS
Abstract

Cited by 1 (0 self)
Our aim is to adapt Fletcher’s filter approach to solve optimal control problems for systems described by nonlinear Partial Differential Equations (PDEs) with state constraints. To this end, we propose a number of modifications of the filter approach, which are well suited for our purposes. Then, we discuss possible ways of cooperation between the filter method and a PDE solver, and one of them is selected and tested.
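The core of Fletcher's filter approach, which this paper adapts, can be sketched in a few lines. This is a generic textbook sketch with illustrative envelope margins `beta` and `gamma`, not the authors' modified method: each trial point is judged by the pair (objective value f, constraint violation h), and accepted only if it sufficiently improves on every pair stored in the filter.

```python
def acceptable(f, h, filter_pairs, beta=0.99, gamma=1e-4):
    """Accept (f, h) if, against every stored pair, it improves either the
    objective (with a margin tied to that pair's violation) or the violation."""
    return all(f <= fj - gamma * hj or h <= beta * hj for fj, hj in filter_pairs)

def add_pair(f, h, filter_pairs):
    """Insert (f, h) into the filter, discarding stored pairs it dominates."""
    kept = [(fj, hj) for fj, hj in filter_pairs if fj < f or hj < h]
    kept.append((f, h))
    return kept
```

For a filter holding the single pair (1.0, 0.5), a trial point (0.9, 0.6) is acceptable (better objective) while (1.1, 0.6) is rejected (worse in both measures); a point such as (0.5, 0.1) dominates the stored pair and replaces it.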
A Penalty-Interior-Point Algorithm for Nonlinear Constrained Optimization
, 2011
Abstract
Abstract Penalty and interior-point methods for nonlinear optimization problems have enjoyed great successes for decades. Penalty methods have proved to be effective for a variety of problem classes due to their regularization effects on the constraints. They have also been shown to allow for rapid infeasibility detection. Interior-point methods have become the workhorse in large-scale optimization due to their Newton-like qualities, both in terms of their scalability and convergence behavior. Each of these two strategies, however, has certain disadvantages that make their use either impractical or inefficient for certain classes of problems. The goal of this paper is to present a penalty-interior-point method that possesses the advantages of penalty and interior-point techniques, but does not suffer from their disadvantages. Numerous attempts have been made along these lines in recent years, each with varying degrees of success. The novel feature of the algorithm in this paper is that our focus is not only on the formulation of the penalty-interior-point subproblem itself, but on the design of updates for the penalty and interior-point parameters. The updates we propose are designed so that rapid convergence to a solution of the nonlinear optimization problem or an infeasible stationary point is attained. We motivate the convergence properties of our algorithm and illustrate its practical performance on a large set of problems, including sets of problems that exhibit degeneracy or are infeasible.
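The division of labor this abstract emphasizes, updating the penalty weight and the barrier parameter rather than only formulating the subproblem, can be caricatured as a pair of tests. This is a schematic sketch with made-up constants, not the paper's actual rules: shrink the barrier parameter as stationarity is approached, and shrink the objective's penalty weight when infeasibility stalls, so that the iteration drifts toward pure feasibility restoration (and hence toward an infeasible stationary point when no feasible point exists).

```python
def update_parameters(rho, mu, kkt_res, infeas, infeas_prev,
                      kappa=0.5, theta=0.9, feas_tol=1e-8):
    """Schematic update rules (illustrative constants, not from the paper):
    rho weights the objective against constraint violation, mu is the
    barrier parameter, kkt_res is the current stationarity residual."""
    if kkt_res <= mu:
        mu *= kappa        # subproblem solved accurately enough: tighten barrier
    if infeas > feas_tol and infeas > theta * infeas_prev:
        rho *= kappa       # constraint violation stalling: emphasize feasibility
    return rho, mu
```

A nearly feasible, nearly stationary iterate only shrinks mu; a stalled infeasible iterate only shrinks rho, which is the mechanism that lets such a scheme detect infeasibility quickly.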
unknown title
Abstract
convergence of a second derivative SQP method for minimizing the exact ℓ1 merit function for a fixed value of the penalty parameter. This result required the properties of a so-called Cauchy step, which was itself computed from a so-called predictor step. In addition, they allowed for the additional computation of a variety of (optional) accelerator steps that were intended to improve the efficiency of the algorithm. The main purpose of this paper is to prove that a nonmonotone variant of the algorithm is quadratically convergent for two specific realizations of the accelerator step; this is verified with preliminary numerical results on the Hock and Schittkowski test set. Once fast local convergence is established, we consider two specific aspects of the algorithm that are important for an efficient implementation. First, we discuss a strategy for defining the positive-definite matrix Bk used in computing the predictor step that is based on a limited-memory BFGS update. Second, we provide a simple strategy for updating the penalty parameter based on approximately minimizing the ℓ1 penalty function over a sequence of increasing values of the penalty parameter.
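The last idea, testing a sequence of increasing penalty values against the ℓ1 penalty function, can be sketched as a toy driver. The `step` oracle and the geometric schedule here are hypothetical stand-ins, not the authors' implementation:

```python
def l1_merit(f, c, x, rho):
    """Exact l1 merit function f(x) + rho * ||c(x)||_1 for equalities c(x) = 0."""
    return f(x) + rho * sum(abs(ci) for ci in c(x))

def select_penalty(f, c, x, step, rho=1.0, factor=10.0, max_tries=20):
    """Increase rho geometrically until the candidate step reduces the merit,
    i.e. until feasibility progress outweighs any objective increase."""
    for _ in range(max_tries):
        if l1_merit(f, c, step(x), rho) < l1_merit(f, c, x, rho):
            return rho
        rho *= factor
    return rho

# toy instance: the objective prefers larger x, the constraint wants x = 1
f = lambda x: -x
c = lambda x: [x - 1.0]
step = lambda x: x - 0.5          # a feasibility-improving trial step
rho = select_penalty(f, c, 2.0, step)
```

At rho = 1 the merit is unchanged by the step (the objective loss exactly cancels the feasibility gain), so the driver escalates once and returns rho = 10, at which the feasibility gain dominates.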
Journal of Computational and Applied Mathematics
Abstract
A new superlinearly convergent algorithm of combining QP …
A Sequential Quadratic . . . with Rapid Infeasibility Detection
, 2012
Abstract
We present a sequential quadratic optimization (SQO) algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, but has the important additional feature that fast local convergence is guaranteed when the algorithm is employed to solve infeasible instances. A two-phase strategy, carefully constructed parameter updates, and a line search are employed to promote such convergence. The first phase subproblem determines the highest level of improvement in linearized feasibility that can be attained locally. The second phase subproblem then seeks optimality in such a way that the resulting search direction attains a level of improvement in linearized feasibility that is proportional to that attained in the first phase. The subproblem formulations and parameter updates ensure that near an optimal solution, the algorithm reduces to a classical SQO method for optimization, and near an infeasible stationary point, the algorithm reduces to a (perturbed) SQO method for minimizing constraint violation. Global and local convergence guarantees for the algorithm are proved under common assumptions and numerical results are presented for a large set of test problems.
Optimality properties of an Augmented Lagrangian method on infeasible problems
, 2014
Abstract
Sometimes, the feasible set of an optimization problem that one aims to solve using a Nonlinear Programming algorithm is empty. In this case, two characteristics of the algorithm are desirable. On the one hand, the algorithm should converge to a minimizer of some infeasibility measure. On the other hand, one may wish to find a point with minimal infeasibility for which some optimality condition, with respect to the objective function, holds. Ideally, the algorithm should converge to a minimizer of the objective function subject to minimal infeasibility. In this paper the behavior of an Augmented Lagrangian algorithm with respect to those properties will be studied.
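The first desirable property, convergence to a minimizer of an infeasibility measure, can be observed on a tiny infeasible instance. This is a self-contained sketch of a classical augmented Lagrangian loop, not the paper's algorithm: with the incompatible constraints x = 0 and x = 2, the iterates approach x = 1, the minimizer of the squared infeasibility.

```python
def minimize_1d(phi, lo=-10.0, hi=10.0, tol=1e-10):
    """Golden-section search for a unimodal function on [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2
    while hi - lo > tol:
        x1, x2 = hi - g * (hi - lo), lo + g * (hi - lo)
        if phi(x1) < phi(x2):
            hi = x2
        else:
            lo = x1
    return (lo + hi) / 2

f = lambda x: (x - 3.0) ** 2
c = lambda x: [x, x - 2.0]            # x = 0 and x = 2: jointly infeasible

lam, rho, infeas_prev = [0.0, 0.0], 1.0, float("inf")
for _ in range(20):
    aug = lambda x, lam=lam, rho=rho: (
        f(x) + sum(l * ci for l, ci in zip(lam, c(x)))
        + 0.5 * rho * sum(ci ** 2 for ci in c(x)))
    x = minimize_1d(aug)                          # inner subproblem
    lam = [l + rho * ci for l, ci in zip(lam, c(x))]   # multiplier update
    infeas = sum(ci ** 2 for ci in c(x)) ** 0.5
    if infeas > 0.9 * infeas_prev:                # violation stalled:
        rho = min(rho * 10.0, 1e8)                # strengthen the penalty
    infeas_prev = infeas
```

As the penalty parameter grows, the objective's influence on the subproblem vanishes relative to the quadratic infeasibility term, so the iterates settle on the least-infeasible point rather than diverging, which is exactly the first behavior the abstract asks of an algorithm applied to an empty feasible set.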