Results 1-10 of 40
Interior methods for nonlinear optimization
 SIAM Review
, 2002
Abstract

Cited by 120 (5 self)
Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
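As a concrete reminder of what the barrier methods discussed in this survey look like, here is a minimal sketch of the classical logarithmic barrier approach for an inequality-constrained problem. The 1-D toy problem and the golden-section inner solver are our own illustrative choices, not taken from the survey.

```python
# A minimal sketch (not from the survey): replace min f(x) s.t. c(x) > 0 by
# a sequence of unconstrained minimizations of f(x) - mu*log(c(x)) while
# driving the barrier parameter mu toward zero.
import math

def argmin_1d(phi, a, b, iters=200):
    """Golden-section search for a minimizer of a unimodal phi on (a, b)."""
    r = (5 ** 0.5 - 1) / 2
    for _ in range(iters):
        c, d = b - r * (b - a), a + r * (b - a)
        if phi(c) <= phi(d):
            b = d
        else:
            a = c
    return (a + b) / 2

def barrier_solve(f, c, a, b, mu=1.0, shrink=0.1, outer=8):
    """Log-barrier homotopy for min f(x) subject to c(x) > 0, x in (a, b)."""
    x = (a + b) / 2
    for _ in range(outer):
        def phi(x, mu=mu):
            # infinite outside the interior; log barrier inside it
            return f(x) - mu * math.log(c(x)) if c(x) > 0 else math.inf
        x = argmin_1d(phi, a, b)   # inner unconstrained (barrier) solve
        mu *= shrink               # let the iterates approach the boundary
    return x

# Toy example: min (x - 2)^2 subject to 1 - x > 0; the solution is x* = 1.
x_star = barrier_solve(lambda x: (x - 2.0) ** 2, lambda x: 1.0 - x, -3.0, 1.0)
```

The barrier keeps every iterate strictly feasible; as mu shrinks, the minimizers of the barrier function trace the central path toward the constrained solution.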
A Pattern Search Filter Method for Nonlinear Programming without Derivatives
 SIAM Journal on Optimization
, 2000
Abstract

Cited by 68 (12 self)
This paper presents and analyzes a pattern search method for general constrained optimization based on filter methods for step acceptance. Roughly, a filter method accepts a step that either improves the objective function value or the value of some function that measures the constraint violation. The new algorithm does not compute or approximate any derivatives, penalty constants or Lagrange multipliers. It reduces trivially to the Torczon GPS (generalized pattern search) algorithm when there are no constraints, and indeed, it is formulated here to reduce to the version of GPS designed to handle finitely many linear constraints if they are treated explicitly. A key feature is that it preserves the useful division into search and poll steps. Assuming local smoothness, the algorithm produces a KKT point for a problem related to the original problem. Key words: pattern search algorithm, filter algorithm, surrogate-based optimization, derivative-free convergence analysis, constrained op...
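The filter acceptance rule the abstract describes (accept a step if no previously stored point is at least as good in both the objective and a constraint-violation measure) can be sketched as follows; the function names and toy values are hypothetical illustrations, not the paper's implementation.

```python
# Hypothetical minimal sketch of filter-based step acceptance: the filter
# stores pairs (f, h), where h >= 0 measures constraint violation, and a
# trial point is acceptable only if no stored pair dominates it.

def dominates(a, b):
    """Pair a = (f_a, h_a) dominates b when it is no worse in both measures."""
    return a[0] <= b[0] and a[1] <= b[1]

def filter_accept(filt, f_trial, h_trial):
    """Accept the trial pair if no filter entry dominates it; update the filter."""
    if any(dominates(entry, (f_trial, h_trial)) for entry in filt):
        return False                                  # dominated: reject the step
    # discard entries the new pair dominates, then add the new pair
    filt[:] = [e for e in filt if not dominates((f_trial, h_trial), e)]
    filt.append((f_trial, h_trial))
    return True

filt = []
filter_accept(filt, 5.0, 2.0)   # empty filter: always acceptable
filter_accept(filt, 6.0, 1.0)   # worse f but better h: also acceptable
```

This "no-dominance" test is what lets a filter method make progress on either feasibility or optimality without a penalty function weighing the two.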
Filter Pattern Search Algorithms for Mixed Variable Constrained Optimization Problems
 SIAM Journal on Optimization
, 2004
Abstract

Cited by 51 (6 self)
A new class of algorithms for solving nonlinearly constrained mixed variable optimization problems is presented. This class combines and extends the Audet-Dennis Generalized Pattern Search (GPS) algorithms for bound constrained mixed variable optimization, and their GPS-filter algorithms for general nonlinear constraints. In generalizing existing algorithms, new theoretical convergence results are presented that reduce seamlessly to existing results for more specific classes of problems. While no local continuity or smoothness assumptions are required to apply the algorithm, a hierarchy of theoretical convergence results based on the Clarke calculus is given, in which local smoothness dictates what can be proved about certain limit points generated by the algorithm. To demonstrate its usefulness, the algorithm is applied to the design of a load-bearing thermal insulation system. We believe this is the first algorithm with provable convergence results to directly target this class of problems.
A Globally Convergent Primal-Dual Interior-Point Filter Method for Nonlinear Programming
, 2002
Abstract

Cited by 49 (4 self)
In this paper, the filter technique of Fletcher and Leyffer (1997) is used to globalize the primal-dual interior-point algorithm for nonlinear programming, avoiding the use of merit functions and the updating of penalty parameters. The new algorithm decomposes the primal-dual step obtained from the perturbed first-order necessary conditions into a normal and a tangential step, whose sizes are controlled by a trust-region type parameter. Each entry in the filter is a pair of coordinates: one resulting from feasibility and centrality, and associated with the normal step; the other resulting from optimality (complementarity and duality), and related to the tangential step. Global convergence to first-order critical points is proved for the new primal-dual interior-point filter algorithm.
Nonmonotone Trust Region Methods for Nonlinear Equality Constrained Optimization without a Penalty Function
 Math. Program., Ser. B
, 2000
Abstract

Cited by 14 (6 self)
We propose and analyze a class of penalty-function-free nonmonotone trust-region methods for nonlinear equality constrained optimization problems. The algorithmic framework yields global convergence without using a merit function and allows nonmonotonicity independently for both the constraint violation and the value of the Lagrangian function. Similar to the Byrd-Omojokun class of algorithms, each step is composed of a quasi-normal and a tangential step. Both steps are required to satisfy a decrease condition for their respective trust-region subproblems. The proposed mechanism for accepting steps combines nonmonotone decrease conditions on the constraint violation and/or the Lagrangian function, which leads to flexibility and acceptance behavior comparable to filter-based methods. We establish the global convergence of the method. Furthermore, transition to quadratic local convergence is proved. Numerical tests are presented that confirm the robustness and efficiency of the approach.
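A minimal sketch of a nonmonotone decrease test of the kind the abstract describes, in the Grippo-Lampariello-Lucidi style: a trial merit value only has to improve on the worst of the last M accepted values, so occasional increases are tolerated. The parameters M and gamma and the helper names are our illustrative assumptions, not the paper's exact acceptance mechanism.

```python
# Illustrative nonmonotone acceptance sketch (not the paper's exact rule):
# accept a trial value if it beats the maximum of the last M accepted values
# by a fraction gamma of the predicted reduction.
from collections import deque

def make_nonmonotone_test(M=5, gamma=1e-4):
    history = deque(maxlen=M)            # last M accepted merit values
    def accept(value, pred_reduction):
        reference = max(history) if history else float("inf")
        ok = value <= reference - gamma * pred_reduction
        if ok:
            history.append(value)
        return ok
    return accept

accept = make_nonmonotone_test(M=3)
accept(10.0, 1.0)   # first value is always accepted
accept(9.5, 1.0)    # a decrease is accepted
accept(9.9, 1.0)    # an increase over 9.5 is still accepted (reference is 10.0)
```

Comparing against the worst recent value rather than the last one is what gives the method its filter-like flexibility while still forcing long-run decrease.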
On the Superlinear Local Convergence of a Filter-SQP Method
, 2002
Abstract

Cited by 9 (0 self)
Transition to superlinear local convergence is shown for a modified version of the trust-region filter-SQP method for nonlinear programming introduced by Fletcher, Leyffer, and Toint [8]. In this version, the original trust-region SQP steps can be used without an additional second-order correction. The main modification consists in using the Lagrangian function value instead of the objective function value in the filter, together with an appropriate infeasibility measure. Moreover, it is shown that the modified trust-region filter-SQP method has the same global convergence properties as the original algorithm in [8].
Interior-Point l_2-Penalty Methods for Nonlinear Programming with Strong Global Convergence Properties
 Math. Programming
, 2004
Abstract

Cited by 9 (0 self)
We propose two line search primal-dual interior-point methods that have a generic barrier-SQP outer structure and approximately solve a sequence of equality constrained barrier subproblems. To enforce convergence for each subproblem, these methods use an l_2 exact penalty function, eliminating the need to drive the corresponding penalty parameter to infinity when finite multipliers exist. Instead of directly decreasing an equality constraint infeasibility measure, these methods attain feasibility by forcing this measure to zero whenever the steps generated by the methods tend to zero. Our analysis shows that under standard assumptions, our methods have strong global convergence properties. Specifically, we show that if the penalty parameter remains bounded, any limit point of the iterate sequence is either a KKT point of the barrier subproblem, or a Fritz-John (FJ) point of the original problem that fails to satisfy the Mangasarian-Fromovitz constraint qualification (MFCQ); if the penalty parameter tends to infinity, there is a limit point that is either an infeasible FJ point of the inequality constrained feasibility problem (an infeasible stationary point of the infeasibility measure if slack variables are added) or a FJ point of the original problem at which the MFCQ fails to hold. Numerical results are given that illustrate these outcomes.
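The l_2 exact penalty the abstract relies on can be illustrated with a tiny sketch. The key point is that the constraint norm is not squared: this nonsmooth penalty is exact, so its minimizers solve the equality constrained problem for a finite penalty parameter nu exceeding the multiplier norm. The toy problem and helper names below are hypothetical, not the paper's.

```python
# Hypothetical illustration of the exact l2 penalty for equality constraints
# c(x) = 0:  phi(x; nu) = f(x) + nu * ||c(x)||_2   (norm NOT squared).
import math

def l2_penalty(f, c, nu):
    """Build phi(x) = f(x) + nu * ||c(x)||_2 for a list-valued constraint c."""
    def phi(x):
        violation = math.sqrt(sum(ci * ci for ci in c(x)))
        return f(x) + nu * violation
    return phi

# Toy problem: min x^2 + y^2 s.t. x + y = 1; solution (0.5, 0.5), lambda* = 1.
phi = l2_penalty(
    lambda x: x[0] ** 2 + x[1] ** 2,
    lambda x: [x[0] + x[1] - 1.0],
    nu=2.0,  # any nu > |lambda*| = 1 makes the penalty exact here
)
```

Because nu exceeds the multiplier norm, the unconstrained minimizer of phi sits exactly at the feasible solution, so nu never has to be driven to infinity, which is the behavior the paper exploits.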