Results 1–10 of 23
Interior methods for nonlinear optimization
 SIAM REVIEW
, 2002
Abstract

Cited by 127 (6 self)
Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar’s widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research about interior methods for nonlinearly constrained optimization.
Primal-dual interior methods for nonconvex nonlinear programming
 SIAM Journal on Optimization
, 1998
Abstract

Cited by 80 (8 self)
Abstract. This paper concerns large-scale general (nonconvex) nonlinear programming when first and second derivatives of the objective and constraint functions are available. A method is proposed that is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved with a modified Newton method that generates search directions from a primal-dual system similar to that proposed for interior methods. The augmented penalty-barrier function may be interpreted as a merit function for values of the primal and dual variables. An inertia-controlling symmetric indefinite factorization is used to provide descent directions and directions of negative curvature for the augmented penalty-barrier merit function. A method suitable for large problems can be obtained by providing a version of this factorization that will treat large sparse indefinite systems.
An interior point algorithm for large-scale nonlinear . . .
, 2002
Abstract

Cited by 64 (3 self)
Nonlinear programming (NLP) has become an essential tool in process engineering, leading to profit gains through improved plant designs and better control strategies. The rapid advance in computer technology enables engineers to consider increasingly complex systems, where existing optimization codes reach their practical limits. The objective of this dissertation is the design, analysis, implementation, and evaluation of a new NLP algorithm that is able to overcome the current bottlenecks, particularly in the area of process engineering. The proposed algorithm follows an interior point approach, thereby avoiding the combinatorial complexity of identifying the active constraints. Emphasis is placed on flexibility in the computation of search directions, which allows the tailoring of the method to individual applications and is mandatory for the solution of very large problems. In a full-space version the method can be used as a general-purpose NLP solver, for example in modeling environments such as Ampl. The reduced-space version, based on coordinate decomposition, makes it possible to tailor linear algebra
The interior-point revolution in optimization: history, recent developments, and lasting consequences
 BULL. AMER. MATH. SOC. (N.S.)
, 2005
Abstract

Cited by 33 (1 self)
Interior methods are a pervasive feature of the optimization landscape today, but it was not always so. Although interior-point techniques, primarily in the form of barrier methods, were widely used during the 1960s for problems with nonlinear constraints, their use for the fundamental problem of linear programming was unthinkable because of the total dominance of the simplex method. During the 1970s, barrier methods were superseded, nearly to the point of oblivion, by newly emerging and seemingly more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost universally regarded as a closed chapter in the history of optimization. This picture changed dramatically in 1984, when Narendra Karmarkar announced a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have continued to transform both the theory and practice of constrained optimization. We present a condensed,
Superlinear Convergence of Primal-Dual Interior Point Algorithms for Nonlinear Programming
, 2000
Abstract

Cited by 21 (4 self)
The local convergence properties of a class of primal-dual interior point methods are analyzed. These methods are designed to minimize a nonlinear, nonconvex objective function subject to linear equality constraints and general inequalities. They involve an inner iteration in which the log-barrier merit function is approximately minimized subject to satisfying the linear equality constraints, and an outer iteration that specifies both the decrease in the barrier parameter and the level of accuracy for the inner minimization. It is shown that, asymptotically, for each value of the barrier parameter, solving a single primal-dual linear system is enough to produce an iterate that already matches the barrier subproblem accuracy requirements. The asymptotic rate of convergence of the resulting algorithm is Q-superlinear and may be chosen arbitrarily close to quadratic. Furthermore, this rate applies componentwise. These results hold in particular for the method described by Conn, Gould, Orb...
A feasible BFGS interior point algorithm for solving strongly convex minimization problems
 SIAM J. OPTIM
, 2000
Abstract

Cited by 18 (1 self)
We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists in computing approximate solutions of the optimality conditions perturbed by a sequence of positive parameters µ converging to zero. We prove that it converges q-superlinearly for each fixed µ. We also show that it is globally convergent to the analytic center of the primal-dual optimal set when µ tends to 0 and strict complementarity holds.
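The perturbed optimality conditions this abstract describes can be followed on a toy strongly convex problem. The sketch below uses plain Newton steps rather than the paper's BFGS updates, and the problem, starting point, and parameter schedule are all invented for illustration:

```python
import numpy as np

# Toy illustration (plain Newton, not the paper's BFGS algorithm) of tracking
# the optimality conditions perturbed by mu > 0 for the strongly convex problem
#     minimize x^2  subject to  x >= 1.
# With multiplier z, the perturbed conditions read
#     2x - z = 0,   z*(x - 1) = mu,   with x > 1, z > 0,
# and the iterates stay strictly feasible as mu -> 0.

def primal_dual_path(mu=1.0, theta=0.2, tol=1e-10):
    x, z = 2.0, 1.0                          # strictly feasible starting point
    while mu > tol:
        for _ in range(50):                  # Newton on the perturbed system
            F = np.array([2.0*x - z, z*(x - 1.0) - mu])
            if np.linalg.norm(F) < mu:       # illustrative inner tolerance
                break
            J = np.array([[2.0, -1.0], [z, x - 1.0]])
            dx, dz = np.linalg.solve(J, -F)
            alpha = 1.0                      # damp to preserve x > 1, z > 0
            while x + alpha*dx <= 1.0 or z + alpha*dz <= 0.0:
                alpha *= 0.5
            x, z = x + alpha*dx, z + alpha*dz
        mu *= theta
    return x, z

x_star, z_star = primal_dual_path()
print(x_star, z_star)   # approaches the KKT point (1, 2)
```

Under strict complementarity (here z* = 2 > 0 at the active constraint), the perturbed solutions converge to the optimizer as µ tends to 0, which is what the damped Newton iteration traces.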
On the convergence of the Newton/log-barrier method
 Preprint ANL/MCS-P681-0897, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill.
, 1997
Abstract

Cited by 17 (2 self)
Abstract. In the Newton/log-barrier method, Newton steps are taken for the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton's method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood of the minimizer. By partitioning according to the subspace of active constraint gradients, however, we show that this neighborhood is actually quite large, thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the convergence criterion for each Newton process.
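The inner/outer structure of the Newton/log-barrier method described in this abstract can be written down in a few lines. The example below is a hypothetical one-dimensional instance; the objective, barrier-parameter schedule, and inner stopping test are illustrative choices, not those analyzed in the paper:

```python
# Hypothetical 1-D instance of the Newton/log-barrier method:
# minimize f(x) = (x - 2)^2 subject to x >= 0, via the barrier function
#     B(x; mu) = (x - 2)^2 - mu * log(x).
# For each fixed mu, Newton steps are taken on B until |B'(x; mu)| < mu
# (an illustrative inner criterion); mu is then decreased and the process repeats.

def newton_log_barrier(x=1.0, mu=1.0, theta=0.1, tol=1e-8):
    while mu > tol:
        while True:                           # inner iteration, mu fixed
            grad = 2.0 * (x - 2.0) - mu / x   # B'(x; mu)
            if abs(grad) < mu:                # inner convergence criterion
                break
            step = -grad / (2.0 + mu / x**2)  # Newton step: -B'/B''
            while x + step <= 0.0:            # damp to keep the iterate interior
                step *= 0.5
            x += step
        mu *= theta                           # outer iteration: shrink mu
    return x

print(newton_log_barrier())   # converges to the constrained minimizer x* = 2
```

The warm start from the previous barrier minimizer is what makes the inner Newton loops cheap in practice, which is the behavior the paper's analysis explains.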
Topics in Sparse Least Squares Problems
 Linköping University, Linköping, Sweden, Dept. of Mathematics
, 2000
Abstract

Cited by 10 (0 self)
This thesis addresses topics in sparse least squares computation. A stable method for solving the least squares problem min ‖Ax − b‖₂ is based on the QR factorization. Here we have addressed the difficulty of storing the orthogonal matrix Q. Using traditional methods, the number of nonzero elements in Q makes it in many cases infeasible to store. Using the multifrontal technique when computing the QR factorization, Q may be stored and used more efficiently. A new user-friendly Matlab implementation is developed. When a row in A is dense, the factor R from the QR factorization may be completely dense. Therefore problems with dense rows must be treated by special techniques. The usual way to handle dense rows is to partition the problem into one sparse and one dense subproblem. The drawback with this approach is that the sparse subproblem may be more ill-conditioned than the original problem or even fail to have a unique solution. Another method, useful for problems with few dense rows, is based on matrix stretching, where the dense rows are split into several less dense rows that are then linked together with new artificial
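The QR-based solve that this abstract takes as its starting point can be sketched with dense NumPy in place of a sparse multifrontal factorization; the matrix, right-hand side, and sizes below are arbitrary stand-ins:

```python
import numpy as np

# Sketch of the stable QR route for min ||Ax - b||_2: factor A = QR, then
# solve R x = Q^T b. Dense numpy.linalg.qr stands in for the sparse
# multifrontal QR that makes this practical at scale.

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))          # full column rank with probability 1
b = rng.standard_normal(8)

Q, R = np.linalg.qr(A)                   # reduced QR: Q is 8x3, R is 3x3 upper
x = np.linalg.solve(R, Q.T @ b)          # triangular back substitution

# same solution as NumPy's least-squares driver
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x, x_ref))             # -> True
```

The QR route avoids forming the normal equations AᵀA, whose condition number is the square of that of A, which is why the thesis calls it the stable method.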
The role of linear objective functions in barrier methods: Corrigenda
 Mathematical Programming, Series A
, 2000
Abstract

Cited by 10 (4 self)
The published paper contains a number of typographical errors and an incomplete proof. We indicate the corrections here. Our paper [1] contains the following typographical errors. Page 364, statement of Proposition 1. Replace "Assume that (30) is satisfied..." by "Assume that the conditions of Theorem 1 hold and that (30) is satisfied...". Equation (31). Replace the exponent "σ − 1" by "σ". Equation (37), second displayed line. Replace "o(µ^(1+σ/2))" by "O(µ^(1+σ/2))". Equation (47). Delete "i = q + 1, ..., m". Page 370, line 10. Replace "σ̃ < σ" by "1 ≤ σ̃ < σ ≤ 2". Similarly, on line 2 of Algorithm NL, replace "0 < σ̃ < σ" by "1 ≤ σ̃ < σ ≤ 2". The final part of the proof of Theorem 2 is incomplete. We remedy this fault by deleting the material from line 6 on page 368 through the end of the proof, and replacing it with the following. For the term in brackets, we have ...