Results 1-10 of 12
ON MUTUAL IMPACT OF NUMERICAL LINEAR ALGEBRA AND LARGE-SCALE OPTIMIZATION WITH FOCUS ON INTERIOR POINT METHODS
, 2008
A Numerical Study of Active-Set and Interior-Point Methods for Bound Constrained Optimization
Abstract

Cited by 3 (0 self)
Summary. This paper studies the performance of several interior-point and active-set methods on bound-constrained optimization problems. The numerical tests show that the sequential linear-quadratic programming (SLQP) method is robust, but is not as effective as gradient projection at identifying the optimal active set. Interior-point methods are robust and require a small number of iterations and function evaluations to converge. An analysis of computing times reveals that it is essential to develop improved preconditioners for the conjugate gradient iterations used in SLQP and interior-point methods. The paper discusses how to efficiently implement incomplete Cholesky preconditioners and how to eliminate ill-conditioning caused by the barrier approach. The paper concludes with an evaluation of methods that use quasi-Newton approximations to the Hessian of the Lagrangian.
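The gradient-projection idea this abstract refers to can be made concrete with a minimal sketch (not the paper's implementation; all names and parameters here are illustrative): project each gradient step back onto the box by componentwise clipping, so components that end up fixed at a bound form the identified active set.

```python
import numpy as np

def projected_gradient(grad, x0, lower, upper, step=0.1, iters=1000):
    """Gradient projection for min f(x) s.t. lower <= x <= upper.
    The projection onto the box is a componentwise clip; components
    that end up fixed at a bound are the identified active set."""
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(iters):
        x_new = np.clip(x - step * grad(x), lower, upper)
        if np.linalg.norm(x_new - x) < 1e-12:
            break
        x = x_new
    return x

# Example: min 0.5*||x - c||^2 over the box [0, 1]^3, with c partly
# outside the box, so two bounds are active at the solution.
c = np.array([2.0, 0.5, -1.0])
x_star = projected_gradient(lambda x: x - c, np.zeros(3), 0.0, 1.0)
```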
PRECONDITIONED EIGENSOLVERS FOR LARGE-SCALE NONLINEAR HERMITIAN EIGENPROBLEMS WITH VARIATIONAL CHARACTERIZATIONS. I. EXTREME EIGENVALUES
, 2014
Abstract

Cited by 1 (1 self)
Abstract. Efficient computation of extreme eigenvalues of large-scale linear Hermitian eigenproblems can be achieved by preconditioned conjugate gradient (PCG) methods. In this paper, we study PCG methods for computing extreme eigenvalues of nonlinear Hermitian eigenproblems of the form T(λ)v = 0 that admit a nonlinear variational principle. We investigate some theoretical properties of a basic CG method, including its global and asymptotic convergence. We propose several variants of single-vector and block PCG methods with deflation for computing multiple eigenvalues, and compare their arithmetic and memory costs. Variable indefinite preconditioning is shown to be effective in accelerating convergence when some desired eigenvalues are not close to the lowest or highest eigenvalue. The efficiency of the PCG variants is illustrated by numerical experiments. Overall, the locally optimal block preconditioned conjugate gradient (LOBPCG) is the most efficient method, as in the linear setting. AMS subject classifications. 65F15, 65F50, 15A18, 15A22.
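The linear setting the abstract builds on can be sketched with SciPy's `lobpcg` on a 1-D Laplacian test matrix (my choice of example, not the paper's). The exact sparse LU solve used as the preconditioner here stands in for the incomplete or approximate factorizations one would use on truly large problems.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, lobpcg, splu

# Linear Hermitian test problem: 1-D discrete Laplacian, whose k-th
# smallest eigenvalue is known to be 4*sin^2(k*pi / (2*(n+1))).
n = 500
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()

# Preconditioner: an exact sparse LU solve with A, standing in for the
# incomplete/approximate factorizations used at large scale.
lu = splu(A)
M = LinearOperator((n, n), matvec=lu.solve)

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 3))   # block of 3 starting vectors
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-9, maxiter=200)

exact = 4 * np.sin(np.arange(1, 4) * np.pi / (2 * (n + 1))) ** 2
```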
Nonconvex Constrained Optimization
, 2007
Abstract
Fast nonlinear programming methods following the all-at-once approach usually employ Newton's method for solving linearized Karush-Kuhn-Tucker (KKT) systems. In nonconvex problems, the Newton direction is only guaranteed to be a descent direction if the Hessian of the Lagrange function is positive definite on the null space of the active constraints; otherwise some modifications to Newton's method are necessary. This condition can be verified using the signs of the eigenvalues of the KKT matrix (its inertia), which are usually available from direct solvers for the arising linear saddle-point problems. Iterative solvers are mandatory for very large-scale problems, but in general do not provide the inertia. Here we present a preconditioner based on a multilevel incomplete LBL^T factorization, from which an approximation of the inertia can be obtained. The suitability of the heuristics for application in optimization methods is verified on an interior point method applied to the CUTE and COPS test problems, on large-scale 3D PDE-constrained optimal control problems, as well as 3D
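The inertia computation this abstract relies on can be illustrated on a small dense matrix (a sketch only; the paper's contribution is a multilevel *incomplete* factorization, which this does not reproduce). By Sylvester's law of inertia, the block-diagonal factor D of a symmetric-indefinite LDL^T factorization has the same inertia as the original matrix.

```python
import numpy as np
from scipy.linalg import ldl

def inertia(K):
    """Inertia (n_pos, n_neg, n_zero) of a symmetric matrix K via an
    LDL^T factorization: by Sylvester's law of inertia, the block-
    diagonal factor D (1x1 and 2x2 blocks) has the same inertia as K,
    so only D's eigenvalues need to be inspected."""
    _, D, _ = ldl(K)
    ev = np.linalg.eigvalsh(D)
    tol = 1e-12 * max(1.0, float(np.abs(ev).max()))
    return (int((ev > tol).sum()),
            int((ev < -tol).sum()),
            int((np.abs(ev) <= tol).sum()))

# Small KKT saddle-point matrix [[H, A^T], [A, 0]] with H = I (n = 2)
# and one full-rank constraint row (m = 1): the expected inertia is
# (n, m, 0) = (2, 1, 0), the signature certifying a descent direction.
H = np.eye(2)
A = np.array([[1.0, 1.0]])
K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
```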
unknown title
, 2012
Abstract
A nonlinear programming approach for estimation of transmission parameters in childhood infectious disease using a continuous time model
INTUITION BEHIND PRIMAL-DUAL INTERIOR-POINT METHODS FOR LINEAR AND QUADRATIC PROGRAMMING
Abstract
programming. The linear programming problem crops up in all sorts of courses, from mathematical economics to linear algebra. This short monograph cannot possibly supplant any text or course. That being said, I find that most of the texts I have encountered present the key results in such a way that they are stripped of all meaning. The text leaves it up to the reader to do all the hard work in forming a connection between proof and application. In other words, most texts on linear programming are curiously devoid of intuition. There are, of course, exceptions [10]. I thought I’d take a few moments to give the basic intuition behind the interior-point approach to linear programming and, in particular, interior-point methods that take steps simultaneously in both the primal and dual variables. The 1992 article by Gonzaga [4] appears to be a well-presented and thorough review of the subject. Many of the derivations here follow from those presented in [7]. In its standard formulation, the linear program is stated as follows (see for instance [9, 12]). We are provided with a vector r of length n representing linear costs, and a collection of m constraints specified by the rows of an m × n matrix A. We’ll assume
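The simultaneous primal-dual step the snippet describes can be sketched in a bare-bones path-following routine (illustrative only, not the monograph's code; the name `b` for the constraint right-hand side and all defaults are my assumptions, with `r` the cost vector and `A` the m × n constraint matrix as in the text).

```python
import numpy as np

def primal_dual_lp(A, b, r, iters=30, sigma=0.1):
    """Bare-bones primal-dual path following for
        min r@x  s.t.  A@x = b, x >= 0.
    Each iteration takes one Newton step on the perturbed KKT system
        A x = b,   A^T y + s = r,   x_i * s_i = sigma * mu,
    moving the primal x and the duals (y, s) simultaneously."""
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)
    for _ in range(iters):
        mu = x @ s / n                      # current duality measure
        J = np.block([                      # Newton matrix in (dx, dy, ds)
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        rhs = np.concatenate([b - A @ x,
                              r - A.T @ y - s,
                              sigma * mu - x * s])
        dx, dy, ds = np.split(np.linalg.solve(J, rhs), [n, n + m])
        # Fraction-to-the-boundary step keeping x and s strictly positive.
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# Tiny instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0; optimum x = (1, 0).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
r = np.array([1.0, 2.0])
x, y, s = primal_dual_lp(A, b, r)
```

Production codes eliminate dy and ds to solve a much smaller symmetric system instead of the full block matrix; the dense form above just mirrors the KKT conditions directly.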
unknown title
, 2015
Abstract
A primal-dual active set method and predictor-corrector mesh adaptivity for computing fracture propagation using a phase-field approach
ON SEQUENTIAL QUADRATIC PROGRAMMING METHODS EMPLOYING SECOND DERIVATIVES
, 2015
Abstract
We consider sequential quadratic programming (SQP) methods globalized by a line search for the standard exact penalty function. It is well known that if the Hessian of the Lagrangian is used in SQP subproblems, the obtained direction may fail to be a descent direction for the penalty function. The reason is that the Hessian need not be positive definite, even locally, under any natural assumptions. Thus, if a given SQP version computes the Hessian, it may need to be adjusted in some way that ensures the computed direction becomes a descent direction after a finite number of Hessian modifications (for example, by consecutively adding to it some multiple of the identity matrix, or using any other technique that guarantees the needed property after a few steps). As our theoretical contribution, we show that despite the Hessian not being positive definite, such modifications are actually not needed to guarantee the descent property when the iterates are close to a solution satisfying natural assumptions. The assumptions, in fact, are exactly the same as those required for local superlinear convergence of SQP in the first place (uniqueness of the Lagrange multipliers and the second-order sufficient condition). Moreover, in our computational experiments on the Hock–Schittkowski test
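The modification by multiples of the identity mentioned in this abstract can be sketched as follows (illustrative only; the starting shift, growth factor, and use of a trial Cholesky factorization as the positive-definiteness test are my assumptions, not the paper's).

```python
import numpy as np

def modify_hessian(H, tau0=1e-3, growth=10.0, max_tries=30):
    """Add tau * I to the symmetric matrix H, increasing tau
    geometrically, until a Cholesky factorization succeeds, i.e.
    until H + tau*I is positive definite."""
    tau = 0.0
    for _ in range(max_tries):
        try:
            np.linalg.cholesky(H + tau * np.eye(H.shape[0]))
            return H + tau * np.eye(H.shape[0]), tau
        except np.linalg.LinAlgError:   # factorization failed: not PD
            tau = tau0 if tau == 0.0 else tau * growth
    raise RuntimeError("Hessian could not be made positive definite")

# An indefinite "Hessian of the Lagrangian" with eigenvalues 2 and -1.5;
# the descent property fails for it, so a positive shift is required.
H = np.array([[2.0, 0.0], [0.0, -1.5]])
H_mod, tau = modify_hessian(H)
```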