Results 1–10 of 19
Preconditioning indefinite systems in interior point methods for optimization
 Computational Optimization and Applications, 2004
Abstract

Cited by 65 (16 self)
Every Newton step in an interior-point method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today’s codes apply direct solution methods to perform this task. The use of logarithmic barriers in interior point methods causes unavoidable ill-conditioning of linear systems and, hence, iterative methods fail to provide sufficient accuracy unless appropriately preconditioned. Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper. Although they involve significantly sparser factorizations than those used in direct approaches, they still capture most of the numerical properties of the preconditioned system. The spectral analysis of the preconditioned matrix is performed: for convex optimization problems all the eigenvalues of this matrix are strictly positive. Numerical results are given for a set of public domain large linearly constrained convex quadratic programming problems with sizes reaching tens of thousands of variables. The analysis of these results reveals that the solution times for such problems on a modern PC are measured in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used.
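The barrier-induced ill-conditioning described above can be seen in a few lines of numpy. This is an illustrative sketch with made-up data: `theta` stands in for the barrier scaling, and a plain Jacobi (diagonal) scaling stands in for the paper's incomplete-Cholesky preconditioners.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy normal-equations matrix A * Theta * A^T as it arises in interior point
# methods: the barrier scaling Theta spreads over many orders of magnitude
# near the boundary, which is the unavoidable ill-conditioning the abstract
# describes.  (Matrix and weights are made up for illustration.)
m, n = 20, 60
A = rng.standard_normal((m, n))
theta = 10.0 ** rng.uniform(-6, 6, size=n)   # widely spread barrier weights
M = A @ np.diag(theta) @ A.T

# A simple Jacobi (diagonal) scaling, standing in for the paper's incomplete
# Cholesky preconditioners: symmetrically scale M to unit diagonal.
D = np.diag(1.0 / np.sqrt(np.diag(M)))
M_prec = D @ M @ D

print(np.linalg.cond(M), np.linalg.cond(M_prec))
```

The condition number of `M` grows without bound as the weight spread widens, which is why the abstract notes that unpreconditioned iterative methods fail to deliver sufficient accuracy.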
Automatic preconditioning by limited memory quasi-Newton updating.
 SIAM Journal on Optimization, 2000
Abstract

Cited by 44 (2 self)
This paper deals with the preconditioning of truncated Newton methods for the solution of large-scale nonlinear unconstrained optimization problems. We focus on preconditioners which can be naturally embedded in the framework of truncated Newton methods, i.e. which can be built without storing the Hessian matrix of the function to be minimized, but only based upon information on the Hessian obtained by the product of the Hessian matrix times a vector. In particular we propose a diagonal preconditioning which enjoys this feature and which enables us to examine the effect of diagonal scaling on truncated Newton methods. In fact, this new preconditioner carries out a scaling strategy and is based on the concept of equilibration of the data in linear systems of equations. Extensive numerical testing has been performed, showing that the diagonal preconditioning strategy proposed is very effective. In fact, on most problems considered, the resulting diagonally preconditioned truncated Newton method performs better than both the unpreconditioned method and the one using an automatic preconditioner based on limited memory quasi-Newton updating (PREQN) recently proposed by Morales and Nocedal.
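The effect of diagonal scaling on conjugate gradients, as studied above, can be sketched as follows. The test Hessian is made up, and the diagonal is recovered naively from n Hessian-vector products with coordinate vectors; the paper builds its equilibration-based diagonal far more cheaply, but the matrix-free principle is the same.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
scales = 10.0 ** rng.uniform(0, 4, size=n)      # badly scaled test problem
B = rng.standard_normal((n, 3))
H = np.diag(scales) + 0.05 * (B @ B.T)          # SPD test Hessian (made up)

def hessvec(v):
    # Stand-in for a Hessian-vector product routine; in a truncated Newton
    # code this would come from finite differences or automatic
    # differentiation, with H never stored explicitly.
    return H @ v

# Recover the Hessian diagonal from n products with coordinate vectors
# (naive version; the paper estimates its diagonal much more cheaply).
d = np.array([hessvec(np.eye(n)[:, i])[i] for i in range(n)])

def pcg(b, m_inv, tol=1e-10, maxit=2000):
    """Conjugate gradients with a diagonal preconditioner applied as m_inv * r."""
    x = np.zeros(n)
    r = b.copy()
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Hp = hessvec(p)
        alpha = rz / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

b = rng.standard_normal(n)
x0, iters_plain = pcg(b, np.ones(n))   # unpreconditioned CG
x1, iters_diag = pcg(b, 1.0 / d)       # diagonally scaled CG
print(iters_plain, iters_diag)
```

On this deliberately ill-scaled example the diagonally preconditioned run needs far fewer iterations, mirroring the paper's finding that equilibration-style scaling is very effective.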
A Class of Preconditioners for Weighted Least Squares Problems
1999
Abstract

Cited by 19 (11 self)
We consider solving a sequence of weighted linear least squares problems where the changes from one problem to the next are the weights and the right-hand side (or data). This is the case for primal-dual interior-point methods. We derive a class of preconditioners based on a low-rank correction to a Cholesky factorization of a weighted normal-equation coefficient matrix with the previous weights.

Key Words. Weighted linear least squares, preconditioners, preconditioned conjugate gradient for least squares, linear programming, primal-dual infeasible-interior-point algorithms.

1 Introduction. In this paper, we present a class of preconditioners based on low-rank corrections to the Cholesky factorization of a weighted normal-equation coefficient matrix. This class of preconditioners leads to good performance for interior-point methods for linear programming. In particular, we have implemented a primal-dual Newton method to test this class of preconditioners. The numerical results on large scale...
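The low-rank-correction idea can be sketched with made-up data. Here the correction simply re-adds the columns whose weights changed most between iterates; the paper's construction of the correction is more refined, so this is only an illustration of the mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 15, 40
A = rng.standard_normal((m, n))

d0 = 10.0 ** rng.uniform(-2, 2, size=n)          # weights at the previous iterate
d1 = d0.copy()
moved = rng.choice(n, size=4, replace=False)
d1[moved] *= 10.0 ** rng.uniform(2, 4, size=4)   # a few weights change sharply

M0 = A @ np.diag(d0) @ A.T   # factorized (e.g. by Cholesky) at the old weights
M1 = A @ np.diag(d1) @ A.T   # current weighted normal-equations matrix

# Low-rank correction: reuse the old matrix and add back only the columns
# whose weights changed most, M_prec = M0 + A_S (D1 - D0)_S A_S^T.
S = np.argsort(np.abs(d1 - d0))[-4:]
M_prec = M0 + A[:, S] @ np.diag((d1 - d0)[S]) @ A[:, S].T

cond_old = np.linalg.cond(np.linalg.solve(M0, M1))
cond_new = np.linalg.cond(np.linalg.solve(M_prec, M1))
print(cond_old, cond_new)
```

In this toy case the index set S catches every changed weight, so the corrected preconditioner reproduces the new matrix exactly; in practice the rank of the correction is kept small and one accepts an approximation.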
Computational issues for a new class of preconditioners
 Large-Scale Scientific Computations of Engineering and Environmental Problems II, Series Notes on Numerical Fluid Mechanics, 2000
Preconditioning Indefinite Systems in Interior Point Methods for Large Scale Linear Optimization
2007
Abstract

Cited by 9 (3 self)
We discuss the use of the preconditioned conjugate gradient method for solving the reduced KKT systems arising in interior point algorithms for linear programming. The (indefinite) augmented system form of this linear system has a number of advantages, notably a higher degree of sparsity than the (positive definite) normal equations form. Therefore we use the conjugate gradient method to solve the augmented system and look for a suitable preconditioner. An explicit null space representation of linear constraints is constructed by using a nonsingular basis matrix identified from an estimate of the optimal partition in the linear program. This is achieved by means of recently developed efficient basis matrix factorisation techniques which exploit hypersparsity and are used in implementations of the revised simplex method. The approach has been implemented within the HOPDM interior point solver and applied to medium- and large-scale problems from public domain test collections. Computational experience is encouraging.
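The explicit null-space representation mentioned above can be sketched in a few lines with random data. The paper identifies the basis B from an estimate of the optimal partition using simplex-style factorisation techniques; here, purely for illustration, B is just the first m columns of A.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 9
A = rng.standard_normal((m, n))

# Pick a nonsingular basis B from the constraint matrix (here simply the
# first m columns; the paper chooses B far more carefully).
basic, nonbasic = np.arange(m), np.arange(m, n)
B, N = A[:, basic], A[:, nonbasic]

# Explicit null-space representation: the columns of Z span null(A),
# Z = [ -B^{-1} N ; I ].
Z = np.vstack([-np.linalg.solve(B, N), np.eye(n - m)])
print(np.linalg.norm(A @ Z))   # essentially zero: A Z = 0
```

Any feasible step can then be written as a particular solution plus Z times a reduced vector, which is what makes the representation useful for preconditioning the augmented system.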
Adaptive Constraint Reduction for Convex Quadratic Programming and Training Support Vector Machines
2008
Abstract

Cited by 5 (2 self)
Convex quadratic programming (CQP) is an optimization problem of minimizing a convex quadratic objective function subject to linear constraints. We propose an adaptive constraint reduction primal-dual interior-point algorithm for convex quadratic programming with many more constraints than variables. We reduce the computational effort by assembling the normal equation matrix with a subset of the constraints. Instead of the exact matrix, we compute an approximate matrix for a well-chosen index set which includes indices of constraints that seem to be most critical. Starting with a large portion of the constraints, our proposed scheme excludes more unnecessary constraints at later iterations. We provide proofs for the global convergence and the quadratic local convergence rate of an affine scaling variant. A similar approach can be applied to Mehrotra’s predictor-corrector type algorithms. An example of CQP arises in training a linear support vector machine (SVM), which is a popular tool for pattern recognition. The difficulty in training an SVM lies in the typically vast number of patterns used for the training process. In this work, we propose an adaptive constraint reduction primal-dual interior-point method for training the linear SVM with l1 hinge loss. We reduce the computational effort by assembling the normal equation matrix with a subset of well-chosen patterns. Starting with a large portion of the patterns, our proposed scheme excludes more and more unnecessary patterns as the iteration proceeds. We extend our approach to training nonlinear SVMs through Gram matrix approximation methods. Promising numerical results are reported.
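The cost saving from assembling the normal equation matrix with a constraint subset can be sketched as follows. The data and the selection rule (keep the constraints with the largest diagonal weights) are made up; the paper's adaptive criterion for the index set is more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 2000, 10          # many more constraints (m) than variables (n)
A = rng.standard_normal((m, n))
d = rng.uniform(0.0, 1.0, size=m)
inactive = rng.choice(m, size=1950, replace=False)
d[inactive] *= 1e-8      # most constraints contribute almost nothing

# Full normal-equations matrix A^T D A costs O(m n^2) to assemble.
M_full = A.T @ (d[:, None] * A)

# Constraint reduction: assemble with only the constraints whose diagonal
# weights d_i are largest (a simple stand-in for the paper's adaptive rule).
keep = np.argsort(d)[-100:]
M_red = A[keep].T @ (d[keep, None] * A[keep])

rel_err = np.linalg.norm(M_full - M_red) / np.linalg.norm(M_full)
print(rel_err)
```

Because the interior point scaling drives the weights of inactive constraints toward zero, the reduced matrix is an excellent approximation at a fraction of the assembly cost, which is the effect the algorithm exploits.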
Symbiosis between Linear Algebra and Optimization
1999
Abstract

Cited by 2 (0 self)
The efficiency and effectiveness of most optimization algorithms hinge on the numerical linear algebra algorithms that they utilize. Effective linear algebra is crucial to their success, and because of this, optimization applications have motivated fundamental advances in numerical linear algebra. This essay highlights contributions of numerical linear algebra to optimization, as well as some optimization problems encountered within linear algebra that contribute to a symbiotic relationship.
Numerical computation of cubic eigenvalue problems for a semiconductor quantum dot model with nonparabolic effective mass approximation
2001
Abstract

Cited by 2 (0 self)
We consider the three-dimensional Schrödinger equation simulating nanoscale semiconductor quantum dots with nonparabolic effective mass approximation. To discretize the equation, we use nonuniform meshes with half-shifted grid points in the radial direction. The discretization yields a very large eigenproblem in which only several eigenpairs embedded in the spectrum are of interest. The eigenvalues and eigenvectors correspond to the energy states and wave functions of the quantum dots, respectively. Effective and efficient numerical algorithms for computing these values are essential for exploring their physical phenomena and related practical applications. We provide insights into the resulting matrix structures that reduce the 3D problem to a set of independent 2D eigenproblems. The reduction results in cubic λ-matrix polynomial eigenproblems. Several numerical algorithms, such as the nonlinear Jacobi-Davidson method and the fixed point method based on the linear Jacobi-Davidson method, are then proposed for the solution of these eigenproblems. For computing the successive eigenvalues, we suggest and analyze a novel explicit nonequivalence deflation technique with low-rank updates. Furthermore, we offer various acceleration schemes, including Newton’s method, to improve computational speed. All of the proposed algorithms have been implemented and successfully tested for solving eigenproblems with sizes up to 76 million. Numerical results are given to demonstrate the usefulness and efficiency of these algorithms.
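The idea of deflating a found eigenpair with a low-rank update can be illustrated with the classical Hotelling deflation on a small symmetric test matrix. This is only a generic illustration of low-rank deflation; the paper's explicit nonequivalence deflation is a different, more careful construction for its cubic λ-matrix problems.

```python
import numpy as np

def power_iteration(A, iters=2000):
    """Dominant eigenpair of a symmetric matrix by power iteration."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v @ (A @ v), v

rng = np.random.default_rng(5)
n = 30
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.arange(1, n + 1, dtype=float)) @ Q.T   # eigenvalues 1..n

lam1, v1 = power_iteration(A)

# Rank-1 (Hotelling) deflation: shift the found eigenvalue to zero so the
# next-largest eigenvalue becomes dominant for the subsequent solve.
A_defl = A - lam1 * np.outer(v1, v1)
lam2, _ = power_iteration(A_defl)

print(lam1, lam2)
```

After the rank-1 update the second eigenvalue is recovered by exactly the same iteration, which is the appeal of deflating with low-rank updates when successive eigenvalues are needed.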
Properties and Computational Issues of a Preconditioner for Interior Point Methods
1999
Abstract

Cited by 1 (1 self)
This is a collection of four conference proceedings on scientific computation. In the proceedings, we discuss solving a sequence of linear systems arising from the application of an interior point method to a linear programming problem. The sequence of linear systems is solved by alternating between a direct and an iterative method. The preconditioner is based on low-rank modifications of the coefficient matrix for which a direct solution technique has been used. We compare two different techniques of forming the low-rank modification matrix; namely one by Wang and O'Leary [11] and the other by Baryamureeba, Steihaug and Zhang [3]. The theory and numerical testing strongly support the latter. We derive a sparse algorithm for modifying the Cholesky factors by a low-rank matrix, discuss the computational issues of this preconditioner, and finally give numerical results that show the approach of alternating between a direct and an iterative method to be promising. Key Words. Linear Programmi...
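Modifying Cholesky factors by a low-rank matrix reduces, in the rank-1 case, to the classical Cholesky update. A dense-form sketch (the proceedings develop a sparse variant; this is the textbook algorithm, shown only to make the mechanism concrete):

```python
import numpy as np

def chol_update(L, v):
    """Return the Cholesky factor of L @ L.T + v @ v.T (rank-1 update).

    Classical rank-1 update recurrence on a lower-triangular L; a copy is
    modified and returned, costing O(n^2) instead of an O(n^3) refactorization.
    """
    L = L.copy()
    v = v.copy()
    n = L.shape[0]
    for k in range(n):
        r = np.sqrt(L[k, k] ** 2 + v[k] ** 2)
        c = r / L[k, k]
        s = v[k] / L[k, k]
        L[k, k] = r
        L[k + 1:, k] = (L[k + 1:, k] + s * v[k + 1:]) / c
        v[k + 1:] = c * v[k + 1:] - s * L[k + 1:, k]
    return L

# Usage on a made-up SPD matrix.
rng = np.random.default_rng(6)
n = 8
G = rng.standard_normal((n, n))
Apos = G @ G.T + n * np.eye(n)
L = np.linalg.cholesky(Apos)
v = rng.standard_normal(n)
L_up = chol_update(L, v)
print(np.linalg.norm(L_up @ L_up.T - (Apos + np.outer(v, v))))
```

Applying a sequence of such updates is what lets the direct factorization from one interior point iteration serve as a preconditioner at later iterations.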