Results 1-10 of 19
An iterative solver-based infeasible primal-dual path-following algorithm for convex quadratic programming
 SIAM J. OPTIM
, 2006
"... In this paper we develop a longstep primaldual infeasible pathfollowing algorithm for convex quadratic programming (CQP) whose search directions are computed by means of a preconditioned iterative linear solver. We propose a new linear system, which we refer to as the augmented normal equation ..."
Abstract

Cited by 16 (2 self)
In this paper we develop a long-step primal-dual infeasible path-following algorithm for convex quadratic programming (CQP) whose search directions are computed by means of a preconditioned iterative linear solver. We propose a new linear system, which we refer to as the augmented normal equation (ANE), to determine the primal-dual search directions. Since the condition number of the ANE coefficient matrix may become large for degenerate CQP problems, we use a maximum weight basis preconditioner introduced in [A. R. L. Oliveira and D. C. Sorensen, Linear ...
On the Convergence of an Inexact Primal-Dual Interior Point Method for Linear Programming
, 2000
"... The inexact primaldual interior point method which is discussed in this paper chooses a new iterate along an approximation to the Newton direction. The method is the Kojima, Megiddo, and Mizuno globally convergent infeasible interior point algorithm. The inexact variation takes distinct step length ..."
Abstract

Cited by 13 (1 self)
The inexact primal-dual interior point method discussed in this paper chooses a new iterate along an approximation to the Newton direction. The method is the Kojima, Megiddo, and Mizuno globally convergent infeasible interior point algorithm. The inexact variation takes distinct step lengths in the primal and dual spaces and is globally convergent. Key Words. Linear programming, inexact primal-dual interior point algorithm, inexact search direction, short step lengths, termination criteria, global convergence. (Technical report number 188, Department of Informatics, University of Bergen.) 1 Introduction. Consider the primal linear programming problem

minimize c^T x subject to Ax = b, x ≥ 0, (1a)

where A is an m-by-n matrix of full rank m, b an m-vector, and c an n-vector; and its dual problem

maximize b^T y subject to A^T y + z = c, z ≥ 0. (1b)

The optimality conditions for the linear program pair (1a) and (1b) are the Karush-Kuhn-Tucker (KKT) conditions: F(x, ...
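The primal-dual pair (1a)/(1b) and its KKT conditions from the abstract above can be made concrete with a short residual computation. The sketch below is illustrative only, not the paper's implementation: it stacks primal feasibility, dual feasibility, and complementarity into one vector F(x, y, z), and the helper name `kkt_residual` is an assumption.

```python
import numpy as np

def kkt_residual(A, b, c, x, y, z):
    """Residual of the KKT conditions for the pair (1a)/(1b):
    Ax = b,  A^T y + z = c,  x_i z_i = 0  (with x >= 0, z >= 0)."""
    r_primal = A @ x - b        # primal feasibility
    r_dual = A.T @ y + z - c    # dual feasibility
    r_comp = x * z              # complementarity
    return np.concatenate([r_primal, r_dual, r_comp])

# Tiny example: minimize x1 + x2 subject to x1 + x2 = 1, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])
x = np.array([0.5, 0.5])
y = np.array([1.0])
z = c - A.T @ y                 # dual slack chosen to satisfy (1b)
print(np.linalg.norm(kkt_residual(A, b, c, x, y, z)))  # → 0.0
```

An interior point method drives this residual to zero while keeping x and z strictly positive.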
Computational issues for a new class of preconditioners
 Large-Scale Scientific Computations of Engineering and Environmental Problems II, Series Notes on Numerical Fluid Mechanics
, 2000
"... ..."
Preconditioning Indefinite Systems in Interior Point Methods for Large Scale Linear Optimization
, 2007
"... 1 Preconditioning Indefinite Systems in Interior Point Methods for Large Scale Linear Optimization Abstract We discuss the use of preconditioned conjugate gradients method for solving the reducedKKT systems arising in interior point algorithms for linear programming. The (indefinite) augmented syste ..."
Abstract

Cited by 9 (3 self)
We discuss the use of the preconditioned conjugate gradient method for solving the reduced KKT systems arising in interior point algorithms for linear programming. The (indefinite) augmented system form of this linear system has a number of advantages, notably a higher degree of sparsity than the (positive definite) normal equations form. Therefore we use the conjugate gradient method to solve the augmented system and look for a suitable preconditioner. An explicit null space representation of linear constraints is constructed by using a nonsingular basis matrix identified from an estimate of the optimal partition in the linear program. This is achieved by means of recently developed efficient basis matrix factorisation techniques which exploit hyper-sparsity and are used in implementations of the revised simplex method. The approach has been implemented within the HOPDM interior point solver and applied to medium- and large-scale problems from public domain test collections. Computational experience is encouraging.
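The trade-off described above, between the sparse indefinite augmented system and the denser positive definite normal equations, can be sketched numerically. The helper name `augmented_and_normal` and the diagonal scaling `theta` below are illustrative assumptions, not part of the HOPDM implementation; the Schur-complement check at the end shows the two forms are algebraically equivalent.

```python
import numpy as np

def augmented_and_normal(A, theta):
    """Assemble the two equivalent linear-system forms for an interior point
    step, given the positive diagonal scaling theta (e.g. theta_i = x_i/z_i):
      augmented system  K = [[-diag(1/theta), A^T], [A, 0]]  (sparse, indefinite)
      normal equations  M = A diag(theta) A^T                (SPD, but denser)"""
    m, n = A.shape
    K = np.zeros((n + m, n + m))
    K[:n, :n] = -np.diag(1.0 / theta)
    K[:n, n:] = A.T
    K[n:, :n] = A
    M = A @ np.diag(theta) @ A.T
    return K, M

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
theta = np.array([1.0, 4.0, 2.0])
K, M = augmented_and_normal(A, theta)

# Eliminating the (1,1) block of K (its Schur complement) recovers M:
n = A.shape[1]
S = K[n:, n:] - K[n:, :n] @ np.linalg.inv(K[:n, :n]) @ K[:n, n:]
print(np.allclose(S, M))  # → True
```

On large problems one never forms these matrices densely; the point is only that a preconditioner for one form induces one for the other.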
Convergence Analysis of a Long-Step Primal-Dual Infeasible Interior-Point LP Algorithm Based on Iterative Linear Solvers
, 2003
"... In this paper, we consider a modified version of a wellknown longstep primaldual infeasible IP algorithm for solving the linear program min{cT x: Ax = b, x ≥ 0}, A ∈ Rm×n, where the search directions are computed by means of an iterative linear solver applied to a preconditioned normal system of ..."
Abstract

Cited by 8 (3 self)
In this paper, we consider a modified version of a well-known long-step primal-dual infeasible IP algorithm for solving the linear program min{c^T x : Ax = b, x ≥ 0}, A ∈ R^{m×n}, where the search directions are computed by means of an iterative linear solver applied to a preconditioned normal system of equations. We show that the number of (inner) iterations of the iterative linear solver at each (outer) iteration of the algorithm is bounded by a polynomial in m, n, and a certain condition number associated with A, while the number of outer iterations is bounded by O(n² log ε⁻¹), where ε is a given relative accuracy level. As a special case, it follows that the total number of inner iterations is polynomial in m and n for the minimum cost network flow problem.
On the Properties of Preconditioners for Robust Linear Regression
, 2000
"... In this paper, we consider solving the robust linear regression problem y = Ax + ∈ by an inexact Newton method and an iteratively reweighted least squares method. We show that each of these methods can be combined with the preconditioned conjugate gradient least square algorithm to solve large, spar ..."
Abstract

Cited by 4 (3 self)
In this paper, we consider solving the robust linear regression problem y = Ax + ε by an inexact Newton method and an iteratively reweighted least squares method. We show that each of these methods can be combined with the preconditioned conjugate gradient least squares algorithm to solve large, sparse systems of linear equations efficiently. We consider the constant preconditioner A^T A and preconditioners based on low-rank updates and downdates of existing matrix factorizations. Numerical results are given to demonstrate the effectiveness of these preconditioners.
A New Function for Robust Linear Regression: An Iterative Approach
 16th IMACS WORLD CONGRESS 2000 on Scientific Computation, Applied Mathematics and Simulation
, 2000
"... In this paper, we consider solving the robust linear regression problem. We show that IRLS and Newton method can each be combined with preconditioned conjugate gradient least squares method to solve large, sparse, rectangular systems of linear, algebraic equations efficiently. We define a new functi ..."
Abstract

Cited by 3 (2 self)
In this paper, we consider solving the robust linear regression problem. We show that IRLS and Newton's method can each be combined with the preconditioned conjugate gradient least squares method to solve large, sparse, rectangular systems of linear algebraic equations efficiently. We define a new function that leads to a cheap preconditioner. Further, for this function, we show that the upper bound on the condition number of the preconditioned matrix is independent of the conditioning of the data matrix (it is determined by a predefined constant). We give numerical results that demonstrate the effectiveness of preconditioners based on this function. Key words: Robust regression, Iteratively reweighted least squares, Newton's method, New weighting function, Conjugate gradient least squares method, Preconditioner. AMS subject classifications: 62J05, 65D10, 65F10, 65F20. 1 Introduction. Consider the standard linear regression model

y = Ax + ε, (1)

where y ∈ R^m is a vector of observations, ...
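The IRLS scheme that the two robust-regression abstracts above build on can be sketched in a few lines. The weighting function below is the standard Huber weight, used as a stand-in because the paper's new function is not reproduced in the abstract; the name `irls_huber`, the parameters, and the dense `lstsq` inner solve are all illustrative assumptions.

```python
import numpy as np

def irls_huber(A, y, delta=1.0, iters=20):
    """Iteratively reweighted least squares for robust regression y ≈ Ax,
    with Huber weights standing in for the paper's weighting function."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # ordinary LS start
    for _ in range(iters):
        r = np.abs(y - A @ x)
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
        sw = np.sqrt(w)
        # Weighted LS step; on large sparse problems a (preconditioned)
        # CGLS-type solver would replace the dense lstsq call here.
        x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return x

rng = np.random.default_rng(0)
A = np.column_stack([np.ones(50), np.linspace(0.0, 1.0, 50)])
y = A @ np.array([1.0, 2.0]) + 0.01 * rng.standard_normal(50)
y[3] += 10.0                                          # one gross outlier
print(irls_huber(A, y))                               # close to [1.0, 2.0]
```

Because each sweep solves a weighted least squares problem with the same data matrix A, a good preconditioner can be reused across sweeps, which is what makes the cheap-preconditioner result above attractive.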
Properties and Computational Issues of a Preconditioner for Interior Point Methods
, 1999
"... This is a collection of four conference proceedings on scientific computation. In the proceedings, we discuss solving a sequence of linear systems arising from the application of an interior point method to a linear programming problem. The sequence of linear systems is solved by alternating between ..."
Abstract

Cited by 1 (1 self)
This is a collection of four conference proceedings on scientific computation. In the proceedings, we discuss solving a sequence of linear systems arising from the application of an interior point method to a linear programming problem. The sequence of linear systems is solved by alternating between a direct and an iterative method. The preconditioner is based on low-rank modifications of the coefficient matrix where a direct solution technique has been used. We compare two different techniques of forming the low-rank modification matrix; namely, one by Wang and O'Leary [11] and the other by Baryamureeba, Steihaug and Zhang [3]. The theory and numerical testing strongly support the latter. We derive a sparse algorithm for modifying the Cholesky factors by a low-rank matrix, discuss the computational issues of this preconditioner, and finally give numerical results that show the approach of alternating between a direct and an iterative method to be promising. Key Words. Linear Programmi...
SOLVING SCALARIZED MULTIOBJECTIVE NETWORK FLOW PROBLEMS WITH AN INTERIOR POINT METHOD
, 2009
"... In this paper we present a primaldual interiorpoint algorithm to solve a class of multiobjective network flow problems. More precisely, our algorithm is an extension of the singleobjective primal infeasible dual feasible inexact interior point method for multiobjective linear network flow pro ..."
Abstract

Cited by 1 (0 self)
In this paper we present a primal-dual interior-point algorithm to solve a class of multiobjective network flow problems. More precisely, our algorithm is an extension of the single-objective primal-infeasible dual-feasible inexact interior point method to multiobjective linear network flow problems. Our algorithm is contrasted with standard interior point methods, and experimental results on biobjective instances are reported. The multiobjective instances are converted into single-objective problems with the aid of an achievement function, which is particularly adequate for interactive decision-making methods.
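The conversion of a multiobjective instance into a single-objective one via an achievement function can be illustrated with a standard Wierzbicki-style scalarization. The paper's exact function may differ; the name `achievement`, the reference point, the weights, and the cost vectors below are all illustrative assumptions.

```python
import numpy as np

def achievement(f, ref, lam, rho=1e-3):
    """Wierzbicki-style achievement scalarizing function: a weighted
    Chebyshev distance to the reference (aspiration) point plus a small
    augmentation term that favours properly efficient solutions."""
    d = lam * (np.asarray(f, dtype=float) - np.asarray(ref, dtype=float))
    return d.max() + rho * d.sum()

# Biobjective arc costs and a candidate flow x (illustrative numbers):
c1 = np.array([2.0, 1.0, 3.0])
c2 = np.array([1.0, 4.0, 1.0])
x = np.array([1.0, 0.0, 2.0])
f = np.array([c1 @ x, c2 @ x])        # objective vector (8.0, 3.0)
print(achievement(f, ref=[5.0, 5.0], lam=np.array([1.0, 1.0])))
```

Minimizing this scalar over the flow polyhedron yields one single-objective problem per choice of reference point, which is what makes the formulation convenient for interactive decision-making.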
An iterative solver-based long-step infeasible primal-dual path-following algorithm for convex QP based on a class of preconditioners
 OPTIMIZATION METHODS & SOFTWARE
, 2009
"... ..."