Results 1–6 of 6
Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate
ACM Transactions on Mathematical Software, 2008
"... CHOLMOD is a set of routines for factorizing sparse symmetric positive definite matrices of the form A or A A T, updating/downdating a sparse Cholesky factorization, solving linear systems, updating/downdating the solution to the triangular system Lx = b, and many other sparse matrix functions for b ..."
Abstract

Cited by 109 (8 self)
CHOLMOD is a set of routines for factorizing sparse symmetric positive definite matrices of the form A or AAᵀ, updating/downdating a sparse Cholesky factorization, solving linear systems, updating/downdating the solution to the triangular system Lx = b, and many other sparse matrix functions for both symmetric and unsymmetric matrices. Its supernodal Cholesky factorization relies on LAPACK and the Level-3 BLAS, and obtains a substantial fraction of the peak performance of the BLAS. Both real and complex matrices are supported. CHOLMOD is written in ANSI/ISO C, with both C and MATLAB interfaces. It appears in MATLAB 7.2 as x=A\b when A is sparse symmetric positive definite, as well as in several other sparse matrix functions.
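CHOLMOD's supernodal sparse factorization is far more sophisticated than anything that fits here, but the decomposition it computes, A = LLᵀ for a symmetric positive definite A, can be sketched with the textbook dense algorithm in plain Python (an illustrative sketch only; function name and layout are ours, not CHOLMOD's API):

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L Lᵀ, for a symmetric
    positive definite matrix A given as a list of row lists.
    Dense textbook algorithm, O(n^3); real sparse codes such as
    CHOLMOD exploit the zero pattern instead."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: subtract the squares of the row computed so far.
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)
        # Entries below the diagonal in column j.
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

# Example: a small SPD matrix.
A = [[4.0, 2.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
L = cholesky(A)
# L[0][0] == 2.0, L[1][0] == 1.0, L[1][1] == 2.0
```

Once L is available, a solve like MATLAB's x=A\b reduces to one forward substitution with L and one back substitution with Lᵀ.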
Row modifications of a sparse Cholesky factorization
SIAM J. Matrix Anal. Appl., 2005
"... Abstract. Given a sparse, symmetric positive definite matrix C and an associated sparse Cholesky factorization LDLT, we develop sparse techniques for updating the factorization after a symmetric modification of a row and column of C. We show how the modification in the Cholesky factorization associa ..."
Abstract

Cited by 21 (4 self)
Given a sparse, symmetric positive definite matrix C and an associated sparse Cholesky factorization LDLᵀ, we develop sparse techniques for updating the factorization after a symmetric modification of a row and column of C. We show how the modification in the Cholesky factorization associated with this rank-2 modification of C can be computed efficiently using a sparse rank-1 technique developed in an earlier paper [SIAM J. Matrix Anal. Appl., 20 (1999), pp. 606–627]. We also determine how the solution of a linear system Lx = b changes after changing a row and column of C or after a rank-r change in C.
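The rank-1 building block these update/downdate papers rely on can be illustrated with the classical dense rank-1 Cholesky update (the Givens-rotation-based textbook algorithm for A + xxᵀ, not the sparse technique of the paper; names are ours):

```python
import math

def cholupdate(L, x):
    """Rank-1 update: given lower-triangular L with A = L Lᵀ,
    return L' with A + x xᵀ = L' L'ᵀ.  Classical dense algorithm
    using Givens-style rotations; O(n^2) work.  Inputs are copied."""
    n = len(L)
    L = [row[:] for row in L]
    x = x[:]
    for k in range(n):
        # Rotate (L[k][k], x[k]) onto the diagonal.
        r = math.hypot(L[k][k], x[k])
        c = r / L[k][k]
        s = x[k] / L[k][k]
        L[k][k] = r
        # Apply the same rotation to the rest of column k and to x.
        for i in range(k + 1, n):
            L[i][k] = (L[i][k] + s * x[i]) / c
            x[i] = c * x[i] - s * L[i][k]
    return L
```

A symmetric change to one row and column of C, as in the paper above, is a rank-2 modification, so it can be expressed as one such update plus one downdate.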
Elliptic optimal control problems with L1-control cost and applications for the placement of control devices
On the factorization of simplex basis matrices
"... In the simplex algorithm, solving linear systems with the basis matrix and its transpose accounts for a large part of the total computation time. The most widely used solution technique is sparse LU factorization, paired with an updating scheme that allows to use the factors over several iterations. ..."
Abstract
In the simplex algorithm, solving linear systems with the basis matrix and its transpose accounts for a large part of the total computation time. The most widely used solution technique is sparse LU factorization, paired with an updating scheme that allows the factors to be reused over several iterations. Clearly, a small number of fill-in elements in the LU factors is critical for the overall performance. Using a wide range of LPs we show numerically that after a simple permutation the nontriangular part of the basis matrix is so small that the whole matrix can be factorized with (relative) fill-in close to the optimum. This permutation has been exploited by simplex practitioners for many years. But to our knowledge no systematic numerical study has been published that demonstrates the effective reduction to a surprisingly small nontriangular problem, even for large-scale LPs. For the factorization of the nontriangular part most existing simplex codes use some variant of dynamic Markowitz pivoting, which originated in the late 1950s. We also show numerically that, in terms of fill-in and in the simplex context, dynamic Markowitz is quite consistently superior to
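The Markowitz criterion mentioned above picks, among the eligible nonzeros, the pivot that minimizes the fill-in bound (rᵢ − 1)(cⱼ − 1), where rᵢ and cⱼ are the nonzero counts of the pivot's row and column. A static sketch of just the pivot selection (function name and input representation are ours; real codes select dynamically, update the counts after each elimination step, and combine the criterion with a numerical stability threshold):

```python
def markowitz_pivot(pattern):
    """Pick a pivot position (i, j) from `pattern`, a set of (row, col)
    pairs marking nonzero entries, minimizing the Markowitz cost
    (r_i - 1) * (c_j - 1) -- an upper bound on the fill-in created
    by eliminating with that pivot."""
    rows, cols = {}, {}
    for i, j in pattern:
        rows[i] = rows.get(i, 0) + 1
        cols[j] = cols.get(j, 0) + 1
    return min(pattern, key=lambda ij: (rows[ij[0]] - 1) * (cols[ij[1]] - 1))

# Example: a 3x3 "arrow" pattern with a dense first row and column.
# Pivoting on (0, 0) would cost (3-1)*(3-1) = 4 and fill the matrix;
# Markowitz instead picks a diagonal entry in the sparse trailing
# block, with cost (2-1)*(2-1) = 1 and no fill-in.
arrow = {(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0), (2, 2)}
pivot = markowitz_pivot(arrow)
```

This is exactly why eliminating the nontriangular part with a good pivot order matters: a bad pivot in a dense row and column can fill in the entire remaining submatrix.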
Dual multilevel optimization
"... Abstract We study the structure of dual optimization problems associated with linear constraints, bounds on the variables, and separable cost. We show how the separability of the dual cost function is related to the sparsity structure of the linear equations. As a result, techniques for ordering spa ..."
Abstract
We study the structure of dual optimization problems associated with linear constraints, bounds on the variables, and separable cost. We show how the separability of the dual cost function is related to the sparsity structure of the linear equations. As a result, techniques for ordering sparse matrices based on nested dissection or graph partitioning can be used to decompose a dual optimization problem into independent subproblems that could be solved in parallel. The performance of a multilevel implementation of the Dual Active Set Algorithm is compared with CPLEX Simplex and Barrier codes using Netlib linear programming test problems.
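The separability idea can be made concrete in its simplest form: if two dual variables never appear in a common constraint, they can be optimized independently. A toy sketch (ours, not the paper's multilevel algorithm, which uses nested dissection rather than plain connected components) groups variables into independent blocks with a union-find structure:

```python
def independent_blocks(n, edges):
    """Group variables 0..n-1 into blocks that can be solved
    independently: variables joined by an edge (i.e. sharing a
    constraint) must land in the same block.  Union-find with
    path halving."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    for a, b in edges:
        parent[find(a)] = find(b)

    blocks = {}
    for v in range(n):
        blocks.setdefault(find(v), []).append(v)
    return sorted(blocks.values())

# Variables 0-1 share a constraint, 2-3 share another, 4 is free:
# three independent subproblems, solvable in parallel.
print(independent_blocks(5, [(0, 1), (2, 3)]))
```

Nested dissection generalizes this: even when the graph is connected, removing a small separator splits the remaining variables into parts that interact only through the separator.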