Results 1–10 of 11
Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate
ACM Transactions on Mathematical Software, 2008
"... CHOLMOD is a set of routines for factorizing sparse symmetric positive definite matrices of the form A or A A T, updating/downdating a sparse Cholesky factorization, solving linear systems, updating/downdating the solution to the triangular system Lx = b, and many other sparse matrix functions for b ..."
Abstract

Cited by 109 (8 self)
CHOLMOD is a set of routines for factorizing sparse symmetric positive definite matrices of the form A or AA^T, updating/downdating a sparse Cholesky factorization, solving linear systems, updating/downdating the solution to the triangular system Lx = b, and many other sparse matrix functions for both symmetric and unsymmetric matrices. Its supernodal Cholesky factorization relies on LAPACK and the Level-3 BLAS, and obtains a substantial fraction of the peak performance of the BLAS. Both real and complex matrices are supported. CHOLMOD is written in ANSI/ISO C, with both C and MATLAB interfaces. It appears in MATLAB 7.2 as x=A\b when A is sparse symmetric positive definite, as well as in several other sparse matrix functions.
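As a rough illustration of the C interface summarised above, the sketch below is modelled on the simple demo distributed with CHOLMOD: it reads a sparse symmetric positive definite matrix in Matrix Market form from stdin and solves Ax = b with b a vector of ones. Error checking is omitted and the demo shipped with the package may differ in detail.

#include <stdio.h>
#include "cholmod.h"

int main(void)
{
    cholmod_common c;
    cholmod_start(&c);                                   /* start CHOLMOD */
    cholmod_sparse *A = cholmod_read_sparse(stdin, &c);  /* matrix in Matrix Market form */
    cholmod_dense  *b = cholmod_ones(A->nrow, 1, A->xtype, &c);   /* b = ones(n,1) */
    cholmod_factor *L = cholmod_analyze(A, &c);          /* fill-reducing analysis */
    cholmod_factorize(A, L, &c);                         /* numerical factorization */
    cholmod_dense  *x = cholmod_solve(CHOLMOD_A, L, b, &c);       /* solve Ax = b */
    printf("x[0] = %g\n", ((double *) x->x)[0]);
    cholmod_free_factor(&L, &c);
    cholmod_free_sparse(&A, &c);
    cholmod_free_dense(&b, &c);
    cholmod_free_dense(&x, &c);
    cholmod_finish(&c);                                  /* finish CHOLMOD */
    return 0;
}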
A numerical evaluation of sparse direct solvers for the solution of large sparse, symmetric linear systems of equations
, 2005
"... ..."
WSMP: Watson sparse matrix package, part I—direct solution of symmetric sparse systems
IBM T. J. Watson Research Center, Yorktown Heights
, 2010
"... ..."
unknown title
"... Abstract. A key technique for controlling numerical stability in sparse direct solvers is threshold partial pivoting. When selecting a pivot, the entire candidate pivot column below the diagonal must be uptodate and must be scanned. If the factorization is parallelized across a large number of cor ..."
Abstract
 Add to MetaCart
(Show Context)
A key technique for controlling numerical stability in sparse direct solvers is threshold partial pivoting. When selecting a pivot, the entire candidate pivot column below the diagonal must be up to date and must be scanned. If the factorization is parallelized across a large number of cores, communication latencies can be the dominant computational cost. In this paper, we propose two alternative pivoting strategies for sparse symmetric indefinite matrices of full rank that significantly reduce communication by compressing the necessary data into a small matrix that can be used to select pivots. Once pivots have been chosen, they can be applied in a communication-efficient fashion. For an n×p submatrix on P processors, we show our methods perform a factorization using O(log P) messages instead of the O(p log P) for threshold partial pivoting. The additional costs in terms of operations and communication bandwidth are relatively small. A stability proof is given, and numerical results using a range of symmetric indefinite matrices arising from practical problems are used to demonstrate practical robustness. Timing results on large random examples illustrate the potential speedup on current multicore machines.
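For context, the baseline test the abstract refers to can be sketched as follows: a 1x1 candidate pivot a_kk is accepted only if it is at least a threshold fraction u of the largest fully updated entry below it in its column, which is why the whole column must be scanned before each pivot decision. This is an illustrative fragment only; practical symmetric indefinite solvers combine such tests with 2x2 pivots, and the function name, data and threshold values are my own.

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* 1x1 threshold partial pivoting test.  col[0] is the candidate diagonal
 * entry a_kk; col[1..m-1] are the fully updated entries below it in the
 * candidate column; u in (0,1] is the pivot threshold. */
static bool accept_1x1_pivot(const double *col, int m, double u)
{
    double cmax = 0.0;
    for (int i = 1; i < m; i++)            /* scan the whole column below a_kk */
        if (fabs(col[i]) > cmax)
            cmax = fabs(col[i]);
    return fabs(col[0]) >= u * cmax;       /* accept => bounded element growth */
}

int main(void)
{
    double col[] = { 0.5, -3.0, 1.0, 4.0 };                     /* a_kk = 0.5, max below = 4 */
    printf("u = 0.01: %d\n", accept_1x1_pivot(col, 4, 0.01));   /* 1: accepted */
    printf("u = 0.50: %d\n", accept_1x1_pivot(col, 4, 0.50));   /* 0: rejected */
    return 0;
}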
Numerical Linear Algebra with Applications, 2012. Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/nla.1810
"... The analyse phase of a sparse direct solver for symmetrically structured linear systems of equations is used to determine the sparsity pattern of the matrix factor. This allows the subsequent numerical factorisation and solve phases to be executed efficiently. Many direct solvers require the system ..."
Abstract
 Add to MetaCart
The analyse phase of a sparse direct solver for symmetrically structured linear systems of equations is used to determine the sparsity pattern of the matrix factor. This allows the subsequent numerical factorisation and solve phases to be executed efficiently. Many direct solvers require the system matrix to be in assembled form. For problems arising from finite element applications, assembling and then using the system matrix can be costly in terms of both time and memory. This paper describes and implements a variant of the work of Gilbert, Ng and Peyton for matrices in elemental form. The proposed variant works with an equivalent matrix that avoids explicitly assembling the system matrix and exploits supervariables. Numerical experiments using problems from practical applications are used to demonstrate the significant advantages of working directly with the elemental form.
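As a concrete, if generic, example of the symbolic work an analyse phase performs, the sketch below computes the elimination tree of a sparse symmetric matrix with Liu's algorithm; the tree underpins prediction of the factor's sparsity pattern. It is a standard building block rather than the elemental-form variant described in the abstract, and the function name and toy matrix are my own.

#include <stdio.h>
#include <stdlib.h>

/* Elimination tree of a sparse symmetric matrix (Liu's algorithm).
 * Input: the pattern of the strict upper triangle of A in compressed-sparse-
 * column form, so column k lists the row indices i < k with A(i,k) nonzero.
 * Output: parent[k] is the parent of node k in the elimination tree, or -1
 * if k is a root. */
static void etree(int n, const int *Ap, const int *Ai, int *parent)
{
    int *ancestor = malloc((size_t) n * sizeof(int));
    for (int k = 0; k < n; k++) {
        parent[k] = -1;
        ancestor[k] = -1;
        for (int p = Ap[k]; p < Ap[k + 1]; p++) {
            /* Walk from row i towards the current root, making k the new
             * ancestor of every node visited (path compression). */
            for (int i = Ai[p]; i != -1 && i < k; ) {
                int next = ancestor[i];
                ancestor[i] = k;
                if (next == -1)
                    parent[i] = k;
                i = next;
            }
        }
    }
    free(ancestor);
}

int main(void)
{
    /* 4x4 arrowhead example: A(0,3), A(1,3), A(2,3) are the only entries
     * above the diagonal, so the tree is a star rooted at node 3. */
    int Ap[] = { 0, 0, 0, 0, 3 };
    int Ai[] = { 0, 1, 2 };
    int parent[4];
    etree(4, Ap, Ai, parent);
    for (int k = 0; k < 4; k++)
        printf("parent[%d] = %d\n", k, parent[k]);
    return 0;
}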
unknown title
"... Abstract. The rapid emergence of multicore machines has led to the need to design new algorithms that are efficient on these architectures. Here, we consider the solution of sparse symmetric positivedefinite linear systems by Cholesky factorization. We were motivated by the successful division of t ..."
Abstract
 Add to MetaCart
The rapid emergence of multicore machines has led to the need to design new algorithms that are efficient on these architectures. Here, we consider the solution of sparse symmetric positive-definite linear systems by Cholesky factorization. We were motivated by the successful division of the computation in the dense case into tasks on blocks and use of a task manager to exploit all the parallelism that is available between these tasks, whose dependencies may be represented by a directed acyclic graph (DAG). Our sparse algorithm is built on the assembly tree and subdivides the work at each node into tasks on blocks of the Cholesky factor. The dependencies between these tasks may again be represented by a DAG. To limit memory requirements, blocks are updated directly rather than through generated-element matrices. Our algorithm is implemented within a new efficient and portable solver HSL_MA87. It is written in Fortran 95 plus OpenMP and is available as part of the software library HSL. Using problems arising from a range of applications, we present experimental results that support our design choices and demonstrate that HSL_MA87 obtains good serial and parallel times on our 8-core test machines. Comparisons are made with existing modern solvers and show that HSL_MA87 performs well, particularly in the case of very large problems.
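The dense-case idea the abstract starts from, a DAG of tasks on blocks whose dependencies a runtime tracks, can be sketched with OpenMP tasks. The fragment below is written in C rather than the solver's Fortran 95, uses naive kernels in place of Level-3 BLAS calls, and is only the dense blocked analogue, not HSL_MA87's sparse algorithm built on the assembly tree; sizes and names are illustrative. Compile with something like cc -fopenmp -O2 file.c -lm.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N  256                              /* matrix dimension */
#define BS 64                               /* block size; N must be a multiple of BS */
#define NB (N / BS)
#define BLK(A, I, J) (&(A)[((size_t)(I) * BS) * N + (size_t)(J) * BS])

/* Unblocked lower Cholesky of a bs-by-bs diagonal block (row-major, leading dimension ld). */
static void potrf(double *a, int bs, int ld)
{
    for (int k = 0; k < bs; k++) {
        a[k * ld + k] = sqrt(a[k * ld + k]);
        for (int i = k + 1; i < bs; i++)
            a[i * ld + k] /= a[k * ld + k];
        for (int j = k + 1; j < bs; j++)
            for (int i = j; i < bs; i++)
                a[i * ld + j] -= a[i * ld + k] * a[j * ld + k];
    }
}

/* B := B * L^{-T} for the lower-triangular diagonal block L (the "solve" task). */
static void trsm(const double *L, double *B, int bs, int ld)
{
    for (int j = 0; j < bs; j++)
        for (int i = 0; i < bs; i++) {
            double s = B[i * ld + j];
            for (int p = 0; p < j; p++)
                s -= B[i * ld + p] * L[j * ld + p];
            B[i * ld + j] = s / L[j * ld + j];
        }
}

/* C := C - A * B^T (the "update" task). */
static void update(double *C, const double *A, const double *B, int bs, int ld)
{
    for (int i = 0; i < bs; i++)
        for (int j = 0; j < bs; j++) {
            double s = 0.0;
            for (int p = 0; p < bs; p++)
                s += A[i * ld + p] * B[j * ld + p];
            C[i * ld + j] -= s;
        }
}

int main(void)
{
    double *A = malloc((size_t) N * N * sizeof(double));
    for (int i = 0; i < N; i++)
        for (int j = 0; j <= i; j++)
            A[i * N + j] = A[j * N + i] = (double)((i * 31 + j * 17) % 7) / 7.0;
    for (int i = 0; i < N; i++)
        A[i * N + i] += N;                  /* diagonal dominance keeps A positive definite */

    /* Each block task declares what it reads and writes; the OpenMP runtime
     * builds the DAG and runs a task as soon as its dependencies are met. */
    #pragma omp parallel
    #pragma omp single
    for (int k = 0; k < NB; k++) {
        double *akk = BLK(A, k, k);
        #pragma omp task depend(inout: akk[0])
        potrf(akk, BS, N);
        for (int i = k + 1; i < NB; i++) {
            double *aik = BLK(A, i, k);
            #pragma omp task depend(in: akk[0]) depend(inout: aik[0])
            trsm(akk, aik, BS, N);
        }
        for (int i = k + 1; i < NB; i++)
            for (int j = k + 1; j <= i; j++) {
                double *aik = BLK(A, i, k), *ajk = BLK(A, j, k), *aij = BLK(A, i, j);
                #pragma omp task depend(in: aik[0]) depend(in: ajk[0]) depend(inout: aij[0])
                update(aij, aik, ajk, BS, N);
            }
    }
    printf("L(0,0) = %g\n", A[0]);          /* sqrt of the original A(0,0), 16 here */
    free(A);
    return 0;
}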
A Large-Scale Quadratic ...
, 2008
"... Quadratic programming (QP) problems arise naturally in a variety of applications. In many cases, a good estimate of the solution may be available. It is desirable to be able to utilize such information in order to reduce the computational cost of finding the solution. Activeset methods for solving Q ..."
Abstract
 Add to MetaCart
Quadratic programming (QP) problems arise naturally in a variety of applications. In many cases, a good estimate of the solution may be available. It is desirable to be able to utilize such information in order to reduce the computational cost of finding the solution. Active-set methods for solving QP problems differ from interior-point methods in being able to take full advantage of such warm-start situations. QPBLU is a new Fortran 95 package for minimizing a convex quadratic function with linear constraints and bounds. QPBLU is an active-set method that uses block-LU updates of an initial KKT system to handle active-set changes as well as low-rank Hessian updates. It is intended for convex QP problems in which the linear constraint matrix is sparse and many degrees of freedom are expected at the solution. Warm-start capabilities allow the solver to take advantage of good estimates of the optimal active set or solution. A key feature of the method is the ability to utilize a variety of sparse linear system packages to solve the KKT systems. QPBLU has been tested on QP problems derived from linear programming problems.
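The block-LU idea can be summarised as follows, in notation of my own choosing and not necessarily QPBLU's exact formulation: the KKT matrix K_0 at the starting point is factorized once, and each active-set change only borders the system, so the large factors of K_0 are reused unchanged while a small Schur complement is updated and refactorized cheaply.

\[
K_0 \;=\; \begin{pmatrix} H & A^{T} \\ A & 0 \end{pmatrix} \;=\; L\,U,
\qquad
\begin{pmatrix} K_0 & V \\ V^{T} & D \end{pmatrix}
\;=\;
\begin{pmatrix} L & 0 \\ Z^{T} & I \end{pmatrix}
\begin{pmatrix} U & Y \\ 0 & C \end{pmatrix},
\]
\[
L\,Y = V, \qquad U^{T} Z = V, \qquad C = D - Z^{T} Y,
\]

where the border (V, D) collects the modifications introduced by active-set changes and low-rank Hessian updates, and each solve with the bordered matrix needs only triangular solves with L and U plus a dense factorization of the small matrix C.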
New parallel sparse direct solvers for engineering applications
Science and Technology Facilities Council preprint (available online)
, 1361
"... Neither the Council nor the Laboratory accept any responsibility for loss or damage arising from the use of information contained in any of their reports or in any communication about their tests or investigations. New parallel sparse direct solvers for engineering applications Jonathan Hogg and Jen ..."
Abstract
 Add to MetaCart
(Show Context)
Jonathan Hogg and Jennifer Scott. At the heart of many computations in engineering lies the need to efficiently and accurately solve large sparse linear systems of equations. Direct methods are frequently the method of choice because of their robustness, accuracy and their potential for use as black-box solvers. In the last few years, there have been many new developments, and a number of new modern parallel general-purpose sparse solvers have been written for inclusion within the HSL mathematical software library.