Results 1–10 of 36
Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate
 ACM Transactions on Mathematical Software
, 2008
Abstract

Cited by 111 (8 self)
CHOLMOD is a set of routines for factorizing sparse symmetric positive definite matrices of the form A or AAᵀ, updating/downdating a sparse Cholesky factorization, solving linear systems, updating/downdating the solution to the triangular system Lx = b, and many other sparse matrix functions for both symmetric and unsymmetric matrices. Its supernodal Cholesky factorization relies on LAPACK and the Level-3 BLAS, and obtains a substantial fraction of the peak performance of the BLAS. Both real and complex matrices are supported. CHOLMOD is written in ANSI/ISO C, with both C and MATLAB interfaces. It appears in MATLAB 7.2 as x=A\b when A is sparse symmetric positive definite, as well as in several other sparse matrix functions.
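The rank-1 update the abstract refers to, replacing A by A + wwᵀ without refactorizing from scratch, can be sketched in dense form. This is the textbook dense algorithm, not CHOLMOD's sparse supernodal implementation; the function name is illustrative:

```python
import numpy as np

def cholupdate(L, w):
    """Rank-1 Cholesky update: given lower-triangular L with A = L @ L.T,
    return L1 such that L1 @ L1.T = A + outer(w, w).
    Textbook dense algorithm (hypothetical name), not CHOLMOD's code."""
    L = L.copy()
    w = w.astype(float).copy()
    n = w.size
    for k in range(n):
        r = np.hypot(L[k, k], w[k])          # new diagonal entry
        c, s = r / L[k, k], w[k] / L[k, k]   # rotation that zeroes w[k]
        L[k, k] = r
        if k + 1 < n:
            # apply the rotation to the rest of column k and to w
            L[k+1:, k] = (L[k+1:, k] + s * w[k+1:]) / c
            w[k+1:] = c * w[k+1:] - s * L[k+1:, k]
    return L
```

A downdate (A − wwᵀ) follows the same column-by-column pattern with hyperbolic rather than circular rotations, which is why it can fail when the downdated matrix is no longer positive definite.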
A numerical evaluation of sparse direct solvers for the solution of large sparse, symmetric linear systems of equations
, 2005
Dynamic supernodes in sparse Cholesky update/downdate and triangular solves
 ACM Trans. Math. Software
, 2006
Abstract

Cited by 30 (10 self)
The supernodal method for sparse Cholesky factorization represents the factor L as a set of supernodes, each consisting of a contiguous set of columns of L with identical nonzero pattern. A conventional supernode is stored as a dense submatrix. While this is suitable for sparse Cholesky factorization, where the nonzero pattern of L does not change, it is not suitable for methods that modify a sparse Cholesky factorization after a low-rank change to A (an update/downdate, A = A ± WWᵀ). Supernodes merge and split apart during an update/downdate. Dynamic supernodes are introduced, which allow a sparse Cholesky update/downdate to obtain performance competitive with conventional supernodal methods. A dynamic supernodal solver is shown to exceed the performance of the conventional (BLAS-based) supernodal method for solving triangular systems. These methods are incorporated into CHOLMOD, a sparse Cholesky factorization and update/downdate package, which forms the basis of x=A\b in MATLAB when A is sparse and symmetric positive definite.
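The performance argument above rests on treating each supernode as a dense block, so that a triangular solve becomes a short sequence of dense kernel calls instead of scalar updates. A minimal dense sketch of that blocking (illustrative only; CHOLMOD's actual supernodal data structures are sparse):

```python
import numpy as np

def blocked_forward_solve(L, b, nb=2):
    """Forward substitution Lx = b processed nb columns at a time,
    mimicking how a supernodal solver handles each supernode as a
    dense block (hypothetical sketch, not CHOLMOD's implementation)."""
    n = L.shape[0]
    x = b.astype(float).copy()
    for k in range(0, n, nb):
        j = min(k + nb, n)
        # dense triangular solve on the diagonal block (cf. BLAS trsv/trsm)
        x[k:j] = np.linalg.solve(L[k:j, k:j], x[k:j])
        # dense update of the trailing right-hand side (cf. BLAS gemv/gemm)
        x[j:] -= L[j:, k:j] @ x[k:j]
    return x
```

The dynamic-supernode idea in the paper amounts to discovering these block boundaries on the fly from the current nonzero pattern rather than fixing them at factorization time.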
An Out-of-Core Sparse Cholesky Solver
, 2009
Abstract

Cited by 23 (8 self)
Direct methods for solving large sparse linear systems of equations are popular because of their generality and robustness. Their main weakness is that the memory they require usually increases rapidly with problem size. We discuss the design and development of the first release of a new symmetric direct solver that aims to circumvent this limitation by allowing the system matrix, intermediate data, and the matrix factors to be stored externally. The code, which is written in Fortran and called HSL_MA77, implements a multifrontal algorithm. The first release is for positive-definite systems and performs a Cholesky factorization. Special attention is paid to the use of efficient dense linear algebra kernel codes that handle the full-matrix operations on the frontal matrix and to the input/output operations. The input/output operations are performed using a separate package that provides a virtual-memory system and allows the data to be spread over many files; for very large problems these may be held on more than one device. Numerical results are presented for a collection of 30 large real-world problems, all of which were solved successfully.
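The core idea of holding the factor externally can be sketched as a panel-by-panel right-looking Cholesky that writes each finished panel to a file-backed array. This only illustrates the out-of-core pattern under simplified assumptions; HSL_MA77's multifrontal algorithm, virtual-memory package, and file layout are far more sophisticated:

```python
import numpy as np

def out_of_core_cholesky(A, path, nb=2):
    """Right-looking dense Cholesky that stores the factor in a
    file-backed array, writing each panel as soon as it is computed,
    so the full factor need never reside in RAM.
    A hypothetical sketch of the out-of-core idea, not HSL_MA77."""
    n = A.shape[0]
    A = A.astype(float).copy()
    L = np.memmap(path, dtype=np.float64, mode="w+", shape=(n, n))
    for k in range(0, n, nb):
        j = min(k + nb, n)
        Lkk = np.linalg.cholesky(A[k:j, k:j])          # factor the panel
        if j < n:
            # panel below the diagonal block: solve X @ Lkk.T = A[j:, k:j]
            Lpan = np.linalg.solve(Lkk, A[j:, k:j].T).T
            A[j:, j:] -= Lpan @ Lpan.T                 # trailing update
            L[j:, k:j] = Lpan
        L[k:j, k:j] = Lkk
        L.flush()                                      # push the panel to disk
    return L
```

In a real out-of-core solver the trailing submatrix would also be staged to disk; here only the factor is file-backed, to keep the sketch short.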
Algorithm 8xx: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate
, 2006
A preliminary out-of-core extension of a parallel multifrontal solver
 In Euro-Par'06 Parallel Processing, Lecture Notes in Computer Science
, 2006
Abstract

Cited by 11 (5 self)
The memory usage of sparse direct solvers can be the bottleneck in solving large-scale problems. This paper describes a first implementation of an out-of-core extension to a parallel multifrontal solver (MUMPS). We show that larger problems can be solved on limited-memory machines with reasonable performance, and we illustrate the behaviour of our parallel out-of-core factorization. Then we use simulations to discuss how our algorithms can be modified to solve much larger problems.
Combinatorial problems in solving linear systems
, 2009
Abstract

Cited by 8 (3 self)
Numerical linear algebra and combinatorial optimization are vast subjects, as is their interaction. In virtually all cases there should be a notion of sparsity for a combinatorial problem to arise. Sparse matrices therefore form the basis of the interaction of these two seemingly disparate subjects. As the core of many of today's numerical linear algebra computations consists of the solution of sparse linear systems by direct or iterative methods, we survey some combinatorial problems, ideas, and algorithms relating to these computations. On the direct methods side, we discuss issues such as matrix ordering; bipartite matching and matrix scaling for better pivoting; and task assignment and scheduling for parallel multifrontal solvers. On the iterative methods side, we discuss preconditioning techniques including incomplete factorization preconditioners, support graph preconditioners, and algebraic multigrid. In a separate part, we discuss the block triangular form of sparse matrices.
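As a concrete instance of the matrix-ordering problems this survey covers, a simplified reverse Cuthill-McKee ordering (one classic bandwidth-reducing heuristic) fits in a few lines. The dictionary-of-sets graph representation and function names here are assumptions for illustration, not taken from the survey:

```python
from collections import deque

def reverse_cuthill_mckee(adj):
    """Reverse Cuthill-McKee ordering of an undirected graph given as
    {vertex: set_of_neighbours}. Simplified sketch of the classic
    bandwidth-reducing reordering heuristic."""
    visited, order = set(), []
    # start each connected component from a minimum-degree vertex
    for start in sorted(adj, key=lambda v: (len(adj[v]), v)):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:  # BFS, visiting neighbours in increasing-degree order
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v], key=lambda u: (len(adj[u]), u)):
                if w not in visited:
                    visited.add(w)
                    queue.append(w)
    return order[::-1]  # reversing the BFS order gives RCM

def bandwidth(adj, order):
    """Max distance between endpoints of any edge under the ordering."""
    pos = {v: i for i, v in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u in adj for v in adj[u])
```

On a path graph with scrambled labels, for example, the natural ordering has bandwidth 3 while the RCM ordering restores bandwidth 1, which is exactly the kind of fill-reducing effect orderings are chosen for in direct solvers.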
A preliminary analysis of the out-of-core solution phase of a parallel multifrontal approach
, 2006
Abstract

Cited by 6 (2 self)
We consider the parallel solution of sparse linear systems of equations in a limited-memory environment. A preliminary out-of-core version of a sparse multifrontal code called MUMPS (MUltifrontal Massively Parallel Solver) has been developed as part of a collaboration with members of the INRIA project GRAAL. In this context, we assume that the factors have been written to the hard disk during the factorization phase, and we discuss the design of an efficient solution phase. Two different approaches are presented for reading data from the disk, with a discussion of the advantages and drawbacks of each. Our work differs from and extends the work of [10] and [11] because, firstly, we consider a parallel out-of-core context and, secondly, we focus on the performance of the solve phase.
Experiences of Sparse Direct Symmetric Solvers
Abstract

Cited by 5 (0 self)
We recently carried out an extensive comparison of the performance of state-of-the-art sparse direct solvers for the numerical solution of symmetric linear systems of equations. Some of these solvers were written primarily as research codes, while others have been developed for commercial use. Our experiences of using the different packages to solve a wide range of problems arising from real applications were mixed. In this paper, we highlight some of these experiences with the aim of providing advice to both software developers and users of sparse direct solvers. We discuss key features that a direct solver should offer and conclude that while performance is an essential factor to consider when choosing a code, there are other features that a user should also consider looking for, and these vary significantly between packages.
Algebraic analysis of high-pass quantization
 ACM Transactions on Graphics
Abstract

Cited by 5 (2 self)
This paper presents an algebraic analysis of a mesh-compression technique called high-pass quantization [Sorkine et al. 2003]. In high-pass quantization, a rectangular matrix based on the mesh topological Laplacian is applied to the vectors of the Cartesian coordinates of a polygonal mesh. The resulting vectors, called δ-coordinates, are then quantized. The applied matrix is a function of the topology of the mesh and the indices of a small set of mesh vertices (anchors), but not of the location of the vertices. An approximation of the geometry can be reconstructed from the quantized δ-coordinates and the spatial locations of the anchors. In this paper we show how to algebraically bound the reconstruction error that this method generates. We show that the smallest singular value of the transformation matrix can be used to bound both the quantization error and the rounding error, which is due to the use of floating-point arithmetic. Furthermore, we prove a bound on this singular value. The bound is a function of the topology of the mesh and of the selected anchors. We also propose a new anchor-selection algorithm, inspired by this bound. We show experimentally that the method is effective and that the computed upper bound on the error is not too pessimistic.
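The construction the abstract describes, a rectangular matrix built by stacking the topological Laplacian on top of anchor rows, can be illustrated on a toy one-dimensional "mesh". The path graph, anchor choice, and variable names below are hypothetical stand-ins, not the paper's setup:

```python
import numpy as np

# Tiny 1-D stand-in for a mesh: a path of n vertices. The topological
# Laplacian depends only on connectivity, never on vertex positions.
n = 6
L = np.diag([1.0] + [2.0] * (n - 2) + [1.0])
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0

x = np.linspace(0.0, 5.0, n)     # one Cartesian coordinate per vertex
delta = L @ x                    # delta-coordinates: small on a smooth mesh

# Anchor two vertices and reconstruct the geometry from (delta, anchors)
# by least squares on the rectangular matrix [L; anchor rows].
anchors = [0, n - 1]
M = np.vstack([L, np.eye(n)[anchors]])
rhs = np.concatenate([delta, x[anchors]])
x_rec, *_ = np.linalg.lstsq(M, rhs, rcond=None)
```

With quantized δ-coordinates the reconstruction is only approximate, and the paper's point is that the approximation error scales with the reciprocal of the smallest singular value of M.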