Results 11–20 of 314
Sparse Matrix Libraries in C++ for High Performance Architectures
, 1994
Abstract

Cited by 55 (4 self)
We describe an object-oriented sparse matrix library in C++ designed for portability and performance across a wide class of machine architectures. Besides simplifying the subroutine interface, the object-oriented design allows the same driving code to be used for various sparse matrix formats, thus addressing many of the difficulties encountered with the typical approach to sparse matrix libraries. We also discuss the design of a C++ library for implementing various iterative methods for solving linear systems of equations. Performance results indicate that the C++ codes are competitive with optimized Fortran.

1 Introduction
Sparse matrices are pervasive in scientific and engineering application codes. They often arise from finite difference, finite element, or finite volume discretizations of PDEs (e.g., in computational fluid dynamics) or from discrete, network-type problems (e.g., in circuit simulation). Over the past two decades, a number of research efforts have resulted in spars...
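The format-independence idea in this abstract — one piece of driving code working against any storage format — can be sketched as follows. This is a minimal Python illustration of the object-oriented design, not the paper's C++ API; the class and method names (`SparseMatrix`, `CSRMatrix`, `COOMatrix`, `matvec`) are our own.

```python
# Sketch: a common sparse-matrix interface so driving code is format-agnostic.
# Names are illustrative, not the library's actual API.

class SparseMatrix:
    def matvec(self, x):
        raise NotImplementedError

class CSRMatrix(SparseMatrix):
    """Compressed Sparse Row: values, column indices, row pointers."""
    def __init__(self, vals, col_idx, row_ptr):
        self.vals, self.col_idx, self.row_ptr = vals, col_idx, row_ptr

    def matvec(self, x):
        n_rows = len(self.row_ptr) - 1
        y = [0.0] * n_rows
        for i in range(n_rows):
            for k in range(self.row_ptr[i], self.row_ptr[i + 1]):
                y[i] += self.vals[k] * x[self.col_idx[k]]
        return y

class COOMatrix(SparseMatrix):
    """Coordinate format: a list of (row, col, value) triples."""
    def __init__(self, entries, n_rows):
        self.entries, self.n_rows = entries, n_rows

    def matvec(self, x):
        y = [0.0] * self.n_rows
        for i, j, v in self.entries:
            y[i] += v * x[j]
        return y

def power_iteration_step(A: SparseMatrix, x):
    # "Driving code" that never inspects the storage format.
    y = A.matvec(x)
    norm = max(abs(v) for v in y) or 1.0
    return [v / norm for v in y]
```

The same `power_iteration_step` runs unchanged over either format, which is the portability point the abstract makes.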
Approximate Inverse Techniques for Block-Partitioned Matrices
 SIAM J. Sci. Comput
, 1995
Abstract

Cited by 44 (12 self)
This paper proposes some preconditioning options when the system matrix is in block-partitioned form. This form may arise naturally, for example from the incompressible Navier-Stokes equations, or may be imposed after a domain decomposition reordering. Approximate inverse techniques are used to generate sparse approximate solutions whenever these are needed in forming the preconditioner. The storage requirements for these preconditioners may be much less than for ILU preconditioners for tough, large-scale CFD problems. The numerical experiments reported show that these preconditioners can help us solve difficult linear systems whose coefficient matrices are highly indefinite.

1 Introduction
Consider the block partitioning of a matrix A, in the form

    A = [ B  F ]
        [ E  C ]    (1)

where the blocking naturally occurs due to the ordering of the equations and the variables. Matrices of this form arise in many applications, such as in the incompressible Navier-Stokes equations, where the sc...
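To make the block-preconditioning idea concrete, here is a small Python sketch for the 2x2 partition A = [B F; E C]: an approximate inverse M ≈ B⁻¹ is substituted wherever a solve with B is needed, including in the approximate Schur complement S ≈ C − E·M·F. We use scalar blocks and a truncated Neumann series for M purely for illustration; the paper's actual sparse approximate-inverse constructions are different.

```python
# Sketch of a block preconditioner for A = [[B, F], [E, C]] using an
# approximate inverse M ~ B^{-1}. Scalar blocks keep the example tiny;
# the construction of M here (Neumann series) is a stand-in.

def approx_inverse(B, omega=0.2, terms=25):
    # Truncated Neumann series for 1/B: M = omega * sum_k (1 - omega*B)^k,
    # which converges when |1 - omega*B| < 1.
    M, term = 0.0, omega
    for _ in range(terms):
        M += term
        term *= (1.0 - omega * B)
    return M

def block_precond_solve(B, F, E, C, r1, r2):
    """Apply a block LU-style preconditioner to the residual (r1, r2)."""
    M = approx_inverse(B)
    S = C - E * M * F              # approximate Schur complement
    z2 = (r2 - E * M * r1) / S     # eliminate the first block
    z1 = M * (r1 - F * z2)         # back-substitute
    return z1, z2
```

When M is (nearly) exact, applying this preconditioner to r = A·x recovers x, which is the sanity check one would use before embedding it in an iterative solver.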
Iterative Solution of Systems of Linear Equations Arising in the Context of Stochastic Finite Elements
Abstract

Cited by 41 (2 self)
The particular properties of systems of linear equations arising in the context of the Stochastic Finite Element Method (SFEM) motivate the customization of existing iterative solution algorithms. The implementation described in this paper has aimed at optimizing data management, MATVEC operations and preconditioning strategies. It turns out that SFEM systems can be solved with much less effort than their size suggests. The main idea is based on the fact that the full system matrix consists of few, relatively small submatrices with identical dimensions and sparsity pattern. This makes it very efficient to perform matrix-vector multiplications at the submatrix level and to avoid the assembly of the full coefficient matrix. © 2000 Elsevier Science Ltd. All rights reserved.
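The submatrix-level matvec can be sketched as follows for a system matrix of the form A = Σᵢ Gᵢ ⊗ Kᵢ, a common way such block structure arises; the names Gᵢ (small coupling blocks) and Kᵢ (shared-sparsity submatrices) are our notation, and dense lists stand in for the sparse Kᵢ.

```python
# Sketch: y = A x for A = sum_i (G_i kron K_i), computed block-wise so the
# full coefficient matrix is never assembled. Names G_i, K_i are illustrative.

def matvec_dense(K, x):
    return [sum(K[r][c] * x[c] for c in range(len(x))) for r in range(len(K))]

def sfem_matvec(Gs, Ks, x_blocks):
    # y_b = sum_i sum_j G_i[b][j] * (K_i @ x_j)
    nb = len(x_blocks)          # number of vector blocks
    n = len(x_blocks[0])        # size of each block
    y = [[0.0] * n for _ in range(nb)]
    for G, K in zip(Gs, Ks):
        # Each small K_i is applied once per block of x and the results reused.
        Kx = [matvec_dense(K, xj) for xj in x_blocks]
        for b in range(nb):
            for j in range(nb):
                g = G[b][j]
                if g:
                    for r in range(n):
                        y[b][r] += g * Kx[j][r]
    return y
```

Only the small Gᵢ and Kᵢ are stored, which is the storage and data-management saving the abstract describes.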
A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Manuscript, Federal Reserve Board of Governors
Abstract

Cited by 38 (2 self)
linear saddle point models. The algorithm has proved useful in a wide array of applications including analyzing linear perfect foresight models, providing initial solutions and asymptotic constraints for nonlinear models. The algorithm solves linear problems with dozens of lags and leads and hundreds of equations in seconds. The technique works well for both symbolic algebra and numerical computation. Although widely used at the Federal Reserve, few outside the central bank know about or have used the algorithm. This paper attempts to present the current algorithm in a more accessible format in the hope that economists outside the Federal Reserve may also find it useful. In addition, over the years there have been undocumented changes in approach that have improved the efficiency and reliability of the algorithm. This paper describes the present state of development of this set of tools.
Numeric Domains with Summarized Dimensions
, 2004
Abstract

Cited by 37 (13 self)
We introduce a systematic approach to designing summarizing abstract numeric domains from existing numeric domains. Summarizing domains use summary dimensions to represent potentially unbounded collections of numeric objects. Such domains are of benefit to analyses that verify properties of systems with an unbounded number of numeric objects, such as shape analysis, or systems in which the number of numeric objects is bounded, but large.
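The core mechanism of summarization — a summary dimension standing for many concrete objects, so assignments to it must be weak updates (joins) rather than overwrites — can be sketched on top of a plain interval domain. The class and method names below (`AbstractState`, `assign`, `fold`) are invented for illustration and are not the paper's API.

```python
# Sketch: an interval domain with summary dimensions. A summary dimension
# abstracts a collection of objects, so writing to it joins the new value
# with the old one (a weak update) instead of replacing it.

def join(a, b):
    """Join of two intervals (lo, hi)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

class AbstractState:
    def __init__(self):
        self.intervals = {}   # dimension name -> (lo, hi)
        self.summary = set()  # dimensions summarizing more than one object

    def assign(self, dim, itv):
        if dim in self.summary:
            # Weak update: only *some* summarized object changed, so the
            # old range must survive in the approximation.
            self.intervals[dim] = join(self.intervals[dim], itv)
        else:
            self.intervals[dim] = itv   # strong update on a single object

    def fold(self, dim1, dim2):
        # Merge dim2 into dim1; dim1 becomes a summary dimension.
        self.intervals[dim1] = join(self.intervals[dim1],
                                    self.intervals.pop(dim2))
        self.summary.add(dim1)
```

After folding, precision is deliberately lost on the summary dimension; that loss is the price of representing an unbounded collection with a fixed number of dimensions.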
Matrix transformations for computing rightmost eigenvalues of large sparse nonsymmetric eigenvalue problems
 IMA J. Numer. Anal
, 1996
Abstract

Cited by 33 (7 self)
This paper gives an overview of matrix transformations for finding rightmost eigenvalues of Ax = λx and Ax = λBx with A and B real nonsymmetric and B possibly singular. The aim is not to present new material, but to introduce the reader to the application of matrix transformations to the solution of large-scale eigenvalue problems. The paper explains and discusses the use of Chebyshev polynomials and the shift-invert and Cayley transforms as matrix transformations for problems that arise from the discretization of partial differential equations. A few other techniques are described. The reliability of iterative methods is also dealt with by introducing the concept of domain of confidence or trust region. This overview gives the reader an idea of the benefits and the drawbacks of several transformation techniques. We also briefly discuss the current software
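The effect of these transformations is easiest to see on the spectrum itself. A short sketch, with illustrative eigenvalues of our own choosing: shift-invert maps λ to 1/(λ − σ), so eigenvalues near the shift σ become the largest in modulus, and the Cayley transform maps λ to (λ − τ)/(λ − σ), which sends one half-plane inside the unit circle and the other outside.

```python
# Sketch: spectral maps of the shift-invert and Cayley transforms.
# Eigenvalues below are invented for illustration.

def shift_invert(lam, sigma):
    # Eigenvalue of (A - sigma*I)^{-1} corresponding to eigenvalue lam of A.
    return 1.0 / (lam - sigma)

def cayley(lam, sigma, tau):
    # Eigenvalue of (A - sigma*I)^{-1} (A - tau*I).
    return (lam - tau) / (lam - sigma)

eigs = [-10.0 + 0j, -2.0 + 3j, -0.1 + 0j]   # rightmost eigenvalue is -0.1
sigma = 0.0
mapped = [shift_invert(l, sigma) for l in eigs]
# The eigenvalue closest to sigma (the rightmost one here) now dominates
# in modulus, so Arnoldi-type iterations converge to it first.
```

With real σ > τ, the Cayley transform places eigenvalues with Re λ > (σ + τ)/2 outside the unit circle and the rest inside, which is why it is suited to detecting rightmost eigenvalues.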
Overlapping domain decomposition algorithms for general sparse matrices
, 1996
Abstract

Cited by 30 (14 self)
Domain decomposition methods for Finite Element problems using a partition based on the underlying finite element mesh have been extensively studied. In this paper, we discuss algebraic extensions of the class of overlapping domain decomposition algorithms for general sparse matrices. The subproblems are created with an overlapping partition of the graph corresponding to the sparsity structure of the matrix. These algebraic domain decomposition methods are especially useful for unstructured mesh problems. We also discuss some difficulties encountered in the algebraic extension, particularly the issues related to the coarse solver.

Key words. Sparse matrix, iterative methods, preconditioning, graph partitioning, domain decomposition.

1. Introduction. The
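The algebraic construction of the overlap can be sketched directly: build the adjacency graph of the sparsity pattern, then grow each subdomain by a fixed number of graph levels. The function names below are ours, and a simple breadth-first expansion stands in for whatever partitioner the paper uses.

```python
# Sketch: overlapping subdomains from the sparsity graph of a matrix.
# A level-k overlap adds every vertex within k edges of the subdomain.

def adjacency(entries, n):
    """Adjacency lists from (i, j) positions of nonzeros in an n x n matrix."""
    adj = [set() for _ in range(n)]
    for i, j in entries:
        if i != j:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def expand_overlap(subdomain, adj, levels=1):
    """Grow a subdomain by `levels` breadth-first layers of the graph."""
    current = set(subdomain)
    frontier = set(subdomain)
    for _ in range(levels):
        frontier = {w for v in frontier for w in adj[v]} - current
        current |= frontier
    return current
```

Because only the sparsity pattern is consulted, this works for unstructured problems where no mesh is available, which is the point of the algebraic extension.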
Performance and scalability of preconditioned conjugate gradient methods on parallel computers
 Department of Computer Science, University of Minnesota
, 1995
When cache blocking sparse matrix vector multiply works and why
 In Proceedings of the PARA'04 Workshop on the State-of-the-art in Scientific Computing
, 2004
Abstract

Cited by 28 (5 self)
We present new performance models and more compact data structures for cache blocking when applied to sparse matrix-vector multiply (SpM×V). We extend our prior models by relaxing the assumption that the vectors fit in cache and find that the new models are accurate enough to predict optimum block sizes. In addition, we determine criteria that predict when cache blocking improves performance. We conclude with architectural suggestions that would make memory systems execute SpM×V faster.
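Cache blocking for SpM×V can be sketched structurally: the nonzeros are grouped by column block, so each block touches only one contiguous slice of the source vector x, which can then stay in cache while that block's nonzeros stream through. The layout and function names below are illustrative; the paper's actual compact data structures differ.

```python
# Sketch: column-blocked SpMxV. Each block's nonzeros reference only a
# cblock-wide slice of x, the locality that cache blocking exploits.

def column_blocks(entries, n_cols, cblock):
    """Group (row, col, val) triples by column block of width cblock."""
    nb = (n_cols + cblock - 1) // cblock
    blocks = [[] for _ in range(nb)]
    for i, j, v in entries:
        blocks[j // cblock].append((i, j, v))
    return blocks

def cache_blocked_spmv(blocks, x, n_rows):
    y = [0.0] * n_rows
    for entries in blocks:
        # All column indices in this block fall in one contiguous slice of x.
        for i, j, v in entries:
            y[i] += v * x[j]
    return y
```

In Python the reordering only changes the traversal; on real hardware, keeping each x-slice resident is what the performance models in the paper predict the benefit of.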
Using Generalized Cayley Transformations Within An Inexact Rational Krylov Sequence Method
 SIAM J. MATRIX ANAL. APPL
Abstract

Cited by 28 (3 self)
The rational Krylov sequence (RKS) method is a generalization of Arnoldi's method. It constructs an orthogonal reduction of a matrix pencil into an upper Hessenberg pencil. The RKS method is useful when the matrix pencil may be efficiently factored. This article considers approximately solving the resulting linear systems with iterative methods. We show that a Cayley transformation leads to a more efficient and robust eigensolver than the usual shift-invert transformation when the linear systems are solved inexactly within the RKS method. A relationship with the recently introduced Jacobi-Davidson method is also established.