Results 1–10 of 191
The University of Florida sparse matrix collection
NA DIGEST, 1997. Cited by 538 (19 self).

Abstract
The University of Florida Sparse Matrix Collection is a large, widely available, and actively growing set of sparse matrices that arise in real applications. Its matrices cover a wide spectrum of problem domains, both those arising from problems with underlying 2D or 3D geometry (structural engineering, computational fluid dynamics, model reduction, electromagnetics, semiconductor devices, thermodynamics, materials, acoustics, computer graphics/vision, robotics/kinematics, and other discretizations) and those that typically do not have such geometry (optimization, circuit simulation, networks and graphs, economic and financial modeling, theoretical and quantum chemistry, chemical process simulation, mathematics and statistics, and power networks). The collection meets a vital need that artificially generated matrices cannot meet, and is widely used by the sparse matrix algorithms community for the development and performance evaluation of sparse matrix algorithms. It also includes software for accessing and managing the collection from MATLAB, Fortran, and C.
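The matrices in the collection are distributed in standard exchange formats such as Matrix Market. As a minimal sketch (the 3×3 matrix and the temporary path are placeholders, not collection entries), SciPy can round-trip that format:

```python
import os
import tempfile

import numpy as np
from scipy.io import mmread, mmwrite
from scipy.sparse import csr_matrix

# A small sparse matrix standing in for a collection entry.
A = csr_matrix(np.array([[4.0, 0.0, 1.0],
                         [0.0, 3.0, 0.0],
                         [1.0, 0.0, 2.0]]))

# Matrix Market (.mtx) is one of the exchange formats the collection uses.
path = os.path.join(tempfile.mkdtemp(), "example.mtx")
mmwrite(path, A)
B = mmread(path).tocsr()

print((A != B).nnz)  # 0: the round trip preserves the matrix
```

A real entry would be downloaded from the collection's site and read the same way with `mmread`.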
Preconditioning techniques for large linear systems: A survey
J. COMPUT. PHYS, 2002. Cited by 189 (5 self).

Abstract
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
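As a toy illustration of one surveyed family, incomplete factorization, the sketch below builds an ILU preconditioner with SciPy's `spilu` and uses it inside GMRES; the tridiagonal test matrix and drop tolerance are illustrative choices, not taken from the survey:

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import spilu, LinearOperator, gmres

n = 100
# 1-D Poisson-type tridiagonal system, a standard toy problem.
A = csc_matrix(diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)))
b = np.ones(n)

# Incomplete LU factors of A serve as the preconditioner M ~ A^{-1}.
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))  # info == 0 means converged
```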
The fast multipole method: numerical implementation
J. Comput. Phys, 2000. Cited by 85 (2 self).

Abstract
We study integral methods applied to the resolution of the Maxwell equations, where the linear system is solved using an iterative method that requires only matrix–vector products. The fast multipole method (FMM) is one of the most efficient methods used to perform matrix–vector products and accelerate the resolution of the linear system. A problem involving N degrees of freedom may be solved in C · N_iter · N log N floating-point operations, where N_iter is the number of iterations and C is a constant depending on the implementation of the method. In this article several techniques allowing one to reduce the constant C are analyzed. This reduction implies a lower total CPU time and a larger range of application of the FMM. In particular, new interpolation and anterpolation schemes are proposed which greatly improve on previous algorithms. Several numerical tests are also described. These confirm the efficiency and the theoretical ...
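The property the FMM exploits, that the Krylov solver needs only a black-box matrix–vector product, can be sketched with SciPy's `LinearOperator`; the diagonal toy operator below merely stands in for an O(N log N) multipole evaluation:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 50
rng = np.random.default_rng(0)
D = np.diag(np.arange(1.0, n + 1))  # well-conditioned stand-in system

# GMRES never forms A explicitly; it only calls this product. In an FMM
# code, this callback would be the fast multipole evaluation.
A = LinearOperator((n, n), matvec=lambda v: D @ v)

b = rng.standard_normal(n)
x, info = gmres(A, b)
print(info, np.linalg.norm(D @ x - b))
```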
A Priori Sparsity Patterns For Parallel Sparse Approximate Inverse Preconditioners
1998. Cited by 71 (6 self).

Abstract
Parallel algorithms for computing sparse approximations to the inverse of a sparse matrix either use a prescribed sparsity pattern for the approximate inverse, or attempt to generate a good pattern as part of the algorithm. This paper demonstrates that for PDE problems, the patterns of powers of sparsified matrices (PSMs) can be used a priori as effective approximate inverse patterns, and that the additional effort of adaptive sparsity pattern calculations may not be required. PSM patterns are related to various other approximate inverse sparsity patterns through matrix graph theory and heuristics about the PDE's Green's function. A parallel implementation shows that PSM-patterned approximate inverses are significantly faster to construct than approximate inverses constructed adaptively, while often giving preconditioners of comparable quality.

Key words. preconditioned iterative methods, sparse approximate inverses, graph theory, parallel computing

AMS subject classifications. 65F10, ...
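The PSM construction can be sketched in a few lines: sparsify A by dropping small entries, then take the nonzero pattern of a low power as the a priori pattern (the density, threshold, and power k = 2 below are illustrative choices, not the paper's):

```python
import numpy as np
from scipy.sparse import random as sprandom, identity

n = 60
# Random sparse test matrix with a nonzero diagonal.
A = sprandom(n, n, density=0.05, random_state=1, format="csr") \
    + identity(n, format="csr")

# Step 1: sparsify -- drop entries smaller than a threshold tau.
tau = 0.2
S = A.copy()
S.data[np.abs(S.data) < tau] = 0.0
S.eliminate_zeros()

# Step 2: use the pattern of S^2 as the approximate-inverse pattern.
P = (S @ S).astype(bool)
print(S.nnz, P.nnz)  # the power's pattern contains the sparsified pattern
```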
Sparse Approximate Inverse Preconditioning For Dense Linear Systems Arising In Computational Electromagnetics
Numerical Algorithms, 1997. Cited by 59 (20 self).

Abstract
We investigate the use of sparse approximate inverse preconditioners for the iterative solution of linear systems with dense complex coefficient matrices arising from industrial electromagnetic problems. An approximate inverse is computed via a Frobenius norm approach with a prescribed nonzero pattern. Some strategies for determining the nonzero pattern of an approximate inverse are described. The results of numerical experiments suggest that sparse approximate inverse preconditioning is a viable approach for the solution of large-scale dense linear systems on parallel computers.

Key words. Dense linear systems, preconditioning, sparse approximate inverses, complex symmetric matrices, scattering calculations, Krylov subspace methods, parallel computing.

AMS(MOS) subject classification. 65F10, 65F50, 65R20, 65N38, 78-08, 78A50, 78A55.

1. Introduction. In the last decade, a significant amount of effort has been spent on the simulation of electromagnetic wave propagation phenomena to ad...
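The Frobenius norm approach reduces to independent small least-squares problems, one per column m_j of the approximate inverse: minimize ||A m_j − e_j||₂ over the entries allowed by the pattern. A dense toy sketch, using the pattern of A itself as one simple prescribed-pattern strategy (the 3×3 real matrix is illustrative, not an electromagnetic system):

```python
import numpy as np
from scipy.sparse import csr_matrix, lil_matrix

A = csr_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
n = A.shape[0]
Acsc = A.tocsc()
Ad = A.toarray()

M = lil_matrix((n, n))
for j in range(n):
    # Prescribed pattern for column j: the nonzero positions of A[:, j].
    pat = Acsc.indices[Acsc.indptr[j]:Acsc.indptr[j + 1]]
    e = np.zeros(n)
    e[j] = 1.0
    # Least squares over the columns of A selected by the pattern.
    m, *_ = np.linalg.lstsq(Ad[:, pat], e, rcond=None)
    for r, val in zip(pat, m):
        M[r, j] = val

M = M.tocsr()
print(np.linalg.norm((A @ M).toarray() - np.eye(n)))  # small: A @ M ~ I
```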
Preconditioning highly indefinite and nonsymmetric matrices
SIAM J. SCI. COMPUT, 2000. Cited by 55 (3 self).

Abstract
Standard preconditioners, like incomplete factorizations, perform well when the coefficient matrix is diagonally dominant, but often fail on general sparse matrices. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. The permutations and scalings are those developed by Olschowka and Neumaier [Linear Algebra Appl., 240 (1996), pp. 131–151] and by Duff and ...
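A structural cousin of these permutations is easy to sketch with SciPy: `maximum_bipartite_matching` finds a row permutation giving a zero-free diagonal. (The Olschowka–Neumaier and Duff–Koster algorithms additionally maximize the magnitudes placed on the diagonal and produce scalings, which this sketch does not attempt; the 3×3 matrix is made up.)

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

# A matrix whose diagonal has zeros that a row permutation can repair.
A = csr_matrix(np.array([[0.0, 5.0, 0.0],
                         [3.0, 0.0, 0.0],
                         [0.0, 2.0, 4.0]]))

# perm[j] is the row matched to column j; A[perm] has a nonzero diagonal
# whenever a perfect matching exists.
perm = maximum_bipartite_matching(A, perm_type="row")
PA = A[perm]

print(PA.toarray().diagonal())
```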
Robust approximate inverse preconditioning for the conjugate gradient method
SIAM J. SCI. COMPUT, 2000. Cited by 55 (11 self).

Abstract
We present a variant of the AINV factorized sparse approximate inverse algorithm which is applicable to any symmetric positive definite matrix. The new preconditioner is breakdown-free and, when used in conjunction with the conjugate gradient method, results in a reliable solver for highly ill-conditioned linear systems. We also investigate an alternative approach to a stable approximate inverse algorithm, based on the idea of diagonally compensated reduction of matrix entries. The results of numerical tests on challenging linear systems arising from finite element modeling of elasticity and diffusion problems are presented.
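The diagonally compensated reduction idea can be sketched in plain NumPy: an off-diagonal entry below a threshold is dropped and its value folded into the diagonal of its row, so that row sums are preserved (a simplified variant for illustration only; the matrix and threshold are made up):

```python
import numpy as np

def diag_compensated_reduction(A, tau):
    """Drop off-diagonal entries with |a_ij| < tau, folding each dropped
    value into the diagonal of its row so row sums are preserved."""
    B = A.copy()
    n = B.shape[0]
    for i in range(n):
        for j in range(n):
            if i != j and B[i, j] != 0.0 and abs(B[i, j]) < tau:
                B[i, i] += B[i, j]
                B[i, j] = 0.0
    return B

A = np.array([[4.0, -0.05, 1.0],
              [-0.05, 3.0, -0.02],
              [1.0, -0.02, 2.0]])
B = diag_compensated_reduction(A, tau=0.1)
print(B)  # small couplings moved onto the diagonal
```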
An efficient TV-L1 algorithm for deblurring multichannel images corrupted by impulsive noise
SIAM J. SCI. COMPUT, 2009. Cited by 50 (8 self).

Abstract
We extend the alternating minimization algorithm recently proposed in [38, 39] to the case of recovering blurry multichannel (color) images corrupted by impulsive rather than Gaussian noise. The algorithm minimizes the sum of a multichannel extension of total variation (TV), either isotropic or anisotropic, and a data fidelity term measured in the L1-norm. We derive the algorithm by applying the well-known quadratic penalty function technique and prove attractive convergence properties, including finite convergence for some variables and global q-linear convergence. Under periodic boundary conditions, the main computational requirements of the algorithm are fast Fourier transforms and a low-complexity Gaussian elimination procedure. Numerical results on images with different blurs and impulsive noise are presented to demonstrate the efficiency of the algorithm. In addition, it is numerically compared to an algorithm recently proposed in [20] that uses a linear program and an interior point method for recovering grayscale images.
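As a small sketch of the objective being minimized (not of the authors' alternating minimization itself), the anisotropic TV and L1 fidelity terms for a single-channel image can be written with forward differences; the image, `lam`, and the noise pixel are toy choices:

```python
import numpy as np

def anisotropic_tv(u):
    """Anisotropic TV: sum of absolute forward differences in x and y."""
    return np.abs(np.diff(u, axis=1)).sum() + np.abs(np.diff(u, axis=0)).sum()

def tvl1_objective(u, f, lam):
    """TV-L1 objective: TV(u) + lam * ||u - f||_1."""
    return anisotropic_tv(u) + lam * np.abs(u - f).sum()

u = np.zeros((4, 4))          # candidate restoration (constant image)
f = np.zeros((4, 4))
f[1, 2] = 1.0                 # observed image with one impulsive-noise pixel
print(tvl1_objective(u, f, lam=1.0))  # 1.0: TV(u) = 0, fidelity = 1
```

The L1 fidelity is what makes the model robust to impulsive (salt-and-pepper) noise: an outlier pixel contributes only its absolute deviation, not its square.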
Bounds for the entries of matrix functions with applications to preconditioning
BIT, 1999. Cited by 44 (15 self).

Abstract
Let A be a symmetric matrix and let f be a smooth function defined on an interval containing the spectrum of A. Generalizing a well-known result of Demko, Moss and Smith on the decay of the inverse, we show that when A is banded, the entries of f(A) are bounded in an exponentially decaying manner away from the main diagonal. Bounds obtained by representing the entries of f(A) in terms of Riemann–Stieltjes integrals and by approximating such integrals by Gaussian quadrature rules are also considered. Applications of these bounds to preconditioning are suggested and illustrated by a few numerical examples.
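The decay phenomenon is easy to observe numerically (a sketch of the effect, not of the paper's bounds): for a banded symmetric A, the entries of f(A) = exp(A) shrink rapidly away from the diagonal:

```python
import numpy as np
from scipy.linalg import expm

n = 30
# Tridiagonal (banded) symmetric matrix.
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))

F = expm(A)
# First-row entries of exp(A) decay away from the diagonal.
print(abs(F[0, 1]), abs(F[0, 5]), abs(F[0, 15]))
```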