Results 1–10 of 55
Preconditioning techniques for large linear systems: A survey
J. Comput. Phys., 2002
Cited by 189 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
Preconditioning highly indefinite and nonsymmetric matrices
SIAM J. Sci. Comput., 2000
Cited by 55 (3 self)
Standard preconditioners, like incomplete factorizations, perform well when the coefficient matrix is diagonally dominant, but often fail on general sparse matrices. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. The permutations and scalings are those developed by Olschowka and Neumaier [Linear Algebra Appl., 240 (1996), pp. 131–151] and by Duff and …
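The permutation half of this preprocessing can be sketched as a maximum-weight bipartite matching on log-magnitudes (a minimal illustration, not the authors' code; scaling is omitted, and `large_diagonal_permutation` is a name chosen here):

```python
# Sketch of permuting large entries to the diagonal: choose a row permutation
# that maximizes the product of the magnitudes of the diagonal entries, i.e.
# a maximum-weight bipartite matching on log|a_ij|.
import numpy as np
from scipy.optimize import linear_sum_assignment

def large_diagonal_permutation(A):
    """Row permutation p such that A[p, :] has large diagonal entries."""
    absA = np.abs(A).astype(float)
    weights = np.full(A.shape, -1e6)           # effectively forbids zero entries
    weights[absA > 0] = np.log(absA[absA > 0])
    rows, cols = linear_sum_assignment(weights, maximize=True)
    p = np.empty(A.shape[0], dtype=int)
    p[cols] = rows                             # row rows[k] moves to position cols[k]
    return p

A = np.array([[0.0, 5.0, 1.0],
              [4.0, 0.1, 0.0],
              [0.2, 0.0, 3.0]])
p = large_diagonal_permutation(A)
PA = A[p, :]                                   # diagonal is now 4, 5, 3
```

Maximizing the sum of log-magnitudes is the same as maximizing the product of the diagonal magnitudes, which is the criterion behind this family of methods.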
Parallel Implementation and Practical Use of Sparse Approximate Inverse Preconditioners With a Priori Sparsity Patterns
Int. J. High Perf. Comput. Appl., 2001
Cited by 30 (2 self)
This paper describes and tests a parallel, message-passing code for constructing sparse approximate inverse preconditioners using Frobenius norm minimization. The sparsity patterns of the preconditioners are chosen as the patterns of powers of sparsified matrices. Sparsification is necessary when powers of a matrix have a large number of nonzeros, which makes the approximate inverse computation expensive. For our test problems, the minimum solution time is achieved with approximate inverses that have fewer than twice the number of nonzeros of the original matrix; beyond that, the additional accuracy does not compensate for the increased cost per iteration. The results lead to a better understanding of how to use these methods and of how well they work in practice. In addition, the paper describes programming techniques required for high performance, including one-sided communication, local coordinate numbering, and load repartitioning.
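Once a sparsity pattern is fixed, the Frobenius norm minimization decouples into an independent small least-squares problem per column, which is what makes the construction embarrassingly parallel. A minimal dense sketch (the pattern of A itself stands in for the paper's powers of sparsified matrices; `spai` is a name chosen here):

```python
# Minimal sketch of a sparse approximate inverse by Frobenius norm
# minimization with a prescribed pattern: min ||AM - I||_F decouples into
# one small least-squares problem per column of M.
import numpy as np

def spai(A, pattern):
    """Approximate inverse M with column j supported on pattern[:, j]."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = np.flatnonzero(pattern[:, j])     # allowed entries of column j
        e = np.zeros(n)
        e[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = m
    return M

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
M = spai(A, pattern=(A != 0))                 # a priori pattern: pattern of A
residual = np.linalg.norm(A @ M - np.eye(3))
```

Denser patterns shrink the residual but raise the cost per iteration, which is the trade-off the paper quantifies.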
On the relations between ILUs and factored approximate inverses
2001
Cited by 25 (5 self)
This paper discusses some relationships between incomplete LU (ILU) factorization techniques and factored sparse approximate inverse (AINV) techniques. While ILU factorizations compute approximate LU factors of the coefficient matrix A, AINV techniques aim at building triangular matrices Z and W such that W^T A Z is approximately diagonal. The paper shows that certain forms of approximate inverse techniques amount to approximately inverting the triangular factors obtained from some variants of incomplete LU factorization of the original matrix. A few useful, already known applications of these relationships are also reviewed.
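The identity behind this relationship can be checked on a small example with exact dense factors: if A = L D U with unit triangular L and U, then Z = U^{-1} and W = L^{-T} satisfy W^T A Z = D, and AINV-type methods build sparse approximations of Z and W directly. A sketch:

```python
# Exact-factor illustration of the ILU/AINV relationship: with A = L D U
# (L, U unit triangular), Z = U^{-1} and W = L^{-T} give W^T A Z = D.
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
P, L, U = lu(A)                      # A = P @ L @ U; no pivoting occurs here
D = np.diag(np.diag(U))
Uu = np.linalg.inv(D) @ U            # unit upper triangular factor
Z = np.linalg.inv(Uu)                # AINV builds a sparse approximation of this
W = np.linalg.inv(L).T               # ... and of this
D_rec = W.T @ A @ Z                  # diagonal, equal to D
```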
A Robust Incomplete Factorization Preconditioner for Positive Definite Matrices
2001
Cited by 25 (4 self)
… In this paper we introduce a preconditioner that strikes a compromise between these two extremes …
Preconditioning KKT Systems
2002
Cited by 15 (0 self)
This research presents new preconditioners for linear systems. We proceed from the most general case to the very specific problem area of sparse optimal control. In the first, most general approach, we assume only that the coefficient matrix is nonsingular. We target highly indefinite, nonsymmetric problems that cause difficulties for preconditioned iterative solvers and on which standard preconditioners, like incomplete factorizations, often fail. We experiment with nonsymmetric permutations and scalings aimed at placing large entries on the diagonal in the context of preconditioning for general sparse matrices. Our numerical experiments indicate that the reliability and performance of preconditioned iterative solvers are greatly enhanced by such preprocessing.

Secondly, we present two new preconditioners for KKT systems. KKT systems arise in areas such as quadratic programming, sparse optimal control, and mixed finite element formulations. Our preconditioners approximate a constraint preconditioner with incomplete factorizations for the normal equations. Numerical experiments compare these two preconditioners with exact constraint preconditioning and with the approach described above of permuting large entries to the diagonal.

Finally, we turn to a specific problem area: sparse optimal control. Many optimal control problems are broken into several phases, and within a phase most variables and constraints depend only on nearby variables and constraints. However, free initial and final times and time-independent parameters affect variables and constraints throughout a phase, resulting in dense factored blocks in the KKT matrix. We drop fill due to these variables to reduce density within each phase. The resulting preconditioner is tightly banded and nearly block tridiagonal. Numerical experiments demonstrate that the preconditioners are effective, with very little fill in the factorization.
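The constraint-preconditioning idea can be illustrated on a dense toy KKT system: keep the constraint blocks B exact and replace the (1,1) block H by a cheap approximation G, here diag(H). With m constraints, P^{-1}K is known to have the eigenvalue 1 with algebraic multiplicity 2m. The sizes and the diagonal choice of G below are illustrative assumptions, not taken from the thesis:

```python
# Toy constraint preconditioner for K = [[H, B^T], [B, 0]]: approximate H by
# G = diag(H), keep B exact.  P^{-1} K then has eigenvalue 1 with multiplicity
# at least 2m, which is what clusters the spectrum for Krylov solvers.
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2
M0 = rng.standard_normal((n, n))
H = M0 @ M0.T + n * np.eye(n)                       # SPD (1,1) block
B = rng.standard_normal((m, n))                     # full-rank constraints
Zb = np.zeros((m, m))
K = np.block([[H, B.T], [B, Zb]])
P = np.block([[np.diag(np.diag(H)), B.T], [B, Zb]])  # constraint preconditioner
eigs = np.linalg.eigvals(np.linalg.solve(P, K))
n_unit = int(np.sum(np.isclose(eigs, 1.0, atol=1e-4)))
```

The remaining n − m eigenvalues are governed by how well G approximates H on the nullspace of B, which is where an incomplete factorization improves on the diagonal choice used here.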
An Assessment of Some Preconditioning Techniques in Shell Problems
Comm. Numer. Methods Engrg., 1998
Approximate inverse preconditioning for shifted linear systems
BIT, 2003
Cited by 14 (1 self)
In this paper we consider the problem of preconditioning symmetric positive definite matrices of the form A_α = A + αI, where α > 0. We discuss how to cheaply modify an existing sparse approximate inverse preconditioner for A in order to obtain a preconditioner for A_α. Numerical experiments illustrating the performance of the proposed approaches are presented.
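One cheap update in this spirit: if a factored approximate inverse A^{-1} ≈ Z D^{-1} Z^T is available, reuse Z and shift only the diagonal, M_α = Z (D + αI)^{-1} Z^T. The sketch below uses exact dense factors purely for illustration and is not the paper's implementation:

```python
# Shifted-system preconditioner update: from A^{-1} = Z D^{-1} Z^T (here built
# exactly from a root-free Cholesky A = L D L^T, with Z = L^{-T}), precondition
# A + alpha*I by reusing Z and replacing D with D + alpha*I.
import numpy as np

def factored_inverse(A):
    """Return unit upper triangular Z and diagonal d with A^{-1} = Z diag(1/d) Z^T."""
    Lc = np.linalg.cholesky(A)            # A = Lc @ Lc.T
    d = np.diag(Lc) ** 2
    L = Lc / np.diag(Lc)                  # unit lower triangular
    return np.linalg.inv(L).T, d

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
alpha = 0.5
Z, d = factored_inverse(A)
A_alpha = A + alpha * np.eye(3)
M_alpha = Z @ np.diag(1.0 / (d + alpha)) @ Z.T    # updated, Z reused as-is
cond_prec = np.linalg.cond(M_alpha @ A_alpha)
cond_orig = np.linalg.cond(A_alpha)
```

The update costs only n diagonal shifts, against recomputing the whole approximate inverse for each α.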
A robust preconditioner with low memory requirements for large sparse least squares problems
SIAM J. Sci. Comput., 2002
Cited by 14 (0 self)
This paper describes a technique for constructing robust preconditioners for the CGLS method applied to the solution of large and sparse least squares problems. The algorithm computes an incomplete LDL^T factorization of the normal-equations matrix without the need to form the normal matrix itself. The preconditioner is reliable (pivot breakdowns cannot occur) and has low intermediate storage requirements. Numerical experiments illustrating the performance of the preconditioner are presented. A comparison with incomplete QR preconditioners is also included.
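For context, CGLS itself only ever applies A and A^T, and a normal-equations preconditioner can likewise be assembled without forming A^T A. The sketch below is not the paper's incomplete LDL^T: as a minimal stand-in it uses the diagonal of A^T A, computed directly from the columns of A, and `pcgls` is a name chosen here:

```python
# Minimal preconditioned CGLS sketch.  The diagonal of A^T A is assembled
# directly from the columns of A (the normal matrix itself is never formed)
# and used as a preconditioner for the normal equations.
import numpy as np

def pcgls(A, b, d, iters=50, tol=1e-10):
    """CGLS on min ||Ax - b||_2 with diagonal preconditioner d ≈ diag(A^T A)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                 # residual in the b-space
    s = (A.T @ r) / d             # preconditioned normal-equations residual
    p = s.copy()
    gamma = s @ (d * s)
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = (A.T @ r) / d
        gamma_new = s @ (d * s)
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
d = np.einsum("ij,ij->j", A, A)           # diag(A^T A) from columns of A
x = pcgls(A, b, d)
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
```

Replacing the division by d with a triangular solve against incomplete LDL^T factors recovers the structure of the method the paper studies.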
Partitioning sparse matrices for parallel preconditioned iterative methods
SIAM Journal on Scientific Computing, 2004
Cited by 14 (9 self)
This paper addresses the parallelization of preconditioned iterative methods that use explicit preconditioners such as approximate inverses. Parallelizing a full step of these methods requires the coefficient and preconditioner matrices to be well partitioned. We first show that different methods impose different partitioning requirements on the matrices. We then develop hypergraph models to meet those requirements; in particular, models that enable us to obtain partitionings of the coefficient and preconditioner matrices simultaneously. Experiments on a set of unsymmetric sparse matrices show that the proposed models yield effective partitioning results. A parallel implementation of the right-preconditioned BiCGStab method on a PC cluster verifies that the theoretical gains obtained by the models hold in practice.
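The metric such hypergraph models minimize can be made concrete. In the column-net model for a row-parallel product y = Ax, each column of A is a net containing the rows with a nonzero in that column, and the communication volume of a row partition is the sum over nets of (number of parts touched − 1). A toy sketch, assuming a given partition rather than computing one (`comm_volume` is a name chosen here):

```python
# Column-net hypergraph model, connectivity-minus-one metric: communication
# volume of a row partition for y = A x, computed net by net (column by column).
import numpy as np

def comm_volume(A, part):
    """Sum over columns j of (#distinct parts among rows with A[i, j] != 0) - 1."""
    vol = 0
    for j in range(A.shape[1]):
        rows = np.flatnonzero(A[:, j])
        vol += len({int(part[i]) for i in rows}) - 1
    return vol

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])
good = comm_volume(A, part=np.array([0, 0, 0, 1]))   # coupled rows kept together
bad  = comm_volume(A, part=np.array([0, 1, 0, 1]))   # coupled rows split apart
```

Partitioners such as hypergraph-based tools minimize exactly this quantity subject to a balance constraint; the paper's contribution is choosing the models so that one partition serves both the coefficient and the preconditioner matrix.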