Preconditioning techniques for large linear systems: A survey

by M. Benzi
Venue: J. Comput. Phys., 2002

Results 1 - 10 of 194 citing documents

Numerical solution of saddle point problems

by Michele Benzi, Gene H. Golub, Jörg Liesen - ACTA NUMERICA, 2005
"... Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has b ..."
Abstract - Cited by 322 (25 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
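
The block structure described above can be made concrete with a small Python sketch (not taken from the paper; the matrices A, B and all sizes below are invented): assemble K = [[A, B^T], [B, 0]] for an SPD A and a full-rank B, and accelerate MINRES with the block-diagonal preconditioner diag(A, S), where the Schur complement S = B A^{-1} B^T is formed exactly here purely for illustration.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Hypothetical toy blocks: A is SPD (a 1-D Laplacian), B is a random full-rank coupling.
    n, m = 100, 40
    rng = np.random.default_rng(0)
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    B = sp.csc_matrix(rng.standard_normal((m, n)))

    # Symmetric indefinite saddle point matrix K = [[A, B^T], [B, 0]].
    K = sp.bmat([[A, B.T], [B, None]], format="csc")
    rhs = rng.standard_normal(n + m)

    # Block-diagonal preconditioner diag(A, S) with S = B A^{-1} B^T
    # (formed exactly here; in practice S is replaced by a cheap approximation).
    A_lu = spla.splu(A)
    S = B @ A_lu.solve(B.T.toarray())
    S_lu = spla.splu(sp.csc_matrix(S))
    M = spla.LinearOperator(
        (n + m, n + m), dtype=float,
        matvec=lambda v: np.concatenate((A_lu.solve(v[:n]), S_lu.solve(v[n:]))))

    x, info = spla.minres(K, rhs, M=M)
    print("MINRES info:", info)

With exact A- and S-solves the preconditioned operator has only three distinct eigenvalues, so MINRES converges in a handful of steps in exact arithmetic; practical preconditioners of this family replace both solves by cheap approximations.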

Jacobian-free Newton-Krylov methods: a survey of approaches and applications

by Dana A. Knoll, David E. Keyes - J. Comput. Phys., 2004
"... Jacobian-free Newton-Krylov (JFNK) methods are synergistic combinations of Newton-type methods for superlinearly convergent solution of nonlinear equa-tions and Krylov subspace methods for solving the Newton correction equations. The link between the two methods is the Jacobian-vector product, which ..."
Abstract - Cited by 204 (6 self)
Jacobian-free Newton-Krylov (JFNK) methods are synergistic combinations of Newton-type methods for superlinearly convergent solution of nonlinear equations and Krylov subspace methods for solving the Newton correction equations. The link between the two methods is the Jacobian-vector product, which may be probed approximately without forming and storing the elements of the true Jacobian, through a variety of means. Various approximations to the Jacobian matrix may still be required for preconditioning the resulting Krylov iteration. As with Krylov methods for linear problems, successful application of the JFNK method to any given problem is dependent on adequate preconditioning. JFNK has potential for application throughout problems governed by nonlinear partial differential equations and integro-differential equations. In this survey article we place JFNK in context with other nonlinear solution algorithms for both boundary value problems (BVPs) and initial value problems (IVPs). We provide an overview of the mechanics of JFNK and attempt to illustrate the wide variety of preconditioning options available. It is emphasized that JFNK can be wrapped (as an accelerator) around another nonlinear fixed point method (interpreted as a preconditioning process, potentially with significant code reuse). The aim of this article is not to trace fully the evolution of JFNK, nor to provide proofs of accuracy or optimal convergence for all of the constituent methods, but rather to present the reader with a perspective on how JFNK may be applicable to problems of physical interest and to provide sources of further practical information. A review paper solicited by the Editor-in-Chief of the Journal of Computational Physics.

Citation Context

...ors normalized and two matrix-vector products are required per iteration. However, these methods enjoy a short recursion relation, so there is no requirement to store many Krylov vectors. We refer to [9, 11, 73, 143] for more details on Krylov methods, and for preconditioning for linear problems. We also call attention to the delightful article [121], which shows that there is no universal ranking possible for it...
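
To make the central idea of the abstract above concrete, here is a minimal matrix-free Newton-Krylov sketch in Python (not the authors' code; the residual F, the finite-difference step heuristic, and all constants are invented for illustration). The Jacobian-vector product is probed with the first-order difference J(u)v ≈ (F(u + eps*v) - F(u))/eps and handed to GMRES as a LinearOperator; no preconditioner is applied here, although in practice one would be passed via M=.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def F(u):
        # Hypothetical nonlinear residual: 1-D discrete Laplacian + cubic term - forcing.
        r = np.empty_like(u)
        r[0], r[-1] = u[0], u[-1]                              # Dirichlet boundary rows
        r[1:-1] = -u[:-2] + 2.0 * u[1:-1] - u[2:] + u[1:-1] ** 3 - 1.0
        return r

    def jfnk(F, u0, tol=1e-8, max_newton=20):
        u = u0.copy()
        for _ in range(max_newton):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            # Matrix-free Jacobian-vector product: J(u) v ~= (F(u + eps*v) - F(u)) / eps,
            # with a simple heuristic choice of the difference step eps.
            def jv(v, u=u, r=r):
                eps = 1e-7 * (1.0 + np.linalg.norm(u)) / max(np.linalg.norm(v), 1e-30)
                return (F(u + eps * v) - r) / eps
            J = LinearOperator((u.size, u.size), matvec=jv, dtype=float)
            du, info = gmres(J, -r)        # a preconditioner would be passed via M=...
            u += du
        return u

    u = jfnk(F, np.zeros(50))
    print("final residual norm:", np.linalg.norm(F(u)))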

Recent computational developments in Krylov subspace methods for linear systems

by Valeria Simoncini, Daniel B. Szyld - NUMER. LINEAR ALGEBRA APPL, 2007
"... Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are metho ..."
Abstract - Cited by 85 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
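
As a small illustration of one theme surveyed above, restarting, the sketch below (purely hypothetical problem data, using SciPy's built-in solver rather than any method from the paper) compares inner-iteration counts of "full" and restarted GMRES on a toy nonsymmetric tridiagonal system.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Hypothetical nonsymmetric test matrix (convection-diffusion-like tridiagonal).
    n = 200
    A = sp.diags([-1.3, 2.0, -0.7], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    counts = {}
    def counter(key):
        counts[key] = 0
        def cb(_):
            counts[key] += 1
        return cb

    # "Full" GMRES (restart length at least the problem size) vs. restarted GMRES(10):
    # restarting caps memory and work per cycle at the cost of extra total iterations.
    spla.gmres(A, b, restart=n, callback=counter("full"), callback_type="pr_norm")
    spla.gmres(A, b, restart=10, callback=counter("GMRES(10)"), callback_type="pr_norm")
    print(counts)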

Randomized Matrix Computations

by Victor Y. Pan, Guoliang Qian, Ai-long Zheng, 2012
"... We propose new effective randomized algorithms for some fundamental matrix computations such as preconditioning of an ill conditioned matrix that has a small numerical nullity or rank, its 2-by-2 block triangulation, numerical stabilization of Gaussian elimination with no pivoting, and approximation ..."
Abstract - Cited by 49 (5 self)
We propose new effective randomized algorithms for some fundamental matrix computations such as preconditioning of an ill-conditioned matrix that has a small numerical nullity or rank, its 2-by-2 block triangulation, numerical stabilization of Gaussian elimination with no pivoting, and approximation of a matrix by low-rank matrices and by structured matrices. Our technical advances include estimating the condition number of a random Toeplitz matrix, novel techniques of randomized preprocessing, a proof of their preconditioning power, and a dual version of the Sherman–Morrison–Woodbury formula. According to both our formal study and numerical tests, we significantly accelerate the known algorithms and improve their output accuracy.

Citation Context

...ioned homogeneous Toeplitz linear systems. 9 Related work, our technical novelties, and further study: Preconditioned iterative algorithms for linear systems of equations is a classical subject [A94], [B02], [G97]. The problem of creating inexpensive preconditioners for general use has been around for a long while as well. On some earlier study of conditioning of random matrices see [D88], [E88], [ES05]...
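
One ingredient named above, the Sherman-Morrison-Woodbury identity, is easy to state in code. The sketch below uses the standard textbook form (not the dual version the paper develops), with made-up sizes and data: it solves (A + UV)x = b by reusing a factorization of A and solving only a small r-by-r system.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(1)
    n, r = 300, 3
    A = np.eye(n) + 0.01 * rng.standard_normal((n, n))   # well-conditioned base matrix
    U = rng.standard_normal((n, r))
    V = rng.standard_normal((r, n))
    b = rng.standard_normal(n)

    # Sherman-Morrison-Woodbury:
    #   (A + U V)^{-1} b = A^{-1} b - A^{-1} U (I + V A^{-1} U)^{-1} V A^{-1} b
    lu = lu_factor(A)
    Ainv_b = lu_solve(lu, b)
    Ainv_U = lu_solve(lu, U)
    capacitance = np.eye(r) + V @ Ainv_U                  # small r-by-r system
    x = Ainv_b - Ainv_U @ np.linalg.solve(capacitance, V @ Ainv_b)

    print("residual:", np.linalg.norm((A + U @ V) @ x - b))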

Block Preconditioners Based on Approximate Commutators

by Howard Elman, Victoria E. Howle, John Shadid, Robert Shuttleworth, Ray Tuminaro - SIAM J. SCI. COMPUT, 2006
"... This paper introduces a strategy for automatically generating a block preconditioner for solving the incompressible Navier-Stokes equations. We consider the "pressure convection-diffusion preconditioners" proposed by Kay, Loghin, and Wathen [11] and Silvester, Elman, Kay, and Wathen [16]. ..."
Abstract - Cited by 38 (11 self)
This paper introduces a strategy for automatically generating a block preconditioner for solving the incompressible Navier-Stokes equations. We consider the "pressure convection-diffusion preconditioners" proposed by Kay, Loghin, and Wathen [11] and Silvester, Elman, Kay, and Wathen [16]. Numerous theoretical and numerical studies have demonstrated mesh-independent convergence on several problems and the overall efficacy of this methodology. A drawback, however, is that it requires the construction of a convection-diffusion operator (denoted Fp) projected onto the discrete pressure space. This means that integration of this idea into a code that models incompressible flow requires a sophisticated understanding of the discretization and other implementation issues, something often held only by the developers of the model. As an alternative, we consider automatic ways of computing Fp based on purely algebraic considerations. The new methods are closely related to the "BFBt preconditioner" of Elman [6]. We use the fact that the preconditioner is derived from considerations of commutativity between the gradient and convection-diffusion operators, together with methods for computing sparse approximate inverses, to generate the required matrix Fp automatically. We demonstrate that with this strategy, the favorable convergence properties of the preconditioning methodology are retained.
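
Since the abstract refers to the BFBt idea, here is a rough sketch of how such an approximate Schur complement inverse can be applied in a purely algebraic way; the operators F (standing in for the velocity convection-diffusion block) and B (standing in for the discrete divergence) are hypothetical, not matrices from the paper.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Hypothetical stand-ins for the velocity block F and the divergence block B.
    rng = np.random.default_rng(2)
    n, m = 120, 40
    F = sp.diags([-1.0, 3.0, -0.8], [-1, 0, 1], shape=(n, n), format="csc")
    B = sp.csc_matrix(rng.standard_normal((m, n)) * (rng.random((m, n)) < 0.3))

    # BFBt-style approximation of the inverse Schur complement S = B F^{-1} B^T:
    #   S^{-1}  ~=  (B B^T)^{-1} (B F B^T) (B B^T)^{-1}
    BBt_lu = spla.splu((B @ B.T).tocsc())

    def apply_S_hat_inv(p):
        y = BBt_lu.solve(p)
        y = B @ (F @ (B.T @ y))
        return BBt_lu.solve(y)

    p = rng.standard_normal(m)
    print(apply_S_hat_inv(p)[:5])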

Encapsulating Multiple Communication-Cost Metrics in Partitioning Sparse Rectangular Matrices for Parallel Matrix-Vector Multiplies

by Bora Ucar, Cevdet Aykanat
"... This paper addresses the problem of one-dimensional partitioning of structurally unsymmetricsquare and rectangular sparse matrices for parallel matrix-vector and matrix-transposevector multiplies. The objective is to minimize the communication cost while maintaining the balance on computational load ..."
Abstract - Cited by 37 (22 self)
This paper addresses the problem of one-dimensional partitioning of structurally unsymmetric square and rectangular sparse matrices for parallel matrix-vector and matrix-transpose-vector multiplies. The objective is to minimize the communication cost while maintaining the balance on computational loads of processors. Most of the existing partitioning models consider only the total message volume, hoping that minimizing this communication-cost metric is likely to reduce other metrics. However, the total message latency (start-up time) may be more important than the total message volume. Furthermore, the maximum message volume and latency handled by a single processor are also important metrics. We propose a two-phase approach that encapsulates all four of these communication-cost metrics. The objective in the first phase is to minimize the total message volume while maintaining the computational-load balance. The objective in the second phase is to encapsulate the remaining three communication-cost metrics. We propose communication-hypergraph and partitioning models for the second phase. We then present several methods for partitioning communication hypergraphs. Experiments on a wide range of test matrices show that the proposed approach yields very effective partitioning results. A parallel implementation on a PC cluster verifies that the theoretical improvements shown by partitioning results hold in practice.

Citation Context

...he respective matrix-vector multiplies (see [21] for such a method). The most notable cases are the preconditioned iterative methods that use an explicit preconditioner, such as an approximate inverse [3, 4, 16] M ≈ A^{-1}. These methods involve matrix-vector multiplies with M and A. The present work can be used in such cases by partitioning matrices independently. However, this approach would suffer from com...
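
To ground the "total message volume" metric discussed above, the following sketch uses a plain contiguous block-row partition (not the paper's hypergraph models; the matrix, its density, and the processor count are invented) and counts how many x-entries must be received from other owners in a parallel y = A x.

    import numpy as np
    import scipy.sparse as sp

    # Hypothetical sparse matrix, partitioned by contiguous row blocks over p processors.
    n, p = 1000, 4
    A = sp.random(n, n, density=0.005, format="csr", random_state=3)
    owner = np.repeat(np.arange(p), n // p)        # owner of each row and each x-entry

    total_volume = 0
    for proc, rows in enumerate(np.array_split(np.arange(n), p)):
        cols_needed = np.unique(A[rows].indices)   # x-entries touched by the local rows
        total_volume += np.count_nonzero(owner[cols_needed] != proc)

    print("total receive volume (in x-entries):", total_volume)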

A comparison of preconditioners for incompressible Navier–Stokes solvers

by M. Ur Rehman, C. Vuik, G. Segal - International Journal for Numerical Methods in Fluids 2008; 57:1731–1751. DOI: 10.1002/fld.1684
"... We consider solution methods for large systems of linear equations that arise from the finite element discretization of the incompressible Navier–Stokes equations. These systems are of the so-called saddle point type, which means that there is a large block of zeros on the main diagonal. To solve th ..."
Abstract - Cited by 22 (10 self)
We consider solution methods for large systems of linear equations that arise from the finite element discretization of the incompressible Navier–Stokes equations. These systems are of the so-called saddle point type, which means that there is a large block of zeros on the main diagonal. To solve these types of systems efficiently, several block preconditioners have been published. These types of preconditioners require adaptation of standard finite element packages. The alternative is to apply a standard ILU preconditioner in combination with a suitable renumbering of unknowns. We introduce a reordering technique for the degrees of freedom that makes the application of ILU relatively fast. We compare the performance of this technique with some block preconditioners. The performance appears to depend on grid size, Reynolds number and quality of the mesh. For medium-sized problems, which are of practical interest, we show that the reordering technique is competitive with the block preconditioners. Its simple implementation makes it worthwhile to implement in standard finite element software.

Citation Context

...e approximate inverse, Ŝ^{-1}, is replaced by a simple spectrally equivalent matrix. Various cheap approximations of S^{-1} have been published recently. For an overview of preconditioners, we refer to [1, 2, 5, 23]. Some of those that are used in combination with the block triangular preconditioner (11) are discussed below. 3.1 Pressure convection-diffusion (PCD): A popular approximation to the Schur complement...
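
The paper's reordering is tailored to the velocity and pressure unknowns of the Navier-Stokes discretization. As a generic, purely illustrative stand-in, the sketch below permutes a made-up sparse matrix with reverse Cuthill-McKee, builds an incomplete LU factorization of the permuted matrix, and uses it as a GMRES preconditioner.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    # Hypothetical sparse, diagonally dominant test matrix.
    n = 500
    rng = np.random.default_rng(4)
    A = (sp.random(n, n, density=0.01, format="csc", random_state=4)
         + sp.diags(np.full(n, 10.0))).tocsc()
    b = rng.standard_normal(n)

    # Renumber first, then factor: ILU on the RCM-permuted matrix P A P^T.
    perm = reverse_cuthill_mckee(A, symmetric_mode=False)
    Ap = A[perm, :][:, perm].tocsc()
    ilu = spla.spilu(Ap, drop_tol=1e-4, fill_factor=10)
    M = spla.LinearOperator((n, n), matvec=ilu.solve, dtype=float)

    xp, info = spla.gmres(Ap, b[perm], M=M, restart=50)
    x = np.empty(n)
    x[perm] = xp                                   # map the solution back to the original ordering
    print("residual:", np.linalg.norm(A @ x - b), " gmres info:", info)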

Globalization Techniques for Newton–Krylov Methods and Applications to the Fully Coupled Solution of the Navier–Stokes Equations

by Roger P. Pawlowski, John N. Shadid, Joseph P. Simonis, Homer F. Walker
"... A Newtonâ€-Krylov method is an implementation of Newton'€™s method in which a Krylov subspace method is used to solve approximately the linear subproblems that determine Newton steps. To enhance robustness when good initial approximate solutions are not available, these methods are usually glob ..."
Abstract - Cited by 18 (5 self)
A Newton–Krylov method is an implementation of Newton's method in which a Krylov subspace method is used to solve approximately the linear subproblems that determine Newton steps. To enhance robustness when good initial approximate solutions are not available, these methods are usually globalized, i.e., augmented with auxiliary procedures (globalizations) that improve the likelihood of convergence from a starting point that is not near a solution. In recent years, globalized Newton–Krylov methods have been used increasingly for the fully coupled solution of large-scale problems. In this paper, we review several representative globalizations, discuss their properties, and report on a numerical study aimed at evaluating their relative merits on large-scale two- and three-dimensional problems involving the steady-state Navier–Stokes equations.
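
One representative globalization, a backtracking line search on the nonlinear residual norm, can be sketched in a few lines (a generic Armijo-style rule with invented constants, not one of the specific variants studied in the paper); it would replace the plain update u += du in a Newton-Krylov loop such as the JFNK sketch earlier on this page.

    import numpy as np

    def backtracking_line_search(F, u, du, t=1e-4, max_halvings=30):
        # Armijo-style sufficient decrease on the residual norm:
        # accept lambda*du once ||F(u + lambda*du)|| <= (1 - t*lambda) * ||F(u)||.
        r_norm = np.linalg.norm(F(u))
        lam = 1.0
        for _ in range(max_halvings):
            if np.linalg.norm(F(u + lam * du)) <= (1.0 - t * lam) * r_norm:
                break
            lam *= 0.5
        return lam

    # In a Newton-Krylov loop, replace "u += du" with:
    #   u += backtracking_line_search(F, u, du) * du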

A Taxonomy and Comparison of Parallel Block . . .

by Howard Elman, Victoria E. Howle, John Shadid, Robert Shuttleworth, Ray Tuminaro, 2007
"... ..."
Abstract - Cited by 18 (2 self)
Abstract not found

Decay Properties of Spectral Projectors with Applications to Electronic Structure

by Michele Benzi, Paola Boito, Nader Razouk , 2010
"... Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the ..."
Abstract - Cited by 16 (2 self)
Motivated by applications in quantum chemistry and solid state physics, we apply general results from approximation theory and matrix analysis to the study of the decay properties of spectral projectors associated with large and sparse Hermitian matrices. Our theory leads to a rigorous proof of the exponential off-diagonal decay (‘nearsightedness’) for the density matrix of gapped systems at zero electronic temperature in both orthogonal and non-orthogonal representations, thus providing a firm theoretical basis for the possibility of linear scaling methods in electronic structure calculations for non-metallic systems. Our theory also allows us to treat the case of density matrices for arbitrary systems at finite electronic temperature, including metals. Other possible applications are also discussed.

Citation Context

...cesses. This is completely analogous to prescribing a sparsity pattern vs. using an adaptive one when computing sparse approximate inverses for use as preconditioners when solving linear systems, see [10]. Most of the O(n) algorithms currently in use consist of iterative schemes producing increasingly accurate approximations to the density matrix. These approximations may correspond to successive term...
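
The exponential off-diagonal decay claimed above is easy to observe numerically. The toy sketch below (a gapped tridiagonal "two-band" Hermitian matrix with invented parameters, not a system from the paper) forms the zero-temperature spectral projector onto the eigenvalues below the gap and prints how quickly its first row decays.

    import numpy as np

    # Gapped tridiagonal Hermitian matrix: alternating on-site values +/-2, hopping 1.
    n = 200
    H = (np.diag(2.0 * (-1.0) ** np.arange(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))

    # Zero-temperature density matrix = spectral projector onto eigenvalues below the gap.
    w, V = np.linalg.eigh(H)
    occupied = V[:, w < 0.0]               # the spectral gap here straddles 0
    P = occupied @ occupied.T

    # Off-diagonal entries of P shrink rapidly away from the diagonal.
    print(np.abs(P[0, [0, 5, 10, 20, 40]]))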
