Results 1–10 of 24
Numerical solution of saddle point problems
 Acta Numerica, 2005
Cited by 322 (25 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
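The 2×2 block structure studied in this survey can be made concrete with a small sketch. The following toy example (all names, sizes, and data are hypothetical, not taken from the paper) solves a saddle point system by eliminating the leading block through the Schur complement, one of the segregated approaches such surveys discuss:

```python
import numpy as np

# Solve the saddle point system
#   [ A  B^T ] [x]   [f]
#   [ B   0  ] [y] = [g]
# with A symmetric positive definite, by reducing to the Schur complement
# S = B A^{-1} B^T. Hypothetical small dense example for illustration only.
rng = np.random.default_rng(0)
n, m = 6, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # SPD (1,1) block
B = rng.standard_normal((m, n))    # constraint block
f = rng.standard_normal(n)
g = rng.standard_normal(m)

Ainv_BT = np.linalg.solve(A, B.T)
Ainv_f = np.linalg.solve(A, f)
S = B @ Ainv_BT                    # Schur complement B A^{-1} B^T
y = np.linalg.solve(S, B @ Ainv_f - g)
x = Ainv_f - Ainv_BT @ y

# verify against the full (symmetric but indefinite) system
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
res = K @ np.concatenate([x, y]) - np.concatenate([f, g])
assert np.linalg.norm(res) < 1e-10
```

The full matrix K is symmetric but indefinite, which is why CG cannot be applied to it directly and why the survey emphasizes methods such as MINRES, Uzawa-type iterations, and preconditioned Krylov solvers instead.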
Recent computational developments in Krylov subspace methods for linear systems
 Numer. Linear Algebra Appl., 2007
Cited by 85 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
An Analysis for the DIIS Acceleration Method used in Quantum Chemistry Calculations
2010
Cited by 76 (5 self)
This work features an analysis of the acceleration technique DIIS, which is used as standard in most major quantum chemistry codes, e.g. in DFT and Hartree-Fock calculations and in the Coupled Cluster method. Taking up results from [23], we show that for the general nonlinear case, DIIS corresponds to a projected quasi-Newton/secant method. For linear systems, we establish connections to the well-known GMRES solver and transfer the corresponding (positive as well as negative) convergence results to DIIS. In particular, we discuss the circumstances under which DIIS exhibits superlinear convergence behaviour. For the general nonlinear case, we then use these results to show that a DIIS step can be interpreted as a step of a quasi-Newton method in which the Jacobian used in the Newton step is approximated by finite differences and in which the corresponding linear system is solved by a GMRES procedure, and we give corresponding convergence estimates.
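A rough illustration of the DIIS/GMRES connection the abstract describes (a hypothetical sketch, not the paper's formulation): apply DIIS mixing to the linear fixed-point iteration x ← x + (b − Ax). Each step minimizes the residual norm over affine combinations of stored iterates, expressed here via residual differences as in Pulay mixing.

```python
import numpy as np

def diis_solve(A, b, iters=30, history=6):
    """DIIS-accelerated fixed-point iteration x <- x + (b - A x).

    Each step minimizes the residual norm over affine combinations of the
    stored iterates (parametrized by residual differences, so the
    coefficients sum to one by construction), then takes one fixed-point
    step from the mixed iterate.
    """
    x = np.zeros_like(b)
    xs, rs = [x], [b - A @ x]
    for _ in range(iters):
        if np.linalg.norm(rs[-1]) < 1e-13:
            break
        X = np.column_stack(xs[-history:])
        R = np.column_stack(rs[-history:])
        if R.shape[1] > 1:
            dX = X[:, :-1] - X[:, -1:]   # iterate differences
            dR = R[:, :-1] - R[:, -1:]   # residual differences
            d = np.linalg.lstsq(dR, -rs[-1], rcond=None)[0]
            x_bar = xs[-1] + dX @ d      # residual-minimizing affine mix
        else:
            x_bar = xs[-1]
        r_bar = b - A @ x_bar
        x = x_bar + r_bar                # one fixed-point step
        xs.append(x)
        rs.append(b - A @ x)
    return x

# hypothetical test problem: spectrum of A inside (0, 2), so the plain
# iteration converges and DIIS accelerates it
A = np.diag([0.5, 0.8, 1.0, 1.2, 1.5])
b = np.ones(5)
x = diis_solve(A, b)
assert np.linalg.norm(A @ x - b) < 1e-8
```

For this linear problem the affine combinations of residuals are exactly the residual polynomials GMRES optimizes over, which is the mechanism behind the GMRES correspondence established in the paper.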
Which eigenvalues are found by the Lanczos method?
 SIAM J. Matrix Anal. Appl.
Cited by 25 (5 self)
When discussing the convergence properties of the Lanczos iteration method for the real symmetric eigenvalue problem, Trefethen and Bau noted that the Lanczos method tends to find eigenvalues in regions that have too little charge when compared to an equilibrium distribution. In this paper a quantitative version of this rule of thumb is presented. We describe, in an asymptotic sense, the region containing those eigenvalues that are well approximated by the Ritz values. The region depends on the distribution of eigenvalues and on the ratio between the size of the matrix and the number of iterations, and it is characterized by an extremal problem in potential theory which was first considered by Rakhmanov. We give examples showing the connection with the equilibrium distribution.
Key words: Ritz values, equilibrium distribution, potential theory
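The phenomenon the paper quantifies can be observed numerically with a small sketch (a hypothetical setup, not one of the paper's examples): run a modest number of Lanczos steps on a symmetric matrix with a known spectrum and note that Ritz values lock onto well-separated extreme eigenvalues long before they resolve the dense interior of the spectrum.

```python
import numpy as np

def lanczos_ritz(A, v0, k):
    """k steps of Lanczos with full reorthogonalization; returns Ritz values."""
    n = len(v0)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q = v0 / np.linalg.norm(v0)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)  # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)  # eigenvalues of the tridiagonal = Ritz values

# dense cluster of eigenvalues in [0, 1] plus two well-separated outliers
eigs = np.concatenate([np.linspace(0.0, 1.0, 200), [2.0, 3.0]])
A = np.diag(eigs)
rng = np.random.default_rng(1)
ritz = lanczos_ritz(A, rng.standard_normal(len(eigs)), 30)

# the separated eigenvalues 2.0 and 3.0 are already resolved to high
# accuracy after 30 steps; the ~200 clustered interior eigenvalues are not
assert abs(ritz[-1] - 3.0) < 1e-8
assert abs(ritz[-2] - 2.0) < 1e-8
```

In the paper's language, the outliers lie in a region of "too little charge" relative to the constrained equilibrium distribution, which is why they are found first.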
Convergence analysis of Krylov subspace iterations with methods from potential theory
 SIAM Review
Cited by 16 (2 self)
Krylov subspace iterations are among the best-known and most widely used numerical methods for solving linear systems of equations and for computing eigenvalues of large matrices. These methods are polynomial methods whose convergence behavior is related to the behavior of polynomials on the spectrum of the matrix. This leads to an extremal problem in polynomial approximation theory: how small can a monic polynomial of a given degree be on the spectrum? This survey gives an introduction to a recently developed technique for analyzing this extremal problem in the case of symmetric matrices. It is based on global information on the spectrum, in the sense that the eigenvalues are assumed to be distributed according to a certain measure. Then, depending on the number of iterations, the Lanczos method for the calculation of eigenvalues finds those eigenvalues that lie in a certain region, which is characterized by means of a constrained equilibrium problem from potential theory. The same constrained equilibrium problem also describes the superlinear convergence of conjugate gradients and other iterative methods for solving linear systems.
Key words: Krylov subspace iterations, Ritz values, eigenvalue distribution, equilibrium measure, constrained equilibrium, potential theory
AMS subject classifications: 15A18, 31A05, 31A15, 65F15
Convergence of the isometric Arnoldi process
2003
Cited by 8 (2 self)
It is well known that the performance of eigenvalue algorithms such as the Lanczos and the Arnoldi method depends on the distribution of eigenvalues. Under fairly general assumptions we characterize the region of good convergence for the Isometric Arnoldi Process. We also determine bounds for the rate of convergence and we prove sharpness of these bounds. The distribution of isometric Ritz values is obtained as the minimizer of an extremal problem. We use techniques from logarithmic potential theory in proving these results.
A note on the convergence of Ritz values for sequences of matrices
2000
Cited by 7 (4 self)
While discussing the convergence of the Lanczos method, Trefethen and Bau observed a relationship with electric charge distributions, and claimed that the Lanczos iteration tends to converge to eigenvalues in regions of "too little charge" for an equilibrium distribution. Recently, Kuijlaars found a theoretical justification for this phenomenon by considering the Lanczos method applied to a suitable sequence of matrices with similar spectra, as may occur, for instance, in the discretization of PDEs as the discretization parameter varies. The aim of the present note is to improve the result of Kuijlaars: we obtain a better rate of convergence under weaker regularity assumptions, and show that this new rate of convergence is sharp.
Key words: Lanczos method, Convergence of Ritz values, Logarithmic potential theory
Subject classifications: AMS(MOS) 65F10, 65E05, 31A99, 41A10
A quasi-minimal residual method for simultaneous primal-dual solutions and superconvergent functional estimates
 SIAM Journal on Scientific Computing
Cited by 7 (3 self)
The adjoint solution has found many uses in computational simulations where the quantities of interest are functionals of the solution, including design optimization, error estimation, and control. In those applications where both the solution and the adjoint are desired, the conventional approach is to apply iterative methods to solve the primal and dual problems separately. However, we show that there is an advantage to iterating the primal and dual problems simultaneously, since this enables the construction of iterative methods in which both the primal and the dual iterates may be chosen so that they provide functional estimates that are "superconvergent", in the sense that the error converges at twice the order of the optimal global solution error norm. In particular, we show that the structure of the Lanczos process allows for this superconvergence property, and we propose a modified QMR method which uses the same Lanczos process to simultaneously solve the primal and dual problems. Thus both the primal and the dual systems are solved at essentially the same computational cost as the conventional QMR method applied to the primal problem alone. Numerical experiments show that our proposed method does indeed exhibit superconvergence behavior.
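The superconvergence mechanism can be illustrated in a simplified symmetric setting (a hypothetical sketch using plain CG, not the paper's QMR variant): if x_k and y_k are iterates for the primal system Ax = b and the dual system Ay = g, the adjoint-corrected estimate gᵀx_k + y_kᵀ(b − Ax_k) of the functional J = gᵀx has error e_yᵀA e_x, a product of the primal and dual errors rather than a single error.

```python
import numpy as np

def cg(A, b, k):
    """k steps of plain conjugate gradients from x0 = 0."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

rng = np.random.default_rng(3)
n = 60
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 50.0, n)) @ Q.T   # SPD test matrix
A = (A + A.T) / 2.0                                # symmetrize exactly
b = rng.standard_normal(n)                         # primal right-hand side
g = rng.standard_normal(n)                         # functional: J = g^T x*

x_star = np.linalg.solve(A, b)
J = g @ x_star

k = 20
xk = cg(A, b, k)                                   # primal iterate
yk = cg(A, g, k)                                   # dual (adjoint) iterate

naive = g @ xk                                     # error g^T e_x: first order
corrected = g @ xk + yk @ (b - A @ xk)             # error e_y^T A e_x: product

# Cauchy-Schwarz in the A-inner product bounds the corrected error by the
# product of the primal and dual energy-norm errors
ex = x_star - xk
ey = np.linalg.solve(A, g) - yk
assert abs(J - corrected) <= np.sqrt(ex @ A @ ex) * np.sqrt(ey @ A @ ey) + 1e-10
```

The paper's contribution is to obtain both iterates from a single Lanczos process in the nonsymmetric case, so the correction comes at essentially no extra cost.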
On the sharpness of an asymptotic error estimate for Conjugate Gradients
 BIT, 2000
Cited by 6 (4 self)
Recently, the authors obtained an upper bound on the error of the conjugate gradient method which is valid in an asymptotic setting as the size of the linear system tends to infinity. The estimate depends on the asymptotic distribution of eigenvalues and on the ratio between the size of the system and the number of iterations. Such error bounds are related to the existence of polynomials with value 1 at 0 whose sup-norm on the spectrum is as small as possible. A possible strategy for constructing such a polynomial p is to select a set S, to require that every eigenvalue outside S be a zero of p, and then to minimize the sup-norm of p on S. Here we show that this strategy can never give a better asymptotic upper bound than the one we obtained before. We also discuss the case where equality is attained.
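The polynomial view of CG error bounds described above can be checked numerically in its classical non-asymptotic form (a hypothetical sketch, not the paper's asymptotic estimate): taking S to be the whole spectral interval and using shifted Chebyshev polynomials gives the familiar bound ‖e_k‖_A ≤ 2 ((√κ − 1)/(√κ + 1))^k ‖e_0‖_A, which the actual CG energy-norm error always respects.

```python
import numpy as np

def cg_energy_errors(A, b, k):
    """Run k CG steps from x0 = 0; return the A-norm error after each step."""
    x_star = np.linalg.solve(A, b)
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    errs = []
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        e = x_star - x
        errs.append(np.sqrt(e @ A @ e))
    return np.array(errs)

rng = np.random.default_rng(2)
n = 100
eigs = np.linspace(1.0, 100.0, n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T
A = (A + A.T) / 2.0                                # symmetrize exactly
b = rng.standard_normal(n)

k = 25
errs = cg_energy_errors(A, b, k)
kappa = eigs[-1] / eigs[0]
rho = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
e0 = np.sqrt(b @ np.linalg.solve(A, b))            # A-norm of the initial error
cheb_bound = 2.0 * e0 * rho ** np.arange(1, k + 1)
assert np.all(errs <= cheb_bound * (1.0 + 1e-8))   # CG never exceeds the bound
```

The papers in this list sharpen this picture: when the eigenvalue distribution is known, placing zeros of p at eigenvalues outside a smaller set S can explain the superlinear convergence that the single-interval Chebyshev bound misses.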