Results 1 – 5 of 5
Deflated and augmented Krylov subspace methods: A framework for deflated . . .
, 2013
Abstract

Cited by 12 (2 self)
We present an extension of the framework of Gaul et al. (SIAM J. Matrix Anal. Appl. 34, 495–518 (2013)) for deflated and augmented Krylov subspace methods satisfying a Galerkin condition to more general Petrov–Galerkin conditions. The main goal is to apply the framework also to the biconjugate gradient method (BiCG) and some of its generalizations, including BiCGStab. The approach does not depend on particular recurrences and thus simplifies the derivation of theoretical results. It easily leads to a variety of realizations by specific algorithms. We do not go into algorithmic details, but we show that for every method there are two different approaches for extending it by augmentation and deflation: one that explicitly takes care of the augmentation space in every step, and one that applies the unchanged basic algorithm to a projected problem but requires a correction step at the end. Both typically generate a Krylov space for a singular operator that is associated with the projected problem. The deflated biconjugate gradient method requires two such Krylov spaces, but it also allows us to solve two dual linear systems at once. Deflated Lanczos-type product methods fit in our new framework too. The question of how to extract the augmentation and deflation subspaces is not addressed here.
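The second strategy described in this abstract, applying the unchanged basic algorithm to a projected problem and adding a correction step at the end, can be illustrated in the symmetric special case. This is a minimal sketch assuming plain CG as the basic method, an SPD matrix, and an exactly invariant deflation basis `U`; the function name `deflated_solve` and the toy problem are illustrative, not the paper's algorithm:

```python
import numpy as np

def deflated_solve(A, b, U, tol=1e-10, maxiter=200):
    """Solve A x = b (A SPD) with deflation of range(U): run plain CG on
    the projected, singular system P^T A y = P^T b, then correct."""
    AU = A @ U
    E = U.T @ AU                                 # small k x k matrix U^T A U
    Qb = U @ np.linalg.solve(E, U.T @ b)         # coarse-space part of the solution
    def Pt(v):                                   # P^T v = v - A U E^{-1} U^T v
        return v - AU @ np.linalg.solve(E, U.T @ v)
    # unchanged CG loop, applied to the projected (singular but consistent) system
    y = np.zeros_like(b)
    r = Pt(b)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = Pt(A @ p)
        alpha = rs / (p @ Ap)
        y += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    # correction step at the end: x = Q b + P y
    return Qb + (y - U @ np.linalg.solve(E, U.T @ (A @ y)))

# toy SPD system whose two tiny eigenvalues are deflated exactly
rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.concatenate(([1e-6, 1e-5], np.linspace(1.0, 10.0, n - 2)))
A = Q @ np.diag(evals) @ Q.T
b = rng.standard_normal(n)
U = Q[:, :2]                 # exact invariant subspace of the small eigenvalues
x = deflated_solve(A, b, U)
print(np.linalg.norm(A @ x - b))   # small residual
```

With the two smallest eigenvalues deflated, the CG loop only sees the well-conditioned remainder of the spectrum, which is the point of the construction.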
Recycling BiCG with an Application to Model Reduction
, 2012
Abstract

Cited by 9 (4 self)
Science and engineering problems frequently require solving a sequence of dual linear systems. Besides having to store only a few Lanczos vectors, using the biconjugate gradient method (BiCG) to solve dual linear systems has advantages for specific applications. For example, using BiCG to solve the dual linear systems arising in interpolatory model reduction provides a backward error formulation in the model reduction framework. Using BiCG to evaluate bilinear forms, for example in quantum Monte Carlo (QMC) methods for electronic structure calculations, leads to a quadratic error bound. Since our focus is on sequences of dual linear systems, we introduce recycling BiCG, a BiCG method that recycles two Krylov subspaces from one pair of dual linear systems to the next pair. The derivation of recycling BiCG also builds the foundation for developing recycling variants of other bi-Lanczos based methods, such as CGS, BiCGSTAB, QMR, and TFQMR. We develop an augmented bi-Lanczos algorithm and a modified two-term recurrence to include recycling in the iteration. The recycle spaces are approximate left and right invariant subspaces corresponding to the eigenvalues closest to the origin. These recycle spaces are found by solving a small generalized eigenvalue problem alongside the dual linear systems being solved in the sequence. We test our algorithm in two application areas. First, we solve a discretized partial differential equation (PDE) of convection-diffusion type. Such a problem provides well-known test cases that are easy to test and to analyze further. Second, we use recycling BiCG in the iterative rational Krylov algorithm (IRKA) for interpolatory model reduction. IRKA requires solving sequences of slowly changing dual linear systems. We analyze the generated recycle spaces and show up to 70% savings in iterations. For our model reduction test problem, we show that solving the problem without recycling leads to about a 50% increase in runtime.
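The starting point of this abstract is that a single BiCG run solves the dual pair A x = b and Aᵀ y = c at once and yields the bilinear form cᵀA⁻¹b as a by-product. A minimal sketch of that non-recycled baseline, assuming real arithmetic and no breakdown; `bicg_dual` and the test matrix are illustrative and omit all of the recycling machinery:

```python
import numpy as np

def bicg_dual(A, b, c, tol=1e-10, maxiter=500):
    """Plain BiCG on the dual pair A x = b and A^T y = c: seeding the
    shadow residual with c makes the left iterates solve the transposed
    system at no extra matrix-vector cost."""
    n = len(b)
    x = np.zeros(n); y = np.zeros(n)
    r = b.copy(); rt = c.copy()        # residual and shadow residual
    p = r.copy(); pt = rt.copy()
    rho = rt @ r
    for _ in range(maxiter):
        Ap = A @ p
        Atp = A.T @ pt
        alpha = rho / (pt @ Ap)
        x += alpha * p; y += alpha * pt
        r -= alpha * Ap; rt -= alpha * Atp
        if np.linalg.norm(r) < tol and np.linalg.norm(rt) < tol:
            break
        rho_new = rt @ r
        beta = rho_new / rho
        p = r + beta * p; pt = rt + beta * pt
        rho = rho_new
    return x, y

rng = np.random.default_rng(1)
n = 40
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned nonsymmetric matrix
b = rng.standard_normal(n)
c = rng.standard_normal(n)
x, y = bicg_dual(A, b, c)
print(np.linalg.norm(A @ x - b), np.linalg.norm(A.T @ y - c))
print(c @ x)   # the bilinear form c^T A^{-1} b, available as a by-product
```

Recycling BiCG replaces the initial Krylov spaces here with augmented ones carried over from the previous pair of systems in the sequence; this sketch shows only what is being recycled from.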
Spectral deflation in Krylov solvers: A theory of coordinate space based methods
 ETNA
Abstract

Cited by 4 (1 self)
For the iterative solution of large sparse linear systems we develop a theory for a family of augmented and deflated Krylov space solvers that are coordinate based in the sense that the given problem is transformed into one that is formulated in terms of the coordinates with respect to the augmented bases of the Krylov subspaces. Except for the augmentation, the basis is generated as usual by an Arnoldi or Lanczos process, but now with a deflated, singular matrix. The idea behind deflation is to explicitly annihilate certain eigenvalues of the system matrix, typically eigenvalues of small absolute value. The deflation of the matrix is based on either an orthogonal or an oblique projection onto a subspace that is complementary to the deflated approximately invariant subspace. While an orthogonal projection allows us to find minimal residual norm solutions, the oblique projections, which we favor when the matrix is non-Hermitian, allow us in the case of an exactly invariant subspace to correctly deflate both the right and the corresponding left (possibly generalized) eigenspaces of the matrix, so that convergence only depends on the nondeflated eigenspaces. The minimality of the residual is replaced by the minimality of a quasi-residual. Among the methods that we treat are primarily deflated versions of GMRES, MINRES, and QMR, but we also extend our approach to deflated, coordinate space based versions of other Krylov space methods, including variants of CG and BiCG. Numerical results will be published elsewhere.
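The claim that an oblique projection built from right and left eigenvectors annihilates the selected eigenvalues while leaving the rest untouched can be checked numerically. A small sketch under the assumption that the deflated subspaces are exactly invariant and obtained from a dense eigensolver (the paper works with approximate subspaces inside Krylov iterations); the variable names are illustrative:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(2)
n, k = 30, 3
A = rng.standard_normal((n, n)) + 3.0 * np.eye(n)   # generic non-Hermitian test matrix

# right (U) and left (W) eigenvectors, paired by eigenvalue
lam, W, U = eig(A, left=True, right=True)
idx = np.argsort(np.abs(lam))[:k]                   # deflate the k smallest-magnitude eigenvalues
Uk, Wk = U[:, idx], W[:, idx]

# oblique spectral projector Q onto the deflated right eigenspace; by
# biorthogonality of left and right eigenvectors, A(I - Q) keeps the
# remaining eigenpairs and maps the deflated eigenspace to zero
Q = Uk @ np.linalg.solve(Wk.conj().T @ Uk, Wk.conj().T)
A_defl = A @ (np.eye(n) - Q)

mags = np.sort(np.abs(np.linalg.eigvals(A_defl)))
print(mags[:k])   # near zero: the selected eigenvalues are annihilated
```

In the solvers of the paper the subspaces are only approximately invariant, so the deflated eigenvalues are shifted near zero rather than exactly onto it; the exact-invariance case above is the idealized limit the theory describes.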
Estimating the Trace of the Matrix Inverse by Interpolating from the Diagonal of an Approximate Inverse
Abstract
Determining the trace of a matrix that is implicitly available through a function is a computationally challenging task that arises in a number of applications. For the common function of the inverse of a large, sparse matrix, the standard approach is based on a Monte Carlo method, which converges slowly. We present a different approach that exploits the pattern correlation between the diagonal of the inverse of the matrix and the diagonal of some approximate inverse that can be computed inexpensively. We leverage various sampling and fitting techniques to fit the diagonal of the approximation to the diagonal of the inverse. Based on a dynamic evaluation of the variance, the proposed method can be used as a variance reduction method for Monte Carlo in some cases. Furthermore, the presented method may serve as a stand-alone kernel for providing a fast trace estimate with a small number of samples. An extensive set of experiments with various technique combinations demonstrates the effectiveness of our method in real applications.
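The diagonal-fitting idea can be sketched as follows, with two loud assumptions not taken from the paper: the cheap approximate inverse is taken to be simply 1/diag(A), and the fitting model is a plain linear least-squares fit (the paper's sampling and fitting techniques are more elaborate). A few entries of diag(A⁻¹) are computed exactly by solves, the fit maps the approximate diagonal to those entries, and the fitted diagonal is summed as the trace estimate:

```python
import numpy as np

def trace_inv_by_diag_fit(A, n_probe=8, rng=None):
    """Estimate trace(A^{-1}) by fitting the diagonal of a cheap
    approximate inverse (assumed here: 1/diag(A)) to a few exactly
    computed entries of diag(A^{-1})."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    d_approx = 1.0 / np.diag(A)                  # diagonal of a crude approximate inverse
    idx = rng.choice(n, size=n_probe, replace=False)
    # exact entries of diag(A^{-1}) via solves A x = e_i (stand-ins for iterative solves)
    d_exact = np.array([np.linalg.solve(A, np.eye(n)[:, i])[i] for i in idx])
    # least-squares linear fit  d_exact ~ a * d_approx + b  on the sampled entries
    a, b_coef = np.polyfit(d_approx[idx], d_exact, 1)
    d_fit = a * d_approx + b_coef
    d_fit[idx] = d_exact                         # keep the exactly computed entries
    return d_fit.sum()

rng = np.random.default_rng(3)
n = 200
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
A = (A + A.T) / 2                                # mildly perturbed, diagonally dominant matrix
est = trace_inv_by_diag_fit(A, n_probe=20, rng=rng)
exact = np.trace(np.linalg.inv(A))
print(est, exact)   # the two values should be close for this test matrix
```

The sketch works only because the diagonal of this test matrix correlates strongly with the diagonal of its inverse; the paper's contribution is precisely how to exploit (and monitor the variance of) such correlations when the correlation is weaker and the matrix is too large for direct solves.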