### Table 5.1: Summary of definitions used in Krylov Subspace Methods

2005

### Table 9.1 Summary of Krylov subspace methods discussed in Section 9.

2005

Cited by 26

### Table 4.1 Different MATLAB implementations of various Krylov subspace methods from SOL [97] and The MathWorks.

2006

### Table 5.1 Null vectors from various Krylov subspace methods using the inverse-iteration and matrix-transpose approaches.

2006

### Table 2. Comparison between the waveform Krylov subspace methods applied to the ring modulator circuit.

| Method | WCGS | WBiCG | WBiCG-Stab | WGMRES(30) | WQMR |
|---|---|---|---|---|---|
| No. of operator-function products | 418 | 742 | 252 | 600 | 738 |

"... In PAGE 3: ...1 sec. The results are shown in Table2 . The waveform Jacobi method failed to converge for this circuit.... ..."

### Table 7.1 lists the number of iterations required with the subspace GMRES algorithm to reduce the error by 10⁻⁶ for different mesh sizes h. The number of iterations increases slightly, like a power of the logarithm of the mesh size. For the restarted GMRES algorithm this is what the theory in [3] would actually predict. 8. Concluding Remarks. In this paper, we have developed the concept of subspace orthogonalization variants of Krylov subspace methods. In these methods the storage of basis elements, and the computation of inner products and vector updates, is restricted to a subspace of the unknowns. In the context of iterative substructuring methods this allows combining work on only the edge and vertex unknowns with the use of inexact subdomain solves. Our convergence analysis and numerical results indicate that, in particular, the subspace GMRES algorithm compares favourably with other commonly used Krylov subspace methods for substructuring-preconditioned ...

in Subspace Orthogonalization For Substructuring Preconditioners For Nonselfadjoint Elliptic Problems

1994

Cited by 1

### Table 3: Scaled speedups and Mflops. The GMRES method uses a Krylov subspace size of 64 before restarting.

"... In PAGE 12: ... 3.1 Experimental results The performance displayed in Table3 corresponds to the solution of a 2D convection- di usion problem discretized using nite di erences [23]. A 1024 processor nCUBE 2 hypercube was used for all of the tests.... In PAGE 12: ... For this reason it is often possible to get good speedups on the ncube as the communication speeds are fast enough (relative to the computation speeds) so that the machine does not have to spend a lot of time waiting for messages compared to the time that it computes. In Table3 , (for n = 512 512 and n = 1024 1024) and M ops rate are depicted for the di erent preconditioners using each nonsymmetric Krylov methods. Due to the large number of nodes and the modest size of local memory, only scaled speedup are used to evaluate the performance of the parallel implementations.... ..."

### Table 2. Iteration counts for Galerkin discretization with block diagonal preconditioner. The same observations apply in the case of the block triangular preconditioner (2.12); see Table 3. An interesting feature here is that for both Krylov subspace methods, the number of iterations is roughly halved when (2.12) is used in place of (2.5). The cost per step of the block triangular preconditioner is only slightly higher than that of the block diagonal preconditioner; only an extra multiplication by Bᵀ is needed. Thus, the triangular method (2.12) is more effective.

1996

"... In PAGE 12: ... The iteration counts using the block diagonal preconditioner (2.5) are shown in Table2 . For GM- RES(10), the residual norm is computed only every 10 steps.... ..."

Cited by 52

### Table 1. Eigenvalues of the Schur complement, for streamline-upwind discretization. We test the preconditioners here with two Krylov subspace methods for solving nonsymmetric systems: the generalized minimum residual method (GMRES) [15], and a simple implementation of the quasi-minimal residual method (QMR) [8] based on coupled two-term recurrences without look-ahead. GMRES demonstrates the performance of the preconditioners with the optimal (with respect to the residual norm) Krylov subspace solver. This method is impractical for large problems because its work and storage requirements grow with the iteration count; QMR is a non-optimal alternative that avoids this difficulty. Some additional experiments with restarted GMRES are presented in Section 4. In all cases we use right-oriented preconditioning, and our convergence criterion is a reduction of 10⁻⁶ in the l2-norm of the residual. The action of F⁻¹ and F⁻ᵀ is computed using the LU factorization in MATLAB. We start from a zero initial guess; using random initial guesses gave comparable iteration counts.

"... In PAGE 12: ... We rst consider the bounds of Theorem 1. Table1 shows the extreme real parts and max- imum imaginary parts of the generalized eigenvalues (2.7) of the Schur complement operator, for = 1=10 and 1=100 with the streamline-upwind discretization, on three meshes.... In PAGE 12: ... The analysis also shows that the real parts and largest imaginary parts of the eigenvalues are bounded independently of ; the bound for the smallest real part is proportional to 2. The data of Table1 are in agreement with the upper bounds. Figure 5 plots the smallest real parts on a logarithmic scale, for the streamline-upwind discretization on a 64 64 grid and = 1=20, 1=40, 1=80, and 1=160.... ..."