Results 1-10 of 16
Recent computational developments in Krylov subspace methods for linear systems
Numer. Linear Algebra Appl., 2007
Cited by 85 (12 self)
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
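None of the code below comes from the review; it is a minimal NumPy sketch of the Arnoldi process that underlies most of the Krylov subspace methods surveyed (GMRES, FOM, and their restarted, augmented, and flexible variants):

```python
import numpy as np

def arnoldi(A, b, k):
    """Build an orthonormal basis V of the Krylov subspace
    K_k(A, b) = span{b, Ab, ..., A^(k-1) b}, together with the
    (k+1) x k upper-Hessenberg matrix H satisfying A V[:, :k] = V H."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]      # assumes no breakdown (H[j+1,j] != 0)
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
V, H = arnoldi(A, b, 10)
```

The Hessenberg matrix `H` is the small projected problem that restarted and deflated variants manipulate in different ways.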
Impossibility of fast stable approximation of analytic functions from equispaced samples
SIAM Rev.
Cited by 26 (5 self)
Abstract. It is shown that no stable procedure for approximating functions from equally spaced samples can converge exponentially for analytic functions. To avoid instability, one must settle for root-exponential convergence. The proof combines a Bernstein inequality of 1912 with an estimate due to Coppersmith and Rivlin from 1992.
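As an illustration of the instability the paper analyzes (not taken from the paper), the classical Runge example contrasts polynomial interpolation of an analytic function at equispaced versus Chebyshev points; the degree and test function are arbitrary choices:

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)   # Runge's function, analytic on [-1, 1]
xx = np.linspace(-1.0, 1.0, 1001)           # fine grid for measuring the error

def interp_error(nodes):
    """Max error of the polynomial interpolant through (nodes, f(nodes)),
    computed in the well-conditioned Chebyshev basis."""
    deg = len(nodes) - 1
    c = np.polynomial.chebyshev.chebfit(nodes, f(nodes), deg)
    return np.max(np.abs(np.polynomial.chebyshev.chebval(xx, c) - f(xx)))

n = 20
err_equi = interp_error(np.linspace(-1.0, 1.0, n + 1))      # diverges with n
err_cheb = interp_error(np.cos(np.pi * np.arange(n + 1) / n))  # converges with n
```

The equispaced error is already orders of magnitude larger at degree 20, which is the instability that the paper's impossibility theorem makes precise.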
A KRYLOV METHOD FOR THE DELAY EIGENVALUE PROBLEM
Cited by 7 (6 self)
Abstract. The Arnoldi method is currently a very popular algorithm for solving large-scale eigenvalue problems. The main goal of this paper is to generalize the Arnoldi method to the characteristic equation of a delay-differential equation (DDE), here called a delay eigenvalue problem (DEP). The DDE can equivalently be expressed with a linear infinite-dimensional operator whose eigenvalues are the solutions to the DEP. We derive a new method by applying the Arnoldi method to the generalized eigenvalue problem (GEP) associated with a spectral discretization of the operator and by exploiting the structure. The result is a scheme where the subspace is expanded not only in the traditional Arnoldi way: the subspace vectors are also extended by one block of rows in each iteration. More importantly, the structure is such that if the Arnoldi method is started in an appropriate way, it has the (somewhat remarkable) property of being, in a sense, independent of the number of discretization points: it is mathematically equivalent to an Arnoldi method with an infinite matrix, corresponding to the limit of an infinite number of discretization points. We also show an equivalence with the Arnoldi method in an operator setting. It turns out that, with an appropriately defined operator over a space equipped with a scalar product with respect to which the Chebyshev polynomials are orthonormal, the vectors in the Arnoldi iteration can be interpreted as the coefficients in a Chebyshev expansion of a function. The presented method yields the same Hessenberg matrix as the Arnoldi method applied to the operator.
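A minimal sketch (not from the paper) of the spectral-discretization idea for a scalar DDE x'(t) = a x(t) + b x(t - tau), whose characteristic roots solve lambda = a + b exp(-lambda tau): discretize the infinitesimal generator with a Chebyshev differentiation matrix, impose the boundary condition in one row, and read off approximate characteristic roots. The constants a, b, tau and grid size N are arbitrary illustration choices:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on the N+1 points cos(j*pi/N)
    (Trefethen's classical construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c = c * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

a, b, tau, N = -2.0, 1.0, 1.0, 20
D, x = cheb(N)
M = (2.0 / tau) * D            # differentiation in theta on [-tau, 0]
M[0, :] = 0.0                  # row for theta = 0 carries the boundary condition
M[0, 0], M[0, N] = a, b        # phi'(0) = a phi(0) + b phi(-tau)
lead = max(np.linalg.eigvals(M), key=lambda z: z.real)   # rightmost approximate root
residual = abs(lead - a - b * np.exp(-lead * tau))       # characteristic-equation residual
```

Applying Arnoldi to (a shifted and inverted form of) the matrix `M`, and exploiting its structure as N grows, is the direction the paper develops.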
An interpolation-based approach to optimal H∞ model reduction, 2009
Cited by 2 (2 self)
A model reduction technique that is optimal in the H∞-norm has long been pursued due to its theoretical and practical importance. We consider the optimal H∞ model reduction problem broadly from an interpolation-based approach, and give a method for finding the approximation to a state-space symmetric dynamical system which is optimal over a family of interpolants to the full-order system. This family of interpolants has a simple parameterization that simplifies a direct search for the optimal interpolant. Several numerical examples show that the interpolation points satisfying the Meier-Luenberger conditions for H2-optimal approximations are a good starting point for minimizing the H∞-norm of the approximation error. Interpolation points satisfying the Meier-Luenberger conditions can be computed iteratively using the IRKA algorithm [12]. We consider the special case of state-space symmetric systems and show that simple sufficient conditions can be derived for minimizing the approximation error when starting from the interpolation points found by the IRKA algorithm. We then explore the relationship between potential theory in the complex plane and the optimal H∞-norm interpolation points through several numerical experiments. The results of these experiments suggest that the optimal H∞ approximation of ...
A numerical solution of the constrained Energy Problem, 2004
Cited by 2 (0 self)
An algorithm is proposed to solve the constrained energy problem from potential theory. Numerical examples are presented, showing the accuracy of the algorithm. The algorithm is also compared with another numerical method for the same problem.
SHARPNESS IN RATES OF CONVERGENCE FOR THE SYMMETRIC LANCZOS METHOD
Cited by 2 (2 self)
Abstract. The Lanczos method is often used to solve large, sparse symmetric matrix eigenvalue problems. There is a well-established convergence theory that produces bounds predicting the rates of convergence for a few extreme eigenpairs. These bounds suggest at least linear convergence in the number of Lanczos steps, assuming there are gaps between individual eigenvalues; in practice, superlinear convergence is often observed. The question is: do the existing bounds tell the correct convergence rate in general? An affirmative answer is given here for the two extreme eigenvalues, by examples whose Lanczos approximations have errors comparable to the error bounds for all Lanczos steps.
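For illustration only (not the paper's examples), a bare-bones symmetric Lanczos iteration showing the fast convergence of the extreme Ritz value when the top eigenvalue is well separated; the test matrix and step count are arbitrary:

```python
import numpy as np

def lanczos(A, v, k):
    """k steps of symmetric Lanczos (no reorthogonalization): returns the
    k x k tridiagonal T whose extreme eigenvalues (Ritz values)
    approximate the extreme eigenvalues of A."""
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    v = v / np.linalg.norm(v)
    v_prev, b = np.zeros_like(v), 0.0
    for j in range(k):
        w = A @ v - b * v_prev           # three-term recurrence
        alpha[j] = v @ w
        w = w - alpha[j] * v
        if j < k - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            v_prev, v = v, w / b
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(1)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.concatenate([np.linspace(0.0, 1.0, n - 1), [2.0]])  # clear gap at the top
A = Q @ np.diag(evals) @ Q.T
T = lanczos(A, rng.standard_normal(n), 30)
ritz_max = np.linalg.eigvalsh(T)[-1]     # rapidly approaches 2.0 thanks to the gap
```

The gap-dependent rate in the classical bounds governs exactly this kind of experiment; the paper's contribution is showing those bounds are sharp.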
Word Clustering and Disambiguation Based on Co-occurrence Data. COLING-ACL, 1998
Cited by 2 (0 self)
The Lanczos method is often used to solve large, sparse symmetric matrix eigenvalue problems. It is well known that the single-vector Lanczos method can find only one copy of any multiple eigenvalue. To compute all or some of the copies of a multiple eigenvalue, one has to use the block Lanczos method, which is also known to compute clustered eigenvalues much faster than the single-vector Lanczos method. The existing convergence theory due to Saad for the block Lanczos method, however, does not fully reflect this phenomenon, because the theory was established to bound approximation errors in each individual approximate eigenpair. It is argued that, in the presence of an eigenvalue cluster, the entire approximate eigenspace associated with the cluster should be considered as a whole, instead of each individual approximate eigenvector, and likewise for approximating the cluster of eigenvalues. In this paper, we obtain error bounds on approximating eigenspaces and eigenvalue clusters. Our bounds are much sharper than the existing ones and expose more realistic rates of convergence of the block Lanczos method towards eigenvalue clusters. Numerical examples are presented to support our claims. A possible extension to the generalized eigenvalue problem is also outlined.
Convergence of the Block Lanczos Method For Eigenvalue Clusters, 2013
Cited by 1 (0 self)
The Lanczos method is often used to solve large-scale symmetric matrix eigenvalue problems. It is well known that the single-vector Lanczos method can find only one copy of any multiple eigenvalue and converges slowly towards clustered eigenvalues. The block Lanczos method, on the other hand, can compute all or some of the copies of a multiple eigenvalue and, with a suitable block size, also computes clustered eigenvalues much faster. The existing convergence theory due to Saad for the block Lanczos method, however, does not fully reflect this phenomenon, since the theory was established to bound approximation errors in each individual approximate eigenpair. Here it is argued that, in the presence of an eigenvalue cluster, the entire approximate eigenspace associated with the cluster should be considered as a whole, instead of each individual approximate eigenvector, and likewise for approximating clusters of eigenvalues. In this paper, we obtain error bounds on approximating eigenspaces and eigenvalue clusters. Our bounds are much sharper than the existing ones and expose true rates of convergence of the block Lanczos method towards eigenvalue clusters. Furthermore, their sharpness is independent of the closeness of eigenvalues within a cluster. Numerical examples are presented to support our claims. A possible extension to the generalized eigenvalue problem is also outlined.
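A sketch of the phenomenon (not from the paper): Rayleigh-Ritz on a block Krylov subspace, which in exact arithmetic yields the same Ritz values as block Lanczos, recovers both copies of a double eigenvalue, something a single starting vector cannot do. The test matrix, block size, and step count are arbitrary choices:

```python
import numpy as np

def block_krylov_ritz(A, X, steps):
    """Rayleigh-Ritz on the block Krylov subspace
    span{X, AX, ..., A^(steps-1) X}; in exact arithmetic this produces the
    same Ritz values as `steps` iterations of block Lanczos."""
    blocks = [X]
    for _ in range(steps - 1):
        blocks.append(A @ blocks[-1])
    Q, _ = np.linalg.qr(np.hstack(blocks))   # orthonormal basis of the subspace
    return np.linalg.eigvalsh(Q.T @ A @ Q)   # Ritz values, ascending

rng = np.random.default_rng(2)
n = 100
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.concatenate([np.linspace(0.0, 1.0, n - 2), [2.0, 2.0]])  # double eigenvalue
A = U @ np.diag(evals) @ U.T
ritz = block_krylov_ritz(A, rng.standard_normal((n, 2)), 8)
# with block size 2, BOTH copies of the multiple eigenvalue 2.0 are found
```

Measuring the error of the two-dimensional Ritz space against the true eigenspace, rather than vector by vector, is the cluster-level viewpoint the paper's bounds formalize.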
A spatially adaptive iterative method for a class of nonlinear operator eigenproblems (submitted to ETNA), 2012
Cited by 1 (1 self)
Abstract. We present a new algorithm for the iterative solution of nonlinear operator eigenvalue problems arising from partial differential equations (PDEs). This algorithm combines automatic spatial resolution of linear operators with the infinite Arnoldi method for nonlinear matrix eigenproblems proposed in [19]. The iterates in this infinite Arnoldi method are functions, and each iteration requires the solution of an inhomogeneous differential equation. This formulation is independent of the spatial representation of the functions, which allows us to employ a dynamic representation with an accuracy of about the level of machine precision at each iteration, similar to what is done in the Chebfun system [3] with its chebop functionality [12], although our function representation is entirely based on coefficients instead of function values. Our approach also allows for nonlinearities in the boundary conditions of the PDE. The algorithm is illustrated with several examples, e.g., the study of eigenvalues of a vibrating string with delayed boundary feedback control.
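The coefficient-based adaptive representation mentioned in the abstract can be sketched as follows (an illustrative approximation, not the authors' code): keep doubling the Chebyshev grid until the trailing coefficients are negligible, then truncate. The tolerance and grid sizes are arbitrary choices:

```python
import numpy as np

def cheb_coeffs(f, tol=1e-13, max_n=4096):
    """Represent f on [-1, 1] by Chebyshev coefficients, doubling the grid
    until the trailing coefficients fall below tol relative to the largest
    one (the 'resolve to machine precision' strategy used by Chebfun-style
    systems, here in coefficient space)."""
    n = 16
    while n <= max_n:
        x = np.cos(np.pi * np.arange(n + 1) / n)         # Chebyshev points
        c = np.polynomial.chebyshev.chebfit(x, f(x), n)  # interpolant's coefficients
        scale = np.max(np.abs(c))
        if np.all(np.abs(c[-2:]) < tol * scale):         # tail negligible: resolved
            keep = np.nonzero(np.abs(c) >= tol * scale)[0]
            return c[: keep[-1] + 1]                     # truncate the negligible tail
        n *= 2
    raise RuntimeError("function not resolved to tolerance")

c = cheb_coeffs(np.exp)                                  # smooth, so few coefficients
xx = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(np.polynomial.chebyshev.chebval(xx, c) - np.exp(xx)))
```

Because Arnoldi iterates in the paper's method are functions, each one can be stored this way, with the length of the coefficient vector adapting automatically to the function's smoothness.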