Krylov Projection Methods For Model Reduction
, 1997
"... This dissertation focuses on efficiently forming reducedorder models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reducedorder models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov p ..."
Abstract

Cited by 213 (3 self)
 Add to MetaCart
(Show Context)
This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools is also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding the need for exact factors of large matrix pencils are all examined to various degrees.
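The Krylov-projection idea behind such reduced-order models can be sketched in a few lines of NumPy. This is a minimal illustration, not the dissertation's algorithms: the test system (A, b, c) is an arbitrary stable stand-in, and the projection matches moments only at the single point s = 0, whereas rational interpolants match at several frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 50, 4

# A hypothetical stable SISO test system (illustrative, not from the source).
M = rng.standard_normal((n, n))
A = -(M @ M.T) - n * np.eye(n)      # symmetric negative definite -> stable
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Orthonormal basis V of the Krylov subspace
# K_q(A^{-1}, A^{-1} b) = span{A^{-1}b, ..., A^{-q}b}.
K = np.empty((n, q))
v = np.linalg.solve(A, b)
for j in range(q):
    K[:, j] = v
    v = np.linalg.solve(A, v)
V, _ = np.linalg.qr(K)

# Orthogonal (Galerkin) projection gives the reduced-order model.
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c

def moments(Am, bm, cm, k):
    """First k moments c^T A^{-j} b (j = 1..k) of H(s) = c^T (sI - A)^{-1} b at s = 0."""
    x, out = bm, []
    for _ in range(k):
        x = np.linalg.solve(Am, x)
        out.append(cm @ x)
    return np.array(out)

m_full = moments(A, b, c, q)
m_red = moments(Ar, br, cr, q)
print(np.max(np.abs(m_full - m_red)))   # tiny: the first q moments agree
```

The reduced q-by-q model reproduces the first q moments of the full transfer function, which is the interpolation property the dissertation generalizes to multiple expansion points.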
A Spectral Bundle Method for Semidefinite Programming
 SIAM JOURNAL ON OPTIMIZATION
, 1997
"... A central drawback of primaldual interior point methods for semidefinite programs is their lack of ability to exploit problem structure in cost and coefficient matrices. This restricts applicability to problems of small dimension. Typically semidefinite relaxations arising in combinatorial applica ..."
Abstract

Cited by 171 (7 self)
 Add to MetaCart
(Show Context)
A central drawback of primal-dual interior point methods for semidefinite programs is their inability to exploit problem structure in cost and coefficient matrices. This restricts applicability to problems of small dimension. Typically, semidefinite relaxations arising in combinatorial applications have sparse and well-structured cost and coefficient matrices of huge order. We present a method that makes it possible to compute acceptable approximations to the optimal solution of large problems within reasonable time. Semidefinite programming problems with constant trace on the primal feasible set are equivalent to eigenvalue optimization problems. These are convex nonsmooth programming problems and can be solved by bundle methods. We propose replacing the traditional polyhedral cutting plane model constructed from subgradient information by a semidefinite model that is tailored for eigenvalue problems. Convergence follows from the traditional approach, but a proof is included for completeness.
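The eigenvalue-optimization objective the abstract refers to can be made concrete with a toy sketch. This uses a plain diminishing-step subgradient method, not the paper's semidefinite bundle model, and the matrices C, A_i and vector b are arbitrary illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3

# Hypothetical symmetric data (stand-ins for a constant-trace SDP's dual data).
sym = lambda X: (X + X.T) / 2
C = sym(rng.standard_normal((n, n)))
As = [sym(rng.standard_normal((n, n))) for _ in range(m)]
b = rng.standard_normal(m)

def f(y):
    """Nonsmooth convex objective: lambda_max(C - sum_i y_i A_i) + b^T y."""
    S = C - sum(yi * Ai for yi, Ai in zip(y, As))
    return np.linalg.eigvalsh(S)[-1] + b @ y

def subgrad(y):
    """A subgradient: g_i = b_i - v^T A_i v, with v a unit top eigenvector."""
    S = C - sum(yi * Ai for yi, Ai in zip(y, As))
    _, U = np.linalg.eigh(S)
    v = U[:, -1]
    return np.array([bi - v @ Ai @ v for bi, Ai in zip(b, As)])

# Diminishing-step subgradient descent; a bundle method would instead build
# a model of f from many such subgradients.
y = np.zeros(m)
best = f(y)
for k in range(1, 200):
    y = y - (0.5 / k) * subgrad(y)
    best = min(best, f(y))
print(best)
```

The subgradient inequality f(z) >= f(y) + g(y)^T (z - y) holds for every z, which is what makes bundle and subgradient schemes applicable to this nonsmooth objective.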
A Jacobi-Davidson Iteration Method for Linear Eigenvalue Problems
 SIAM J. Matrix Anal. Appl
, 2000
"... . In this paper we propose a new method for the iterative computation of a few of the extremal eigenvalues of a symmetric matrix and their associated eigenvectors. The method is based on an old and almost unknown method of Jacobi. Jacobi's approach, combined with Davidson's method, leads t ..."
Abstract

Cited by 96 (9 self)
 Add to MetaCart
(Show Context)
In this paper we propose a new method for the iterative computation of a few of the extremal eigenvalues of a symmetric matrix and their associated eigenvectors. The method is based on an old and almost unknown method of Jacobi. Jacobi's approach, combined with Davidson's method, leads to a new method that has improved convergence properties and that may be used for general matrices. We also propose a variant of the new method that may be useful for the computation of nonextremal eigenvalues as well.
Key words. eigenvalues and eigenvectors, Davidson's method, Jacobi iterations, harmonic Ritz values
AMS subject classifications. 65F15, 65N25
PII. S0036144599363084
1. Introduction. Suppose we want to compute one or more eigenvalues and their corresponding eigenvectors of the n × n matrix A. Several iterative methods are available: Jacobi's diagonalization method [9], [23], the power method [9], the method of Lanczos [13], [23], Arnoldi's method [1], [26], and Davidson's method [4], ...
A geometric theory for preconditioned inverse iteration III: A short and sharp convergence estimate for generalized eigenvalue problems
, 2003
"... In two previous papers by Neymeyr [Linear Algebra Appl. 322 (13) (2001) 61; 322 (1 3) (2001) 87], a sharp, but cumbersome, convergence rate estimate was proved for a simple preconditioned eigensolver, which computes the smallest eigenvalue together with the corresponding eigenvector of a symmetr ..."
Abstract

Cited by 49 (13 self)
 Add to MetaCart
In two previous papers by Neymeyr [Linear Algebra Appl. 322 (1-3) (2001) 61; 322 (1-3) (2001) 87], a sharp, but cumbersome, convergence rate estimate was proved for a simple preconditioned eigensolver, which computes the smallest eigenvalue together with the corresponding eigenvector of a symmetric positive definite matrix, using a preconditioned gradient minimization of the Rayleigh quotient. In the present paper, we discover and prove a much shorter and more elegant (but still sharp in the decisive quantities) convergence rate estimate of the same method that also holds for a generalized symmetric definite eigenvalue problem. The new estimate is simple enough to stimulate a search for a more straightforward proof technique that could be helpful to investigate such a practically important method as the locally optimal block preconditioned conjugate gradient eigensolver.
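The "simple preconditioned eigensolver" analyzed in these papers is preconditioned inverse iteration, x <- x - T(Ax - lambda(x)x) with T approximating A^{-1}. A minimal sketch, using a 1D Laplacian as the test matrix and its Jacobi (diagonal inverse) preconditioner (both choices are illustrative):

```python
import numpy as np

n = 20
# 1D Laplacian (tridiagonal SPD test matrix) and its Jacobi preconditioner.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
T = np.diag(1.0 / np.diag(A))        # T ~ A^{-1}; here simply D^{-1}

rng = np.random.default_rng(3)
x = rng.standard_normal(n)

# Preconditioned gradient iteration on the Rayleigh quotient:
#   x <- x - T (A x - lambda(x) x),  lambda(x) = (x^T A x) / (x^T x).
for _ in range(3000):
    x /= np.linalg.norm(x)
    lam = x @ A @ x
    x = x - T @ (A @ x - lam * x)

x /= np.linalg.norm(x)
lam = x @ A @ x
lam_min = np.linalg.eigvalsh(A)[0]
print(lam - lam_min)   # small: the iterates approach the smallest eigenpair
```

The convergence estimates discussed in the abstract bound exactly how fast lam decreases toward lam_min per step, in terms of the preconditioner quality and the gap to the second eigenvalue.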
A subspace preconditioning algorithm for eigenvector/eigenvalue computation
 Adv. Comput. Math
, 1996
"... We consider the problem of computing a modest number of the smallest eigenvalues along with orthogonal bases for the corresponding eigenspaces of a symmetric positive definite operator A defined on a finite dimensional real Hilbert space V. In our applications, the dimension of V is large and the co ..."
Abstract

Cited by 42 (6 self)
 Add to MetaCart
(Show Context)
We consider the problem of computing a modest number of the smallest eigenvalues along with orthogonal bases for the corresponding eigenspaces of a symmetric positive definite operator A defined on a finite-dimensional real Hilbert space V. In our applications, the dimension of V is large and the cost of inverting A is prohibitive. In this paper, we develop an effective parallelizable technique for computing these eigenvalues and eigenvectors, utilizing subspace iteration and preconditioning for A. Estimates are provided which show that the preconditioned method converges linearly when used with a uniform preconditioner, under the assumption that the approximating subspace is close enough to the span of the desired eigenvectors.
1. Introduction. In this paper, we shall be concerned with computing a modest number of the smallest eigenvalues and their corresponding eigenvectors of a large symmetric ill-conditioned system. More explicitly, let A be a symmetric and positive definite linear operator on an N-dimensional real vector space V with inner product (·, ·)
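A block version of the preconditioned subspace iteration just described can be sketched as follows: correct every column with the preconditioner, re-orthonormalize, and extract approximations by Rayleigh-Ritz. The Laplacian test operator and Jacobi preconditioner are illustrative stand-ins:

```python
import numpy as np

n, p = 20, 3
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD test operator
T = np.diag(1.0 / np.diag(A))                          # simple preconditioner

rng = np.random.default_rng(4)
X, _ = np.linalg.qr(rng.standard_normal((n, p)))

for _ in range(5000):
    # Rayleigh-Ritz on span(X): rotate X to Ritz vectors, Ritz values in lam.
    H = X.T @ A @ X
    lam, S = np.linalg.eigh(H)
    X = X @ S
    # Preconditioned correction of every column, then re-orthonormalize.
    R = A @ X - X * lam          # block residual, columns A x_i - lam_i x_i
    X, _ = np.linalg.qr(X - T @ R)

lam = np.linalg.eigvalsh(X.T @ A @ X)
print(np.abs(lam - np.linalg.eigvalsh(A)[:p]).max())   # p smallest eigenvalues
```

The QR factorization keeps the block orthonormal, which is what makes the scheme a subspace iteration rather than p independent vector iterations.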
Matrix transformations for computing rightmost eigenvalues of large sparse nonsymmetric eigenvalue problems
 IMA J. Numer. Anal
, 1996
"... This paper gives an overview of matrix transformations for finding rightmost eigenvalues of Ax = kx and Ax = kBx with A and B real nonsymmetric and B possibly singular. The aim is not to present new material, but to introduce the reader to the application of matrix transformations to the solution o ..."
Abstract

Cited by 33 (7 self)
 Add to MetaCart
This paper gives an overview of matrix transformations for finding rightmost eigenvalues of Ax = λx and Ax = λBx with A and B real nonsymmetric and B possibly singular. The aim is not to present new material, but to introduce the reader to the application of matrix transformations to the solution of large-scale eigenvalue problems. The paper explains and discusses the use of Chebyshev polynomials and the shift-invert and Cayley transforms as matrix transformations for problems that arise from the discretization of partial differential equations. A few other techniques are described. The reliability of iterative methods is also dealt with by introducing the concept of a domain of confidence or trust region. This overview gives the reader an idea of the benefits and the drawbacks of several transformation techniques. We also briefly discuss the current software.
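The shift-invert transform mentioned above maps eigenvalues λ of Ax = λBx to μ = 1/(λ - σ), so eigenvalues near the shift σ become dominant and easy for an iterative method to find. A dense sketch with small arbitrary test matrices (the shift σ = 2 and the matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 12
A = rng.standard_normal((n, n))                       # nonsymmetric test matrix
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))     # nonsingular "mass" matrix
sigma = 2.0                                           # shift near the wanted part

# Shift-invert: eigenvalues mu of C = (A - sigma B)^{-1} B satisfy
# mu = 1 / (lambda - sigma), so lambdas nearest the shift become dominant.
C = np.linalg.solve(A - sigma * B, B)
mu = np.linalg.eigvals(C)
recovered = sigma + 1.0 / mu          # map back to the original spectrum

direct = np.linalg.eigvals(np.linalg.solve(B, A))     # eigenvalues of A x = lambda B x
err = max(np.abs(direct - r).min() for r in recovered)
print(err)   # small: the transform is a bijection of the spectrum
```

For rightmost eigenvalues of PDE discretizations one places σ near the imaginary axis; the Cayley transform plays a similar role but maps a half-plane rather than a disk.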
An inexact inverse iteration for large sparse eigenvalue problems
 Numerical Linear Algebra with Applications
, 1997
"... In this paper, we propose an inexact inverse iteration method for the computation of the eigenvalue with the smallest modulus and its associated eigenvector for a large sparse matrix. The linear systems of the traditional inverse iteration are solved with accuracy that depends on the eigenvalue with ..."
Abstract

Cited by 32 (3 self)
 Add to MetaCart
(Show Context)
In this paper, we propose an inexact inverse iteration method for the computation of the eigenvalue with the smallest modulus, and its associated eigenvector, of a large sparse matrix. The linear systems of the traditional inverse iteration are solved to an accuracy that depends on the eigenvalue with the second-smallest modulus and on the iteration number. We prove that this approach preserves the linear convergence of inverse iteration. We also propose two practical formulas for the accuracy bound, which are used in actual implementation.
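Inexact inverse iteration replaces the exact solve of A y = x by an iterative inner solve stopped at a modest tolerance. A self-contained sketch using a hand-rolled conjugate gradient inner solver on a Laplacian test matrix; the tolerance schedule (tightening with the outer iteration number) is an illustrative choice in the spirit of the accuracy bounds described above, not the paper's formulas:

```python
import numpy as np

def cg(A, b, tol, maxit=500):
    """Plain conjugate gradients, stopped at a *relative* residual tol --
    this is the inexact inner solve."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for _ in range(maxit):
        if np.sqrt(rs) <= tol * b_norm:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD test matrix
rng = np.random.default_rng(6)
x = rng.standard_normal(n)
x /= np.linalg.norm(x)

# Outer loop: inverse iteration with inner solves of decreasing tolerance.
for k in range(50):
    y = cg(A, x, tol=max(1e-12, 1e-2 * 0.5**k))   # inexact solve of A y = x
    x = y / np.linalg.norm(y)

lam = x @ A @ x
print(abs(lam - np.linalg.eigvalsh(A)[0]))
```

The point of the analysis is that the inner tolerance can stay loose early on without destroying the outer iteration's linear convergence, which saves most of the inner-solver work.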
Inexact Inverse Iterations for the Generalized Eigenvalue Problems
 BIT
, 1999
"... In this paper, we study an inexact inverse iteration with innerouter iterations for solving the generalized eigenvalue problem Ax = Bx; and analyze how the accuracy in the inner iterations affects the convergence of the outer iterations. By considering a special stopping criterion depending on a th ..."
Abstract

Cited by 31 (0 self)
 Add to MetaCart
(Show Context)
In this paper, we study an inexact inverse iteration with inner-outer iterations for solving the generalized eigenvalue problem Ax = λBx, and analyze how the accuracy of the inner iterations affects the convergence of the outer iterations. By considering a special stopping criterion depending on a threshold parameter, we show that the outer iteration converges linearly with the threshold parameter as the convergence rate. We also discuss the total amount of work and the asymptotic equivalence between this stopping criterion and a more standard one. Numerical examples are given to illustrate the theoretical results.
1. Introduction. The shift-and-invert transformation is a major preconditioning technique for most methods used for solving large matrix eigenvalue problems. It is usually used in combination with an iterative method such as the power method (inverse iteration), subspace iteration, the Lanczos algorithm, and the Arnoldi algorithm (see [3, 9]). It also lies at the center of the re...
Harmonic Projection Methods for Large Nonsymmetric Eigenvalue Problems
 NUMER. LINEAR ALGEBRA APPL., 5, 33–55 (1998)
, 1998
"... The problem of finding interior eigenvalues of a large nonsymmetric matrix is examined. A procedure for extracting approximate eigenpairs from a subspace is discussed. It is related to the Rayleigh–Ritz procedure, but is designed for finding interior eigenvalues. Harmonic Ritz values and other appro ..."
Abstract

Cited by 27 (8 self)
 Add to MetaCart
The problem of finding interior eigenvalues of a large nonsymmetric matrix is examined. A procedure for extracting approximate eigenpairs from a subspace is discussed. It is related to the Rayleigh–Ritz procedure, but is designed for finding interior eigenvalues. Harmonic Ritz values and other approximate eigenvalues are generated. This procedure can be applied to the Arnoldi method, to preconditioning methods, and to other methods for nonsymmetric eigenvalue problems that use the Rayleigh–Ritz procedure. The subject of estimating the boundary of the entire spectrum is briefly discussed, and the importance of preconditioning for interior eigenvalue problems is mentioned.
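The harmonic Ritz extraction referred to above replaces the ordinary Rayleigh-Ritz projection by one that targets eigenvalues near a shift σ: with W = (A - σI)V one solves W^T W g = (θ - σ) W^T V g. A minimal sketch; for illustration the subspace V is taken to be an exact invariant subspace of a matrix with known spectrum (in practice V would come from Arnoldi or a preconditioned method), so the harmonic Ritz values recover the targeted eigenvalues exactly:

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 30, 4
eigs = np.linspace(-5, 5, n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T            # symmetric matrix with known spectrum
sigma = 0.3                            # target interior eigenvalues near sigma

# An illustrative subspace V: the exact invariant subspace belonging to the
# k eigenvalues closest to sigma, randomly rotated within itself.
idx = np.argsort(np.abs(eigs - sigma))[:k]
V, _ = np.linalg.qr(Q[:, idx] @ rng.standard_normal((k, k)))

# Harmonic Ritz extraction with respect to the target sigma:
#   W = (A - sigma I) V,  solve  W^T W g = (theta - sigma) W^T V g.
W = (A - sigma * np.eye(n)) @ V
vals = np.linalg.eigvals(np.linalg.solve(W.T @ V, W.T @ W))
harmonic = np.sort(sigma + vals.real)

print(np.abs(harmonic - np.sort(eigs[idx])).max())  # exact on an invariant subspace
```

Unlike ordinary Ritz values, which tend to approximate exterior eigenvalues first, the harmonic values are designed so that spurious approximations cannot masquerade as eigenvalues near the interior target σ.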