Results 1–10 of 181
Pseudospectra of linear operators
SIAM Rev., 1997
Cited by 154 (10 self)
Abstract. If a matrix or linear operator A is far from normal, its eigenvalues or, more generally, its spectrum may have little to do with its behavior as measured by quantities such as ‖A^n‖ or ‖exp(tA)‖. More may be learned by examining the sets in the complex plane known as the pseudospectra of A, defined by level curves of the norm of the resolvent, ‖(zI − A)^{−1}‖. Five years ago, the author published a paper that presented computed pseudospectra of thirteen highly nonnormal matrices arising in various applications. Since that time, analogous computations have been carried out for differential and integral operators. This paper, a companion to the earlier one, presents ten examples, each chosen to illustrate one or more mathematical or physical principles.
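The level-curve definition above translates directly into a computation: ‖(zI − A)^{−1}‖₂ is the reciprocal of the smallest singular value of zI − A, so the pseudospectra are sublevel sets of that singular value over a grid in the complex plane. A minimal sketch, assuming a small dense matrix (the grid, example matrix, and function name are ours, not the paper's):

```python
import numpy as np

def resolvent_norm_grid(A, re, im):
    """Evaluate ||(zI - A)^{-1}||_2 = 1/sigma_min(zI - A) on a grid."""
    n = A.shape[0]
    I = np.eye(n)
    R = np.empty((len(im), len(re)))
    for i, y in enumerate(im):
        for j, x in enumerate(re):
            smin = np.linalg.svd(complex(x, y) * I - A, compute_uv=False)[-1]
            R[i, j] = 1.0 / smin
    return R

# A classic highly nonnormal example: an 8x8 nilpotent Jordan block.
A = np.diag(np.ones(7), 1)
re = np.linspace(-1.5, 1.5, 30)   # 30 points, so z = 0 (the eigenvalue) is skipped
im = np.linspace(-1.5, 1.5, 30)
R = resolvent_norm_grid(A, re, im)
# Contour lines of R are pseudospectra boundaries; R is enormous in a whole
# neighborhood of z = 0 even though 0 is the only eigenvalue.
```

Contouring R at levels 1/ε then plots the ε-pseudospectra.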
A Block Algorithm for Matrix 1-Norm Estimation, with an Application to 1-Norm Pseudospectra
SIAM J. Matrix Anal. Appl., 2000
Cited by 49 (23 self)
Abstract. The matrix 1-norm estimation algorithm used in LAPACK and various other software libraries and packages has proved to be a valuable tool. However, it has the limitations that it offers the user no control over the accuracy and reliability of the estimate and that it is based on level 2 BLAS operations. A block generalization of the 1-norm power method underlying the estimator is derived here and developed into a practical algorithm applicable to both real and complex matrices. The algorithm works with n × t matrices, where t is a parameter. For t = 1 the original algorithm is recovered, but with two improvements (one for real matrices and one for complex matrices). The accuracy and reliability of the estimates generally increase with t, and the computational kernels are level 3 BLAS operations for t > 1. The last t − 1 columns of the starting matrix are randomly chosen, giving the algorithm a statistical flavor. As a byproduct of our investigations we identify a matrix for which the 1-norm power method takes the maximum number of iterations. As an application of the new estimator we show how it can be used to efficiently approximate 1-norm pseudospectra.
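The t = 1 estimator the paper generalizes is Hager's 1-norm power method. A rough sketch of that base algorithm, assuming our own simplified variant (no blocking, no randomized columns, names ours):

```python
import numpy as np

def onenorm_estimate(A, maxiter=10):
    """Lower-bound estimate of ||A||_1 via Hager's power method (t = 1 case)."""
    n = A.shape[0]
    x = np.ones(n) / n                  # start with ||x||_1 = 1
    est = 0.0
    for _ in range(maxiter):
        y = A @ x
        est = np.abs(y).sum()           # ||Ax||_1 <= ||A||_1 since ||x||_1 = 1
        xi = np.sign(y)
        xi[xi == 0] = 1.0
        z = A.T @ xi                    # subgradient-like step
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:       # no more promising unit vector: stop
            break
        x = np.zeros(n)
        x[j] = 1.0                      # restart from the best e_j
    return est

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
est = onenorm_estimate(A)
exact = np.abs(A).sum(axis=0).max()     # the true 1-norm: maximum column sum
```

Because every iterate x has unit 1-norm, the estimate is always a lower bound on ‖A‖₁; the paper's block version (n × t iterates) trades extra matrix-matrix work for better accuracy and level 3 BLAS.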
Calculation Of Pseudospectra By The Arnoldi Iteration
1996
Cited by 41 (5 self)
The Arnoldi iteration, usually viewed as a method for calculating eigenvalues, can also be used to estimate pseudospectra. This possibility may be of practical importance, for in applications involving highly nonnormal matrices or operators, such as hydrodynamic stability, pseudospectra may be physically more significant than spectra.
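For reference, a bare-bones Arnoldi iteration (our own sketch, not the paper's code): it builds an orthonormal Krylov basis Q and a small upper Hessenberg matrix H, and the paper's idea is that the pseudospectra of H approximate those of the large matrix A.

```python
import numpy as np

def arnoldi(A, v, k):
    """k steps of Arnoldi: returns Q (n, k+1) with orthonormal columns and
    H (k+1, k) upper Hessenberg with A @ Q[:, :k] = Q @ H up to roundoff."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]    # assumes no breakdown (H[j+1,j] > 0)
    return Q, H

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
Q, H = arnoldi(A, rng.standard_normal(100), 10)
residual = np.linalg.norm(A @ Q[:, :10] - Q @ H)
# Pseudospectra of the small rectangular H then approximate those of A
# from inside, at the cost of SVDs of a (k+1) x k matrix per grid point.
```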
Structured low-rank approximation and its applications
2007
Cited by 32 (13 self)
Fitting data by a bounded complexity linear model is equivalent to low-rank approximation of a matrix constructed from the data. The data matrix being Hankel structured is equivalent to the existence of a linear time-invariant system that fits the data, and the rank constraint is related to a bound on the model complexity. In the special case of fitting by a static model, the data matrix and its low-rank approximation are unstructured. We outline applications in system theory (approximate realization, model reduction, output error and errors-in-variables identification), signal processing (harmonic retrieval, sum-of-damped-exponentials and finite impulse response modeling), and computer algebra (approximate common divisor). Algorithms based on the variable projections and alternating projections methods are presented. Generalizations of the low-rank approximation problem result from different approximation criteria (e.g., weighted norm), constraints on the data matrix (e.g., nonnegativity), and data structures (e.g., kernel mapping). Related problems are rank minimization and structured pseudospectra.
Keywords: low-rank approximation, total least squares, system identification, errors-in-variables modeling, behaviors.
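For the unstructured special case the abstract mentions, the optimal low-rank approximation is the truncated SVD (Eckart–Young); the structured (e.g. Hankel-constrained) problem is what requires the iterative methods of the paper. A minimal sketch of the unstructured case on synthetic data (all names and data ours):

```python
import numpy as np

def lowrank(D, r):
    """Best rank-r approximation of D in the 2-norm/Frobenius norm."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(2)
# Exactly rank-3 data plus small noise.
D = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 60))
D_noisy = D + 1e-3 * rng.standard_normal(D.shape)
Dhat = lowrank(D_noisy, 3)
err = np.linalg.norm(D - Dhat) / np.linalg.norm(D)   # relative recovery error
```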
Pseudospectra of the convection-diffusion operator
SIAM J. Appl. Math., 1994
Cited by 30 (7 self)
Abstract. The spectrum of the simplest 1D convection-diffusion operator is a discrete subset of the negative real axis, but the pseudospectra are regions in the complex plane bounded approximately by parabolas. Put another way, the norm of the resolvent is exponentially large as a function of the Péclet number throughout a certain approximately parabolic region. These observations have a simple physical basis and suggest that conventional spectral analysis for convection-diffusion operators may be of limited value in some applications.
Key words: convection-diffusion operator, Péclet number, pseudospectra
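The effect is easy to reproduce numerically. Below is our own toy centered-difference discretization of u'' + Pe·u' on (0, 1) with zero boundary conditions (not the paper's setup): every eigenvalue is real and negative, yet at a complex point well inside the parabolic region the resolvent norm is orders of magnitude larger than the distance to the spectrum would suggest for a normal operator.

```python
import numpy as np

# Centered differences for u'' + Pe*u' on (0, 1), u(0) = u(1) = 0.
n, Pe = 200, 40.0
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) / h**2
     + np.diag(np.ones(n - 1), 1) * (1.0 / h**2 + Pe / (2 * h))
     + np.diag(np.ones(n - 1), -1) * (1.0 / h**2 - Pe / (2 * h)))

eigs = np.linalg.eigvals(A)      # real and negative, up to rounding

# A complex point inside the parabolic region but far from the spectrum.
z = complex(-350.0, 10.0)
smin = np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]
resolvent_norm = 1.0 / smin
# For a normal matrix this would equal 1/dist(z, spectrum), here about 0.02;
# the nonnormal convection-diffusion matrix gives a vastly larger value.
```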
Network Properties Revealed Through Matrix Functions
2008
Cited by 30 (3 self)
The newly emerging field of Network Science deals with the tasks of modelling, comparing and summarizing large data sets that describe complex interactions. Because pairwise affinity data can be stored in a two-dimensional array, graph theory and applied linear algebra provide extremely useful tools. Here, we focus on the general concepts of centrality, communicability and betweenness, each of which quantifies important features in a network. Some recent work in the mathematical physics literature has shown that the exponential of a network’s adjacency matrix can be used as the basis for defining and computing specific versions of these measures. We introduce here a general class of measures based on matrix functions, and show that a particular case involving a matrix resolvent arises naturally from graph-theoretic arguments. We also point out connections between these measures and the quantities typically computed when spectral methods are used for data mining tasks such as clustering and ordering. We finish with computational examples showing the new matrix resolvent version applied to real networks.
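A small illustration of the two kinds of matrix function mentioned, assuming a toy undirected graph of our own: subgraph centrality from the diagonal of exp(A), and a Katz-style resolvent measure (I − αA)^{−1}·1.

```python
import numpy as np

# Small undirected graph: path 0-1-2-3-4 plus edge 0-2 (so 0-1-2 is a triangle).
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2)]:
    A[i, j] = A[j, i] = 1.0

# exp(A) via the eigendecomposition of the symmetric adjacency matrix.
w, V = np.linalg.eigh(A)
expA = (V * np.exp(w)) @ V.T
subgraph_centrality = np.diag(expA)   # factorially weighted closed-walk counts

# Resolvent-based measure; alpha below 1/spectral-radius keeps I - alpha*A invertible.
alpha = 0.5 / np.abs(w).max()
katz = np.linalg.solve(np.eye(5) - alpha * A, np.ones(5))
```

Both measures rank node 2 highest here: it has the largest degree and sits in the triangle, so it participates in the most short closed walks.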
Convergence of Restarted Krylov Subspaces to Invariant Subspaces
SIAM J. Matrix Anal. Appl., 2001
Cited by 28 (4 self)
The performance of Krylov subspace eigenvalue algorithms for large matrices can be measured by the angle between a desired invariant subspace and the Krylov subspace. We develop general bounds for this convergence that include the effects of polynomial restarting and impose no restrictions concerning the diagonalizability of the matrix or its degree of nonnormality. Associated with a desired set of eigenvalues is a maximum "reachable invariant subspace" that can be developed from the given starting vector. Convergence for this distinguished subspace is bounded in terms involving a polynomial approximation problem. Elementary results from potential theory lead to convergence rate estimates and suggest restarting strategies based on optimal approximation points (e.g., Leja or Chebyshev points); exact shifts are evaluated within this framework. Computational examples illustrate the utility of these results. Origins of superlinear effects are also described.
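The convergence measure the paper bounds can be sketched directly: build a Krylov basis, then compute the principal angle to a desired invariant subspace. The example below is ours (a symmetric matrix with one well-separated eigenvalue, no restarting) and just shows the angle shrinking as the Krylov dimension grows.

```python
import numpy as np

def krylov_basis(A, v, k):
    """Orthonormal basis of the Krylov subspace span{v, Av, ..., A^(k-1) v}."""
    K = np.column_stack([np.linalg.matrix_power(A, j) @ v for j in range(k)])
    Q, _ = np.linalg.qr(K)
    return Q

def largest_principal_angle(Q1, Q2):
    """Largest principal angle between the column spaces of Q1 and Q2."""
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.arccos(np.clip(s.min(), -1.0, 1.0))

rng = np.random.default_rng(3)
n = 80
A = np.diag(np.linspace(0.0, 1.0, n))
A[-1, -1] = 2.0                        # one well-separated eigenvalue
x = np.zeros((n, 1)); x[-1, 0] = 1.0   # its invariant subspace
v = rng.standard_normal(n)             # generic starting vector

angles = [largest_principal_angle(krylov_basis(A, v, k), x) for k in (2, 6, 12)]
```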
Low-Rank Tensor Krylov Subspace Methods for Parametrized Linear Systems
2010
Cited by 25 (3 self)
We consider linear systems A(α)x(α) = b(α) depending on possibly many parameters α = (α1,...,αp). Solving these systems simultaneously for a standard discretization of the parameter space would require a computational effort growing exponentially in the number of parameters. We show that this curse of dimensionality can be avoided for sufficiently smooth parameter dependencies. For this purpose, computational methods are developed that benefit from the fact that x(α) can be well approximated by a tensor of low rank. In particular, low-rank tensor variants of short-recurrence Krylov subspace methods are presented. Numerical experiments for deterministic PDEs with parametrized coefficients and stochastic elliptic PDEs demonstrate the effectiveness of our approach.
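The low-rank structure the method exploits is easy to observe for a single parameter: for a smooth family A(α) = A0 + αA1 (our own toy example), the matrix whose columns are the solutions x(α) over a parameter grid has rapidly decaying singular values.

```python
import numpy as np

# Toy parametrized family A(alpha) = A0 + alpha*A1, well conditioned on [0, 1].
n = 100
rng = np.random.default_rng(4)
A0 = 2.0 * np.eye(n) + rng.standard_normal((n, n)) / (4.0 * np.sqrt(n))
A1 = rng.standard_normal((n, n)) / (4.0 * np.sqrt(n))
b = rng.standard_normal(n)

alphas = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.linalg.solve(A0 + a * A1, b) for a in alphas])

# Smooth dependence on alpha => rapidly decaying singular values of X, so the
# whole solution family is well approximated at low rank -- which is what the
# tensor Krylov methods exploit instead of solving on the full grid.
s = np.linalg.svd(X, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))
```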