Results 1–10 of 46
Optimal scaling of generalized and polynomial eigenvalue problems
 SIAM J. Matrix Anal. Appl.
"... Abstract. Scaling is a commonly used technique for standard eigenvalue problems to improve the sensitivity of the eigenvalues. In this paper we investigate scaling for generalized and polynomial eigenvalue problems (PEPs) of arbitrary degree. It is shown that an optimal diagonal scaling of a PEP wit ..."
Abstract

Cited by 13 (5 self)
 Add to MetaCart
(Show Context)
Abstract. Scaling is a commonly used technique for standard eigenvalue problems to improve the sensitivity of the eigenvalues. In this paper we investigate scaling for generalized and polynomial eigenvalue problems (PEPs) of arbitrary degree. It is shown that an optimal diagonal scaling of a PEP with respect to an eigenvalue can be described by the ratio of its normwise and componentwise condition number. Furthermore, the effect of linearization on optimally scaled polynomials is investigated. We introduce a generalization of the diagonal scaling by Lemonnier and Van Dooren to PEPs that is especially effective if some information about the magnitude of the wanted eigenvalues is available and also discuss variable transformations of the type λ = αµ for PEPs of arbitrary degree.
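The variable transformation λ = αµ mentioned in the abstract can be sketched for a quadratic problem. The balancing choice of α below (matching the norms of the extreme coefficients) is an illustrative convention, not the optimal scaling derived in the paper:

```python
import numpy as np

def quadratic_eigs_with_parameter_scaling(A2, A1, A0):
    """Solve (lam^2 A2 + lam A1 + A0) x = 0 after the change of variable
    lam = alpha * mu, with alpha chosen to balance the extreme coefficient
    norms (an illustrative choice, not the paper's optimal scaling)."""
    alpha = np.sqrt(np.linalg.norm(A0, 2) / np.linalg.norm(A2, 2))
    B2, B1, B0 = alpha**2 * A2, alpha * A1, A0   # scaled coefficients in mu
    n = A0.shape[0]
    # First companion linearization of the scaled quadratic
    C = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.linalg.solve(B2, B0), -np.linalg.solve(B2, B1)]])
    mu = np.linalg.eigvals(C)
    return alpha * mu                            # map back: lam = alpha * mu
```

The returned values are eigenvalues of the original quadratic; the scaling only changes the conditioning of the intermediate linearized problem.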
Solving rational eigenvalue problems via linearization
, 2008
"... Abstract. Rational eigenvalue problem is an emerging class of nonlinear eigenvalue problems arising from a variety of physical applications. In this paper, we propose a linearizationbased method to solve the rational eigenvalue problem. The proposed method converts the rational eigenvalue problem i ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
(Show Context)
Abstract. The rational eigenvalue problem is an emerging class of nonlinear eigenvalue problems arising from a variety of physical applications. In this paper, we propose a linearization-based method to solve the rational eigenvalue problem. The proposed method converts the rational eigenvalue problem into a well-studied linear eigenvalue problem and, meanwhile, exploits and preserves the structure and properties of the original rational eigenvalue problem. For example, the low-rank property leads to a trimmed linearization. We show that solving a class of rational eigenvalue problems is just as convenient and efficient as solving linear eigenvalue problems.
Key words. Rational eigenvalue problem, linearization, nonlinear eigenvalue problem
AMS subject classifications. 65F15, 65F50, 15A18
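The low-rank trimming idea can be sketched on a simple rational problem. The concrete form of R(λ) and all names below are illustrative, not the paper's notation:

```python
import numpy as np

def trimmed_linearization_eigs(A, B, L, sigma):
    """Eigenvalues of the rational problem
        R(lam) x = (A - lam*B - L @ L.T / (sigma - lam)) x = 0.
    Setting y = L.T @ x / (sigma - lam) replaces the rank-k rational term
    by k extra unknowns, giving the linear pencil
        [[A, -L], [L.T, -sigma*I]] z = lam * [[B, 0], [0, -I]] z
    of size n + k instead of a much larger general linearization.
    Assumes B is nonsingular so the pencil reduces to standard form."""
    n, k = L.shape
    P0 = np.block([[A, -L], [L.T, -sigma * np.eye(k)]])
    P1 = np.block([[B, np.zeros((n, k))], [np.zeros((k, n)), -np.eye(k)]])
    return np.linalg.eigvals(np.linalg.solve(P1, P0))
```

For a scalar example, R(λ) = 2 − λ − 1/(4 − λ) has the roots of λ² − 6λ + 7, which the pencil reproduces.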
Fiedler companion linearizations for rectangular matrix polynomials
, 2011
"... The development of new classes of linearizations of square matrix polynomials that generalize the classical first and second Frobenius companion forms has attracted much attention in the last decade. Research in this area has two main goals: finding linearizations that retain whatever structure the ..."
Abstract

Cited by 10 (6 self)
 Add to MetaCart
The development of new classes of linearizations of square matrix polynomials that generalize the classical first and second Frobenius companion forms has attracted much attention in the last decade. Research in this area has two main goals: finding linearizations that retain whatever structure the original polynomial might possess, and improving properties that are essential for accurate numerical computation, such as eigenvalue condition numbers and backward errors. However, all recent progress on linearizations has been restricted to square matrix polynomials. Since rectangular polynomials arise in many applications, it is natural to investigate whether the new classes of linearizations can be extended to rectangular polynomials. In this paper, the family of Fiedler linearizations is extended from square to rectangular matrix polynomials, and it is shown that minimal indices and bases of polynomials can be recovered from those of any linearization in this class via the same simple procedures developed previously for square polynomials. Fiedler linearizations are one of the most important classes of linearizations introduced in recent years, but their generalization to rectangular polynomials is nontrivial and requires a completely different approach from the one used in the square case. To the best of our knowledge, this is the first class of new linearizations that has been generalized to rectangular polynomials.
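The classical first Frobenius companion form mentioned above (the simplest member of the Fiedler family, shown here for the square case) can be assembled directly from the coefficients:

```python
import numpy as np

def first_companion_pencil(coeffs):
    """First Frobenius companion linearization of the square matrix
    polynomial P(lam) = sum_i lam^i coeffs[i], coeffs = [A0, A1, ..., Ad].
    Returns (L1, L0) such that P's eigenvalues are those of lam*L1 + L0."""
    d = len(coeffs) - 1
    n = coeffs[0].shape[0]
    L1 = np.eye(d * n)
    L1[:n, :n] = coeffs[d]                       # leading block A_d
    L0 = np.zeros((d * n, d * n))
    # top block row: [A_{d-1}, ..., A_1, A_0]
    L0[:n, :] = np.hstack([coeffs[i] for i in range(d - 1, -1, -1)])
    L0[n:, :-n] -= np.eye((d - 1) * n)           # subdiagonal -I blocks
    return L1, L0
```

For the scalar cubic λ³ − 6λ² + 11λ − 6, the pencil's eigenvalues are 1, 2, 3, matching the polynomial's roots.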
Chebyshev interpolation for nonlinear eigenvalue problems
"... This work is concerned with numerical methods for matrix eigenvalue problems that are nonlinear in the eigenvalue parameter. In particular, we focus on eigenvalue problems for which the evaluation of the matrix valued function is computationally expensive. Such problems arise, e.g., from boundary in ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
(Show Context)
This work is concerned with numerical methods for matrix eigenvalue problems that are nonlinear in the eigenvalue parameter. In particular, we focus on eigenvalue problems for which the evaluation of the matrix-valued function is computationally expensive. Such problems arise, e.g., from boundary integral formulations of elliptic PDE eigenvalue problems and typically exclude the use of established nonlinear eigenvalue solvers. Instead, we propose the use of polynomial approximation combined with non-monomial linearizations. Our approach is intended for situations where the eigenvalues of interest are located on the real line or, more generally, on a prespecified curve in the complex plane. A first-order perturbation analysis for nonlinear eigenvalue problems is performed. Combined with an approximation result for Chebyshev interpolation, this shows exponential convergence of the obtained eigenvalue approximations with respect to the degree of the approximating polynomial. Preliminary numerical experiments demonstrate the viability of the approach in the context of boundary element methods.
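The approach can be illustrated on a scalar toy problem standing in for an expensive matrix-valued function: interpolate at Chebyshev points, then compute the interpolant's roots via its colleague matrix (a non-monomial linearization, which is what `roots()` uses for a Chebyshev series). The function and degree below are illustrative choices:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Scalar toy nonlinear eigenvalue problem: t(lam) = lam - cos(lam) = 0
# on [0, 1]; the interpolant's accuracy improves exponentially in deg.
t = lambda lam: lam - np.cos(lam)
p = Chebyshev.interpolate(t, deg=10, domain=[0.0, 1.0])
# Roots of the interpolant, computed from its colleague matrix; keep the
# real ones inside the interpolation domain.
real_roots = sorted(r.real for r in p.roots()
                    if abs(r.imag) < 1e-6 and 0.0 <= r.real <= 1.0)
```

Roots of the interpolant lying in the domain approximate the nonlinear problem's solutions there (here, the fixed point of cos near 0.7391); roots outside the domain are discarded as meaningless.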
Perturbation, Extraction and Refinement of Invariant Pairs for Matrix Polynomials
, 2010
"... Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefi ..."
Abstract

Cited by 8 (2 self)
 Add to MetaCart
(Show Context)
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
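The extraction idea can be sketched for a quadratic Q(λ) = λ²A₂ + λA₁ + A₀: an invariant pair (X, S) satisfies A₂XS² + A₁XS + A₀X = 0, and one can be read off from eigenpairs of a companion linearization, whose eigenvectors have the form [x; λx]. The matrices below are random for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A2, A1, A0 = np.eye(n), rng.standard_normal((n, n)), rng.standard_normal((n, n))

# Companion linearization of Q(lam) = lam^2 A2 + lam A1 + A0
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(A2, A0), -np.linalg.solve(A2, A1)]])
w, V = np.linalg.eig(C)

# Extract an invariant pair (X, S) of Q from k eigenpairs of the
# linearization: each eigenvector of C stacks [x; lam*x].
k = 3
X = V[:n, :k]
S = np.diag(w[:k])

# Invariant-pair residual: A2 X S^2 + A1 X S + A0 X should vanish
residual = A2 @ X @ S @ S + A1 @ X @ S + A0 @ X
```

The near-zero residual confirms that the top blocks of the linearization's eigenvectors, together with the eigenvalue block S, form an invariant pair of the original polynomial.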
Deflating quadratic matrix polynomials with structure preserving transformations
 Linear Algebra and its Applications
"... Given a pair of distinct eigenvalues (λ1, λ2) of an n×n quadratic matrix polynomial Q(λ) with nonsingular leading coefficient and their corresponding eigenvectors, we show how to transform Q(λ) into a quadratic of the form Qd(λ) ..."
Abstract

Cited by 8 (4 self)
 Add to MetaCart
(Show Context)
Given a pair of distinct eigenvalues (λ1, λ2) of an n×n quadratic matrix polynomial Q(λ) with nonsingular leading coefficient and their corresponding eigenvectors, we show how to transform Q(λ) into a quadratic of the form Qd(λ)
Graphical Krein signature theory and Evans–Krein functions
, 2013
"... Two concepts, evidently very different in nature, have proved to be useful in analytical and numerical studies of spectral stability in nonlinear wave theory: (i) the Krein signature of an eigenvalue, a quantity usually defined in terms of the relative orientation of certain subspaces that is capabl ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
(Show Context)
Two concepts, evidently very different in nature, have proved to be useful in analytical and numerical studies of spectral stability in nonlinear wave theory: (i) the Krein signature of an eigenvalue, a quantity usually defined in terms of the relative orientation of certain subspaces, which is capable of detecting the structural instability of imaginary eigenvalues and hence their potential for moving into the right half-plane, leading to dynamical instability under perturbation of the system; and (ii) the Evans function, an analytic function detecting the location of eigenvalues. One might expect these two concepts to be related, but unfortunately examples demonstrate that there is no way in general to deduce the Krein signature of an eigenvalue from the Evans function, for example by studying derivatives of the latter. The purpose of this paper is to recall and popularize a simple graphical interpretation of the Krein signature well-known in the spectral theory of polynomial operator pencils. Once established, this interpretation avoids altogether the need to view the Krein signature in terms of root subspaces and their relation to indefinite quadratic forms. To demonstrate the utility of this graphical interpretation of the Krein signature, we use it to define a simple generalization of the Evans function, the Evans–Krein function, that allows the calculation of Krein signatures in a way that is easy to incorporate into
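For the simplest pencil case, the Krein signature admits a direct formula: for a Hermitian pencil A − λB with B indefinite, the signature of a simple real eigenvalue λ with eigenvector x can be taken as sign(x*Bx), up to sign convention; graphically, this is the slope sign of the eigenvalue branch. A minimal diagonal example (matrices chosen purely for illustration):

```python
import numpy as np

A = np.diag([1.0, 2.0])
B = np.diag([1.0, -1.0])
# Solve A x = lam B x (B is invertible here)
lam, X = np.linalg.eig(np.linalg.solve(B, A))
# Krein signature of each simple real eigenvalue: sign(x^* B x)
signatures = {round(l.real, 6): np.sign((x.conj() @ B @ x).real)
              for l, x in zip(lam, X.T)}
```

Here the eigenvalue 1 carries positive signature and −2 carries negative signature, reflecting the opposite signs of B on the two eigendirections.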
An improved arc algorithm for detecting definite Hermitian pairs
, 2008
"... Abstract. A 25year old and somewhat neglected algorithm of Crawford and Moon attempts to determine whether a given Hermitian matrix pair (A, B) is definite by exploring the range of the function f(x) = x ∗ (A + iB)x/x ∗ (A + iB)x, which is a subset of the unit circle. We revisit the algorithm an ..."
Abstract

Cited by 6 (2 self)
 Add to MetaCart
(Show Context)
Abstract. A 25-year-old and somewhat neglected algorithm of Crawford and Moon attempts to determine whether a given Hermitian matrix pair (A, B) is definite by exploring the range of the function f(x) = x∗(A + iB)x/|x∗(A + iB)x|, which is a subset of the unit circle. We revisit the algorithm and show that with suitable modifications and careful attention to implementation details it provides a reliable and efficient means of testing definiteness. A clearer derivation of the basic algorithm is given that emphasizes an arc expansion viewpoint and makes no assumptions about the definiteness of the pair. Convergence of the algorithm is proved for all (A, B), definite or not. It is shown that proper handling of three details of the algorithm is crucial to efficiency and reliability: how the midpoint of an arc is computed, whether shrinkage of an arc is permitted, and how directions of negative curvature are computed. For the latter, several variants of Cholesky factorization with complete pivoting are explored and the benefits of pivoting demonstrated. The overall cost of our improved algorithm is typically just a few Cholesky factorizations. Applications of the algorithm are described to testing the hyperbolicity of a Hermitian quadratic matrix polynomial, constructing conjugate gradient methods for sparse linear systems in saddle point form, and computing the Crawford number of the pair (A, B) via a quasiconvex univariate minimization problem.
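A crude stand-in for the arc algorithm conveys the underlying characterization: a Hermitian pair (A, B) is definite if and only if some real combination cos(t)·A + sin(t)·B is positive definite. The sketch below simply samples t and attempts a Cholesky factorization; the arc algorithm replaces this sampling with a provably convergent arc expansion:

```python
import numpy as np

def is_definite_pair(A, B, samples=360):
    """Sampling-based definiteness test for a Hermitian pair (A, B):
    try Cholesky on cos(t)*A + sin(t)*B over a grid of angles t.
    Illustrative sketch only -- it may miss narrow definite arcs,
    which is exactly the failure mode the arc algorithm avoids."""
    for t in np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False):
        M = np.cos(t) * A + np.sin(t) * B
        try:
            np.linalg.cholesky(M)   # succeeds iff M is positive definite
            return True
        except np.linalg.LinAlgError:
            pass
    return False
```

For example, (I, diag(1, −1)) is definite (take t = 0), whereas (diag(1, −1), diag(1, −1)) is not: every combination has diagonal entries of opposite sign.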