Results 1–10 of 23
The conditioning of linearizations of matrix polynomials
 Manchester Institute for Mathematical Sciences, The University of Manchester
, 2005
Cited by 54 (21 self)
Abstract. The standard way of solving the polynomial eigenvalue problem of degree m in n×n matrices is to “linearize” to a pencil in mn×mn matrices and solve the generalized eigenvalue problem. For a given polynomial P, infinitely many linearizations exist and they can have widely varying eigenvalue condition numbers. We investigate the conditioning of linearizations from a vector space DL(P) of pencils recently identified and studied by Mackey, Mackey, Mehl, and Mehrmann. We look for the best conditioned linearization and compare the conditioning with that of the original polynomial. Two particular pencils are shown always to be almost optimal over linearizations in DL(P) for eigenvalues of modulus greater than or less than 1, respectively, provided that the problem is not too badly scaled and that the pencils are linearizations. Moreover, under this scaling assumption, these pencils are shown to be about as well conditioned as the original polynomial. For quadratic eigenvalue problems that are not too heavily damped, a simple scaling is shown to convert the problem to one that is well scaled. We also analyze the eigenvalue conditioning of the widely used first and second companion linearizations. The conditioning of the first companion linearization relative to that of P is shown to depend on the coefficient matrix norms, the eigenvalue, and the left eigenvectors of the linearization and of P. The companion form is found to be potentially much more …
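The linearization step this abstract refers to is easy to sketch. The following Python/NumPy fragment (an illustrative sketch, not code from the paper; function and variable names are my own) builds the first companion pencil L(λ) = λX + Y of a quadratic λ²M + λD + K and solves the resulting generalized eigenproblem:

```python
import numpy as np
from scipy.linalg import eig

def companion_quadratic(M, D, K):
    """First companion pencil L(lam) = lam*X + Y of lam^2 M + lam D + K."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    X = np.block([[M, Z], [Z, I]])
    Y = np.block([[D, K], [-I, Z]])
    return X, Y

# Example: lam^2 I - diag(1, 4) has eigenvalues +-1 and +-2.
M = np.eye(2)
D = np.zeros((2, 2))
K = -np.diag([1.0, 4.0])
X, Y = companion_quadratic(M, D, K)
lam = eig(-Y, X, right=False)   # L(lam) z = 0  <=>  lam X z = -Y z
print(np.sort(lam.real))        # eigenvalues +-1 and +-2, up to rounding
```

With z = [λx; x] one checks L(λ)z = [P(λ)x; 0], which is why the 2n×2n pencil carries the eigenvalues of P.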
Backward error of polynomial eigenproblems solved by linearization
 Manchester Institute for Mathematical Sciences, The University of Manchester
, 2006
Cited by 43 (11 self)
The most widely used approach for solving the polynomial eigenvalue problem P(λ)x = (∑_{i=0}^{m} λ^i A_i)x = 0 in n×n matrices A_i is to linearize to produce a larger order pencil L(λ) = λX + Y, whose eigensystem is then found by any method for generalized eigenproblems. For a given polynomial P, infinitely many linearizations L exist and approximate eigenpairs of P computed via linearization can have widely varying backward errors. We show that if a certain one-sided factorization relating L to P can be found then a simple formula permits recovery of right eigenvectors of P from those of L, and the backward error of an approximate eigenpair of P can be bounded in terms of the backward error for the corresponding approximate eigenpair of L. A similar factorization has the same implications for left eigenvectors. We use this technique to derive backward error bounds depending only on the norms of the A_i for the companion pencils and for the vector space DL(P) of pencils recently identified by Mackey, Mackey, Mehl, and Mehrmann. In all cases, sufficient conditions are identified for an optimal backward error for P. These results are shown to be entirely consistent with those of Higham, Mackey, and Tisseur on the conditioning of linearizations of P. Other contributions of this work are a block scaling of the companion pencils …
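The eigenvector recovery and backward-error bounding described above can be illustrated for the quadratic case. The sketch below (assumptions: Python with NumPy/SciPy, the standard normwise backward-error formula for the QEP; names are my own, and this is not the paper's code) recovers right eigenvectors of P from the bottom block of the companion-pencil eigenvectors and evaluates their backward errors:

```python
import numpy as np
from scipy.linalg import eig, norm

def quadratic_backward_error(M, D, K, lam, x):
    """Normwise backward error of an approximate eigenpair (x, lam) of
    lam^2 M + lam D + K: residual norm divided by
    (|lam|^2 ||M|| + |lam| ||D|| + ||K||) * ||x||."""
    r = (lam**2 * M + lam * D + K) @ x
    denom = (abs(lam)**2 * norm(M, 2) + abs(lam) * norm(D, 2) + norm(K, 2)) * norm(x)
    return norm(r) / denom

rng = np.random.default_rng(0)
n = 5
M, D, K = (rng.standard_normal((n, n)) for _ in range(3))
I, Z = np.eye(n), np.zeros((n, n))
X = np.block([[M, Z], [Z, I]])      # first companion pencil lam*X + Y
Y = np.block([[D, K], [-I, Z]])
lams, V = eig(-Y, X)
# For the first companion form z = [lam*x; x], so the bottom n entries of
# each computed eigenvector recover a right eigenvector of P.
errs = [quadratic_backward_error(M, D, K, lams[j], V[n:, j]) for j in range(2 * n)]
print(max(errs))   # small multiple of unit roundoff for this well-scaled problem
```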
Scaling, sensitivity and stability in the numerical solution of quadratic eigenvalue problems
 Internat. J. Numer. Methods Eng
, 2006
Cited by 22 (9 self)
The most common way of solving the quadratic eigenvalue problem (QEP) (λ²M + λD + K)x = 0 is to convert it into a linear problem (λX + Y)z = 0 of twice the dimension and solve the linear problem by the QZ algorithm or a Krylov method. In doing so, it is important to understand the influence of the linearization process on the accuracy and stability of the computed solution. We discuss these issues for three particular linearizations: the standard companion linearization and two linearizations that preserve symmetry in the problem. For illustration we employ a model QEP describing the motion of a beam simply supported at both ends and damped at the midpoint. We show that the above linearizations lead to poor numerical results for the beam problem, but that a two-parameter scaling proposed by Fan, Lin and Van Dooren cures the instabilities. We also show that half of the eigenvalues of the beam QEP are pure imaginary and are eigenvalues of the undamped problem. Our analysis makes use of recently developed theory explaining the sensitivity and stability of linearizations, the main conclusions of which are summarized. As well as arguing that scaling should routinely be used, we give guidance on how to choose a linearization and illustrate the practical value of condition numbers and backward errors. Key words: quadratic eigenvalue problem, sensitivity, condition number, backward error, stability, …
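The Fan, Lin and Van Dooren scaling mentioned above has a simple closed form: with γ = √(‖K‖₂/‖M‖₂) and δ = 2/(‖K‖₂ + γ‖D‖₂), substitute λ = γμ and multiply the polynomial through by δ. A minimal Python sketch (function name and the example data are my own):

```python
import numpy as np

def fan_lin_van_dooren_scaling(M, D, K):
    """Two-parameter scaling for (lam^2 M + lam D + K)x = 0: substitute
    lam = gamma*mu and multiply through by delta, giving coefficients
    (gamma^2*delta*M, gamma*delta*D, delta*K) whose 2-norms are close to 1
    unless the problem is heavily damped."""
    nM, nD, nK = (np.linalg.norm(A, 2) for A in (M, D, K))
    gamma = np.sqrt(nK / nM)
    delta = 2.0 / (nK + gamma * nD)
    return gamma**2 * delta * M, gamma * delta * D, delta * K, gamma

# Badly scaled example: coefficient norms spanning twelve orders of magnitude.
M = 1e-6 * np.eye(3)
D = np.eye(3)
K = 1e6 * np.eye(3)
M2, D2, K2, gamma = fan_lin_van_dooren_scaling(M, D, K)
print([float(np.linalg.norm(A, 2)) for A in (M2, D2, K2)])  # all ~1 after scaling
```

Eigenvalues of the scaled problem are μ = λ/γ, so they must be multiplied back by γ after the solve.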
Detecting and solving hyperbolic quadratic eigenvalue problems
, 2007
Optimal scaling of generalized and polynomial eigenvalue problems
 SIAM J. Matrix Anal. Appl
Cited by 13 (5 self)
Abstract. Scaling is a commonly used technique for standard eigenvalue problems to improve the sensitivity of the eigenvalues. In this paper we investigate scaling for generalized and polynomial eigenvalue problems (PEPs) of arbitrary degree. It is shown that an optimal diagonal scaling of a PEP with respect to an eigenvalue can be described by the ratio of its normwise and componentwise condition numbers. Furthermore, the effect of linearization on optimally scaled polynomials is investigated. We introduce a generalization of the diagonal scaling by Lemonnier and Van Dooren to PEPs that is especially effective if some information about the magnitude of the wanted eigenvalues is available, and we also discuss variable transformations of the type λ = αµ for PEPs of arbitrary degree.
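The variable transformation λ = αµ mentioned at the end acts on the coefficients in a simple way: Q(µ) = P(αµ) has coefficients αⁱAᵢ. A quick Python check of that identity (illustrative only; names and the random example are my own):

```python
import numpy as np

def transform(coeffs, alpha):
    """lam = alpha*mu for P(lam) = sum_i lam^i A_i: the transformed
    polynomial Q(mu) = P(alpha*mu) has coefficients alpha^i * A_i."""
    return [alpha**i * A for i, A in enumerate(coeffs)]

rng = np.random.default_rng(1)
coeffs = [rng.standard_normal((3, 3)) for _ in range(4)]  # degree-3 PEP
alpha, mu = 10.0, 0.37
Q = transform(coeffs, alpha)
P_at = sum((alpha * mu)**i * A for i, A in enumerate(coeffs))  # P(alpha*mu)
Q_at = sum(mu**i * B for i, B in enumerate(Q))                 # Q(mu)
print(np.allclose(P_at, Q_at))  # -> True: eigenvalues simply rescale by alpha
```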
Perturbation, Extraction and Refinement of Invariant Pairs for Matrix Polynomials
, 2010
Cited by 8 (2 self)
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
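The invariant-pair concept can be stated concretely: (X, S) is an invariant pair of P(λ) = ∑ᵢ λⁱAᵢ when ∑ᵢ AᵢXSⁱ = 0, which reduces to Ax = λx when X is a single eigenvector and S a scalar. A small Python sketch of the defining residual (illustrative; not the paper's extraction or refinement algorithms):

```python
import numpy as np

def invariant_pair_residual(coeffs, X, S):
    """Residual norm of the invariant-pair equation sum_i A_i X S^i for
    P(lam) = sum_i lam^i A_i, with coeffs = [A0, A1, ..., Am]."""
    R = np.zeros(X.shape, dtype=complex)
    Si = np.eye(S.shape[0], dtype=complex)
    for A in coeffs:
        R = R + A @ X @ Si
        Si = Si @ S
    return np.linalg.norm(R)

# lam^2 I - diag(1, 4): eigenpairs (e1, 1) and (e2, 2) assembled into a pair.
M, D, K = np.eye(2), np.zeros((2, 2)), -np.diag([1.0, 4.0])
X = np.eye(2)             # columns are the eigenvectors e1, e2
S = np.diag([1.0, 2.0])   # their eigenvalues on the diagonal
res = invariant_pair_residual([K, D, M], X, S)
print(res)  # -> 0.0
```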
An Algorithm for the Complete Solution of Quadratic Eigenvalue Problems
, 2012
Cited by 6 (2 self)
We develop a new algorithm for the computation of all the eigenvalues and optionally the right and left eigenvectors of dense quadratic matrix polynomials. It incorporates scaling of the problem parameters prior to the computation of eigenvalues, a choice of linearization with favorable conditioning and backward stability properties, and a preprocessing step that reveals and deflates the zero and infinite eigenvalues contributed by singular leading and trailing matrix coefficients. The algorithm is backward stable for quadratics that are not too heavily damped. Numerical experiments show that our MATLAB implementation of the algorithm, quadeig, outperforms the MATLAB function polyeig in terms of both stability and efficiency.
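The deflation step mentioned in the abstract targets eigenvalues whose origin is easy to demonstrate: a rank-deficient trailing coefficient K contributes zero eigenvalues and a rank-deficient leading coefficient M contributes infinite ones. A Python illustration of where they appear (this is not quadeig's deflation procedure; the example data and names are my own):

```python
import numpy as np
from scipy.linalg import eig

n = 3
M = np.diag([1.0, 1.0, 0.0])   # singular leading coefficient -> an infinite eigenvalue
D = np.eye(n)
K = np.diag([0.0, 2.0, 3.0])   # singular trailing coefficient -> a zero eigenvalue
I, Z = np.eye(n), np.zeros((n, n))
X = np.block([[M, Z], [Z, I]])          # first companion pencil lam*X + Y
Y = np.block([[D, K], [-I, Z]])
lam = eig(-Y, X, right=False)
n_zero = int(np.sum(np.abs(lam) < 1e-6))
n_inf = int(np.sum(np.isinf(lam) | (np.abs(lam) > 1e6)))
print(n_zero, n_inf)   # one zero and one infinite eigenvalue
```

Deflating these beforehand, as quadeig does, leaves QZ to work on the well-behaved finite nonzero spectrum.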
Tropical Scaling of Polynomial Matrices
, 2009
Cited by 5 (2 self)
Abstract. The eigenvalues of a matrix polynomial can be determined classically by solving a generalized eigenproblem for a linearized matrix pencil, for instance by writing the matrix polynomial in companion form. We introduce a general scaling technique, based on tropical algebra, which applies in particular to this companion form. This scaling, which is inspired by an earlier work of Akian, Bapat, and Gaubert, relies on the computation of “tropical roots”. We give explicit bounds, in a typical case, indicating that these roots provide accurate estimates of the order of magnitude of the different eigenvalues, and we show by experiments that this scaling improves the accuracy (measured by normwise backward error) of the computations, particularly in situations in which the data have various orders of magnitude. In the case of quadratic polynomial matrices, we recover in this way a scaling due to Fan, Lin, and Van Dooren, which coincides with the tropical scaling when the two tropical roots are equal. If not, the eigenvalues generally split into two groups, and the tropical method leads to a separate scaling for each of the groups.
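For a quadratic with coefficient norms aᵢ = ‖Aᵢ‖, the tropical roots of max(a₀, a₁x, a₂x²) have a closed form: two distinct roots a₀/a₁ and a₁/a₂ when a₁² ≥ a₀a₂, otherwise a double root √(a₀/a₂), the case in which the abstract says the tropical scaling coincides with Fan, Lin and Van Dooren's. A Python sketch (illustrative; function name is my own):

```python
import numpy as np

def tropical_roots_quadratic(a0, a1, a2):
    """Tropical roots of the max-times polynomial max(a0, a1*x, a2*x**2),
    where a_i = ||A_i|| for a quadratic eigenvalue problem.  The roots
    estimate the magnitudes of the small and large eigenvalue groups."""
    if a1**2 >= a0 * a2:
        return a0 / a1, a1 / a2   # two distinct tropical roots
    r = np.sqrt(a0 / a2)          # double tropical root
    return r, r

# Coefficient norms of very different magnitudes: the two tropical roots
# estimate the orders of magnitude of the small and large eigenvalues.
print(tropical_roots_quadratic(1e-8, 1.0, 1e2))  # -> (1e-08, 0.01)
```

For the scalar quadratic 10²λ² + λ + 10⁻⁸ the actual roots are about −10⁻⁸ and −0.01, matching the tropical estimates.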