Results 1–10 of 43
Symmetric linearizations for matrix polynomials
 SIAM J. Matrix Anal. Appl.
, 2006
Cited by 54 (19 self)
A standard way of treating the polynomial eigenvalue problem P(λ)x = 0 is to convert it into an equivalent matrix pencil—a process known as linearization. Two vector spaces of pencils L1(P) and L2(P), and their intersection DL(P), have recently been defined and studied by Mackey, Mackey, Mehl, and Mehrmann. The aim of our work is to gain new insight into these spaces and the extent to which their constituent pencils inherit structure from P. For arbitrary polynomials we show that every pencil in DL(P) is block symmetric and we obtain a convenient basis for DL(P) built from block Hankel matrices. This basis is then exploited to prove that the first deg(P) pencils in a sequence constructed by Lancaster in the 1960s generate DL(P). When P is symmetric, we show that the symmetric pencils in L1(P) comprise DL(P), while for Hermitian P the Hermitian pencils in L1(P) form a proper subset of DL(P) that we explicitly characterize. Almost all pencils in each of these subsets are shown to be linearizations. In addition to obtaining new results, this work provides a self-contained treatment of some of the key properties of DL(P) together with some new, more concise proofs.
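The block symmetry of DL(P) pencils described in this abstract is easy to check numerically for a quadratic. Below is a minimal numpy sketch (not code from the paper) using the standard DL(P) pencil associated with the ansatz vector v = e1, namely L(λ) = λ·diag(A2, −A0) + [[A1, A0], [A0, 0]]; the coefficient matrices are made-up examples:

```python
import numpy as np

def dl_pencil_e1(A2, A1, A0):
    """DL(P) pencil for the quadratic P(lam) = lam^2*A2 + lam*A1 + A0
    with ansatz vector v = e1; returns (X, Y) so that L(lam) = lam*X + Y."""
    n = A2.shape[0]
    Z = np.zeros((n, n))
    X = np.block([[A2, Z], [Z, -A0]])
    Y = np.block([[A1, A0], [A0, Z]])
    return X, Y

# Hypothetical 2x2 symmetric coefficients, for illustration only.
A2 = np.array([[2.0, 0.0], [0.0, 1.0]])
A1 = np.array([[3.0, 1.0], [1.0, 4.0]])
A0 = np.array([[1.0, 0.5], [0.5, 2.0]])

X, Y = dl_pencil_e1(A2, A1, A0)

# Symmetric coefficients give a symmetric (hence block-symmetric) pencil.
assert np.allclose(X, X.T) and np.allclose(Y, Y.T)

# Linearization identity: L(lam) applied to (lam*x ; x) gives (P(lam)*x ; 0).
lam = 0.7
x = np.array([1.0, -2.0])
z = np.concatenate([lam * x, x])
lhs = (lam * X + Y) @ z
P_lam_x = (lam**2 * A2 + lam * A1 + A0) @ x
assert np.allclose(lhs[:2], P_lam_x) and np.allclose(lhs[2:], 0.0)
print("pencil is symmetric and satisfies the linearization identity")
```

The same identity holds blockwise for any coefficients, which is the sense in which every pencil in DL(P) is block symmetric.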
Linearizations of singular matrix polynomials and the recovery of minimal indices
, 2009
Cited by 23 (10 self)
A standard way of dealing with a regular matrix polynomial P(λ) is to convert it into an equivalent matrix pencil – a process known as linearization. Two vector spaces of pencils L1(P) and L2(P) that generalize the first and second companion forms have recently been introduced by Mackey, Mackey, Mehl and Mehrmann. Almost all of these pencils are linearizations for P(λ) when P is regular. The goal of this work is to show that most of the pencils in L1(P) and L2(P) are still linearizations when P(λ) is a singular square matrix polynomial, and that these linearizations can be used to obtain the complete eigenstructure of P(λ), comprising not only the finite and infinite eigenvalues but also, for singular polynomials, the left and right minimal indices and minimal bases. We show explicitly how to recover the minimal indices and bases of the polynomial P(λ) from the minimal indices and bases of linearizations in L1(P) and L2(P). As a consequence of the recovery formulae for minimal indices, we prove that the vector space DL(P) = L1(P) ∩ L2(P) will never contain any linearization for a square singular polynomial P(λ). Finally, the results are extended to other linearizations of singular polynomials defined in terms of more general polynomial bases.
Scaling, sensitivity and stability in the numerical solution of quadratic eigenvalue problems
 Internat. J. Numer. Methods Eng.
, 2006
Cited by 22 (9 self)
The most common way of solving the quadratic eigenvalue problem (QEP) (λ²M + λD + K)x = 0 is to convert it into a linear problem (λX + Y)z = 0 of twice the dimension and solve the linear problem by the QZ algorithm or a Krylov method. In doing so, it is important to understand the influence of the linearization process on the accuracy and stability of the computed solution. We discuss these issues for three particular linearizations: the standard companion linearization and two linearizations that preserve symmetry in the problem. For illustration we employ a model QEP describing the motion of a beam simply supported at both ends and damped at the midpoint. We show that the above linearizations lead to poor numerical results for the beam problem, but that a two-parameter scaling proposed by Fan, Lin and Van Dooren cures the instabilities. We also show that half of the eigenvalues of the beam QEP are pure imaginary and are eigenvalues of the undamped problem. Our analysis makes use of recently developed theory explaining the sensitivity and stability of linearizations, the main conclusions of which are summarized. As well as arguing that scaling should routinely be used, we give guidance on how to choose a linearization and illustrate the practical value of condition numbers and backward errors.
Key words: quadratic eigenvalue problem, sensitivity, condition number, backward error, stability
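The companion linearization and the Fan–Lin–Van Dooren scaling referred to in this abstract can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' code: the scaling parameters follow the commonly quoted formulas γ = √(‖K‖/‖M‖), δ = 2/(‖K‖ + γ‖D‖), and the test matrices below are made up:

```python
import numpy as np

def solve_qep(M, D, K):
    """Solve (lam^2 M + lam D + K) x = 0 via the first companion
    linearization, assuming M is invertible. Returns the 2n eigenvalues."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    B = np.block([[M, Z], [Z, I]])        # the pencil is lam*B - A
    A = np.block([[-D, -K], [I, Z]])
    return np.linalg.eigvals(np.linalg.solve(B, A))

def flv_scale(M, D, K):
    """Two-parameter scaling of Fan, Lin and Van Dooren (as usually stated):
    substitute lam = gamma*mu and multiply the polynomial by delta, with
    gamma = sqrt(||K||/||M||) and delta = 2/(||K|| + gamma*||D||)."""
    nM, nD, nK = (np.linalg.norm(W, 2) for W in (M, D, K))
    gamma = np.sqrt(nK / nM)
    delta = 2.0 / (nK + gamma * nD)
    return gamma**2 * delta * M, gamma * delta * D, delta * K, gamma

# Decoupled illustrative QEP with known eigenvalues {-1, -2, -2, -3}.
M = np.eye(2)
D = np.diag([3.0, 5.0])
K = np.diag([2.0, 6.0])

lams = np.sort(solve_qep(M, D, K).real)
print(lams)  # approximately [-3, -2, -2, -1]

# The scaled problem has eigenvalues mu with lam = gamma*mu, since the
# scaled polynomial is delta * P(gamma*mu).
Ms, Ds, Ks, gamma = flv_scale(M, D, K)
mus = solve_qep(Ms, Ds, Ks)
print(np.sort((gamma * mus).real))  # same eigenvalues as before
```

In practice one would use a generalized eigensolver (QZ) on the pencil (A, B) rather than forming B⁻¹A explicitly; the explicit solve is only for brevity here.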
Detecting and solving hyperbolic quadratic eigenvalue problems
, 2007
Fiedler companion linearizations and the recovery of minimal indices
, 2010
Cited by 19 (11 self)
A standard way of dealing with a matrix polynomial P(λ) is to convert it into an equivalent matrix pencil – a process known as linearization. For any regular matrix polynomial, a new family of linearizations generalizing the classical first and second Frobenius companion forms has recently been introduced by Antoniou and Vologiannidis, extending some linearizations previously defined by Fiedler for scalar polynomials. We prove that these pencils are linearizations even when P(λ) is a singular square matrix polynomial, and show explicitly how to recover the left and right minimal indices and minimal bases of the polynomial P(λ) from the minimal indices and bases of these linearizations. In addition, we provide a simple way to recover the eigenvectors of a regular polynomial from those of any of these linearizations, without any computational cost. The existence of an eigenvector recovery procedure is essential for a linearization to be relevant for applications.
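The Fiedler family mentioned in this abstract is built from simple block factors M_0, …, M_k; multiplying M_0, …, M_{k−1} in different orders yields different pencils λM_k − M_σ that all share the eigenvalues of P. The numpy sketch below follows the Antoniou–Vologiannidis construction as commonly stated (this is my own illustration, not the paper's code), checked on a scalar cubic with roots 1, 2, 3:

```python
import numpy as np

def fiedler_factor(coeffs, i):
    """The i-th Fiedler factor M_i for coeffs = [A_0, ..., A_k]
    (Antoniou-Vologiannidis construction, as commonly stated)."""
    k = len(coeffs) - 1
    n = coeffs[0].shape[0]
    M = np.eye(k * n)
    if i == 0:
        M[-n:, -n:] = -coeffs[0]
    elif i == k:
        M[:n, :n] = coeffs[k]
    else:
        r = (k - 1 - i) * n   # top block row of the 2x2 window for M_i
        M[r:r + 2*n, r:r + 2*n] = np.block(
            [[-coeffs[i], np.eye(n)], [np.eye(n), np.zeros((n, n))]])
    return M

def fiedler_pencil(coeffs, order):
    """Return (X, Y) for the pencil lam*X - Y, where X = M_k and
    Y = M_{order[0]} @ ... @ M_{order[-1]} for a permutation of 0..k-1.
    order = (k-1, ..., 1, 0) recovers the first companion form."""
    k = len(coeffs) - 1
    prod = np.eye(k * coeffs[0].shape[0])
    for i in order:
        prod = prod @ fiedler_factor(coeffs, i)
    return fiedler_factor(coeffs, k), prod

# Scalar cubic (lam-1)(lam-2)(lam-3) = lam^3 - 6 lam^2 + 11 lam - 6,
# written with 1x1 coefficient matrices [A_0, A_1, A_2, A_3].
coeffs = [np.array([[c]]) for c in (-6.0, 11.0, -6.0, 1.0)]

for order in [(2, 1, 0), (0, 1, 2), (1, 2, 0)]:
    X, Y = fiedler_pencil(coeffs, order)
    lams = np.sort(np.linalg.eigvals(np.linalg.solve(X, Y)).real)
    print(order, lams)  # each order gives approximately [1, 2, 3]
```

Different orders produce genuinely different pencils (different block patterns), yet each is a linearization of the same polynomial, which is the starting point of the recovery results in the paper.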
Definite matrix polynomials and their linearization by definite pencils
 Manchester Institute for Mathematical Sciences, The University of Manchester
, 2008
Cited by 14 (7 self)
Hyperbolic matrix polynomials are an important class of Hermitian matrix polynomials that contain overdamped quadratics as a special case. They share with definite pencils the spectral property that their eigenvalues are real and semisimple. We extend the definition of hyperbolic matrix polynomial in a way that relaxes the requirement of definiteness of the leading coefficient matrix, yielding what we call definite polynomials. We show that this class of polynomials has an elegant characterization in terms of definiteness intervals on the extended real line, and that it includes definite pencils as a special case. A fundamental question is whether a definite matrix polynomial P can be linearized in a structure-preserving way. We show that the answer to this question is affirmative: P is definite if and only if it has a definite linearization in H(P), a certain vector space of Hermitian pencils; and for definite P we give a complete characterization of all the linearizations in H(P) that are definite. For the important special case of quadratics, we show how a definite quadratic polynomial can be transformed into a definite linearization with a positive definite leading coefficient matrix—a form that is particularly attractive numerically.
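One practical consequence of the definiteness-interval characterization is that finding a single real μ at which P(μ) is (positive or negative) definite certifies definiteness. The crude grid-scan sketch below illustrates this idea only; it is not the paper's algorithm, and the overdamped test quadratic is made up:

```python
import numpy as np

def is_definite_matrix(A):
    """True if the Hermitian matrix A is positive or negative definite,
    tested via Cholesky (which raises for non-positive-definite input)."""
    for S in (A, -A):
        try:
            np.linalg.cholesky(S)
            return True
        except np.linalg.LinAlgError:
            pass
    return False

def find_definite_point(coeffs, grid):
    """Scan a grid of real mu for a point where P(mu) = sum_i mu^i A_i
    is definite -- a sufficient certificate that P is definite
    (illustrative sketch only, not the paper's method)."""
    for mu in grid:
        P = sum(mu**i * A for i, A in enumerate(coeffs))
        if is_definite_matrix(P):
            return mu
    return None

# Overdamped (hence hyperbolic, hence definite) illustrative quadratic:
# P(mu) = (mu^2 + 10*mu + 1) * I, negative definite between its roots.
n = 3
M, D, K = np.eye(n), 10.0 * np.eye(n), np.eye(n)

mu = find_definite_point([K, D, M], np.linspace(-9.0, -0.2, 50))
print(mu)  # prints -9.0: P(-9) = -8*I is negative definite
```

A grid scan can of course miss a narrow definiteness interval; the paper's characterization is what makes a reliable detection procedure possible.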
Optimal scaling of generalized and polynomial eigenvalue problems
 SIAM J. Matrix Anal. Appl.
Cited by 13 (5 self)
Scaling is a commonly used technique for standard eigenvalue problems to improve the sensitivity of the eigenvalues. In this paper we investigate scaling for generalized and polynomial eigenvalue problems (PEPs) of arbitrary degree. It is shown that an optimal diagonal scaling of a PEP with respect to an eigenvalue can be described by the ratio of its normwise and componentwise condition number. Furthermore, the effect of linearization on optimally scaled polynomials is investigated. We introduce a generalization of the diagonal scaling by Lemonnier and Van Dooren to PEPs that is especially effective if some information about the magnitude of the wanted eigenvalues is available and also discuss variable transformations of the type λ = αµ for PEPs of arbitrary degree.
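The variable transformation λ = αμ mentioned at the end of this abstract is simple to state: substituting λ = αμ into P(λ) = Σᵢ λⁱAᵢ gives a polynomial in μ with coefficients αⁱAᵢ, so eigenvalues of magnitude about α are mapped near 1. A minimal numpy sketch (my own illustration, with made-up 1x1 coefficients):

```python
import numpy as np

def scale_pep(coeffs, alpha):
    """Variable transformation lam = alpha*mu for P(lam) = sum_i lam^i A_i:
    the transformed polynomial has coefficients alpha^i * A_i, and its
    eigenvalues mu satisfy lam = alpha * mu."""
    return [alpha**i * A for i, A in enumerate(coeffs)]

def pep_eigvals(coeffs):
    """Eigenvalues of P via the block companion matrix of the
    monic-equivalent polynomial inv(A_k) * P, assuming A_k is invertible.
    coeffs = [A_0, ..., A_k], each n-by-n."""
    k = len(coeffs) - 1
    n = coeffs[0].shape[0]
    Ak_inv = np.linalg.inv(coeffs[k])
    C = np.zeros((k * n, k * n))
    C[n:, :-n] = np.eye((k - 1) * n)          # subdiagonal identity blocks
    for i in range(k):                        # top block row: -inv(A_k) A_j
        C[:n, i*n:(i+1)*n] = -Ak_inv @ coeffs[k - 1 - i]
    return np.linalg.eigvals(C)

# Quadratic with roots 100 and 200: lam^2 - 300 lam + 20000
# (1x1 coefficient matrices for illustration).
coeffs = [np.array([[20000.0]]), np.array([[-300.0]]), np.array([[1.0]])]
alpha = 100.0

mus = pep_eigvals(scale_pep(coeffs, alpha))   # eigenvalues near 1 and 2
lams = np.sort((alpha * mus).real)
print(lams)  # approximately [100, 200]
```

This is only the change of variable; the paper's contribution is choosing α (and the diagonal scalings) optimally with respect to condition numbers.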
Structured Hölder condition numbers for multiple eigenvalues
, 2006
Cited by 12 (3 self)
The sensitivity of a multiple eigenvalue of a matrix under perturbations can be measured by its Hölder condition number. Various extensions of this concept are considered. A meaningful notion of structured Hölder condition numbers is introduced and it is shown that many existing results on structured condition numbers for simple eigenvalues carry over to multiple eigenvalues. The structures investigated in more detail include real, Toeplitz, Hankel, symmetric, skew-symmetric, Hamiltonian, and skew-Hamiltonian matrices. Furthermore, unstructured and structured Hölder condition numbers for multiple eigenvalues of matrix pencils are introduced. Particular attention is given to symmetric/skew-symmetric, Hermitian and palindromic pencils. It is also shown how matrix polynomial eigenvalue problems can be covered within this framework.
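For context, the simple-eigenvalue condition number that the Hölder condition numbers generalize is κ(λ) = ‖y‖‖x‖/|y*x| for right/left eigenvectors x, y. A numpy sketch of this classical quantity (my own illustration, not from the paper), using the fact that for A = XΛX⁻¹ the rows of X⁻¹ supply left eigenvectors normalized so that y*x = 1:

```python
import numpy as np

def eig_condition_numbers(A):
    """Unstructured condition numbers kappa(lam) = ||y|| ||x|| / |y^* x|
    of the (assumed simple) eigenvalues of A. With A = X L inv(X), the
    j-th row of inv(X) gives a left eigenvector with y^* x = 1, so
    kappa_j = ||column j of X|| * ||row j of inv(X)||."""
    lams, X = np.linalg.eig(A)
    Xinv = np.linalg.inv(X)
    kappas = np.linalg.norm(X, axis=0) * np.linalg.norm(Xinv, axis=1)
    return lams, kappas

# A normal (here symmetric) matrix: every eigenvalue is perfectly
# conditioned, kappa = 1.
S = np.array([[2.0, 1.0], [1.0, 3.0]])
print(eig_condition_numbers(S)[1])  # approximately [1, 1]

# A nearly defective matrix: condition numbers blow up as eps -> 0,
# which is where the Hoelder (multiple-eigenvalue) theory takes over.
eps = 1e-6
J = np.array([[1.0, 1.0], [0.0, 1.0 + eps]])
print(eig_condition_numbers(J)[1])  # both roughly 1e6
```

At a genuinely multiple eigenvalue this κ is infinite; the Hölder condition number replaces it with the leading coefficient of the fractional-power perturbation expansion.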
Fiedler companion linearizations for rectangular matrix polynomials
, 2011
Cited by 10 (6 self)
The development of new classes of linearizations of square matrix polynomials that generalize the classical first and second Frobenius companion forms has attracted much attention in the last decade. Research in this area has two main goals: finding linearizations that retain whatever structure the original polynomial might possess, and improving properties that are essential for accurate numerical computation, such as eigenvalue condition numbers and backward errors. However, all recent progress on linearizations has been restricted to square matrix polynomials. Since rectangular polynomials arise in many applications, it is natural to investigate whether the new classes of linearizations can be extended to rectangular polynomials. In this paper, the family of Fiedler linearizations is extended from square to rectangular matrix polynomials, and it is shown that minimal indices and bases of polynomials can be recovered from those of any linearization in this class via the same simple procedures developed previously for square polynomials. Fiedler linearizations are one of the most important classes of linearizations introduced in recent years, but their generalization to rectangular polynomials is nontrivial, and requires a completely different approach from the one used in the square case. To the best of our knowledge, this is the first class of new linearizations that has been generalized to rectangular polynomials.