Results 1–10 of 13
NLEVP: A Collection of Nonlinear Eigenvalue Problems
, 2010
Abstract

Cited by 51 (12 self)
We present a collection of 46 nonlinear eigenvalue problems in the form of a MATLAB toolbox. The collection contains problems from models of real-life applications as well as ones constructed specifically to have particular properties. A classification is given of polynomial eigenvalue problems according to their structural properties. Identifiers based on these and other properties can be used to extract particular types of problems from the collection. A brief description of each problem is given. NLEVP serves both to illustrate the tremendous variety of applications of nonlinear eigenvalue problems and to provide representative problems for testing, tuning, and benchmarking of algorithms and codes.
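The identifier-based extraction described in this abstract can be sketched as follows. The problem names, property tags, and `query` function below are illustrative stand-ins, not the toolbox's actual MATLAB interface:

```python
import numpy as np

# Hypothetical sketch of NLEVP-style retrieval: each problem is registered
# with structural property identifiers, and problems matching a given set of
# identifiers can be extracted from the collection. All names are invented.
COLLECTION = {
    "toy_symmetric": {
        "properties": {"pep", "qep", "symmetric"},
        # Coefficients [A0, A1, A2] of P(lam) = lam^2*A2 + lam*A1 + A0,
        # all symmetric here.
        "coeffs": [np.eye(2), np.array([[2.0, 1.0], [1.0, 2.0]]), np.eye(2)],
    },
    "toy_general": {
        "properties": {"pep", "qep"},
        "coeffs": [3 * np.eye(2), np.array([[0.0, 1.0], [0.0, 0.0]]), np.eye(2)],
    },
}

def query(*identifiers):
    """Return names of problems carrying all the given property identifiers."""
    wanted = set(identifiers)
    return sorted(name for name, p in COLLECTION.items()
                  if wanted <= p["properties"])

print(query("qep"))               # both quadratic problems
print(query("qep", "symmetric"))  # only the symmetric one
```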
LowRank Tensor Krylov Subspace Methods for Parametrized Linear Systems
, 2010
Abstract

Cited by 26 (4 self)
We consider linear systems A(α)x(α) = b(α) depending on possibly many parameters α = (α1,...,αp). Solving these systems simultaneously for a standard discretization of the parameter space would require a computational effort growing exponentially in the number of parameters. We show that this curse of dimensionality can be avoided for sufficiently smooth parameter dependencies. For this purpose, computational methods are developed that benefit from the fact that x(α) can be well approximated by a tensor of low rank. In particular, low-rank tensor variants of short-recurrence Krylov subspace methods are presented. Numerical experiments for deterministic PDEs with parametrized coefficients and stochastic elliptic PDEs demonstrate the effectiveness of our approach.
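The low-rank structure this abstract appeals to can be observed directly in the single-parameter case. The toy experiment below (not the paper's algorithm; A(α) is an arbitrary illustrative choice) samples x(α) on a grid and inspects the singular value decay of the resulting solution matrix:

```python
import numpy as np

# For a smoothly parametrized system A(alpha) x(alpha) = b, the solutions
# sampled over a parameter grid form a matrix whose singular values decay
# rapidly, so x(alpha) admits a low-rank approximation -- the property the
# low-rank tensor Krylov methods exploit in the many-parameter setting.
n, m = 50, 40                      # system size, number of parameter samples
rng = np.random.default_rng(0)
A0 = np.diag(np.arange(1.0, n + 1))
A1 = rng.standard_normal((n, n)) / n   # small, so A0 + a*A1 stays invertible
b = np.ones(n)

alphas = np.linspace(0.0, 1.0, m)
X = np.column_stack([np.linalg.solve(A0 + a * A1, b) for a in alphas])

s = np.linalg.svd(X, compute_uv=False)
rank_eps = int(np.sum(s > 1e-10 * s[0]))   # numerical rank at tolerance 1e-10
print(rank_eps, "<<", min(n, m))           # far below full rank
```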
Spectral Equivalence of Matrix Polynomials and the Index Sum Theorem
, 2013
Abstract

Cited by 12 (6 self)
The concept of linearization is fundamental for theory, applications, and spectral computations related to matrix polynomials. However, recent research on several important classes of structured matrix polynomials arising in applications has revealed that the strategy of using linearizations to develop structure-preserving numerical algorithms that compute the eigenvalues of structured matrix polynomials can be too restrictive, because some structured polynomials do not have any linearization with the same structure. This phenomenon strongly suggests that linearizations should sometimes be replaced by other low-degree matrix polynomials in applied numerical computations. Motivated by this fact, we introduce equivalence relations that allow matrix polynomials (with coefficients in an arbitrary field) to be equivalent, with the same spectral structure, yet have different sizes and degrees. These equivalence relations are directly modeled on the notion of linearization, and consequently inherit the simplicity, applicability, and most relevant properties of linearizations; simultaneously, though, they are much more flexible in the possible degrees of equivalent polynomials. This flexibility allows us to define in a unified way the notions of quadratification and ℓ-ification, and to introduce the concept …
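For the quadratic case, the baseline notion of linearization that these equivalence relations generalize can be sketched numerically: the companion pencil of P(λ) = λ²A₂ + λA₁ + A₀ is twice the size but has the same eigenvalues as P. The coefficient matrices below are arbitrary illustrative choices:

```python
import numpy as np

# First companion linearization of P(lam) = lam^2*A2 + lam*A1 + A0:
# L(lam) = lam*B - C with B = diag(A2, I) and C = [[-A1, -A0], [I, 0]].
# A vector z = [lam*x; x] satisfies L(lam) z = 0 iff P(lam) x = 0.
A2 = np.eye(2)
A1 = np.array([[0.0, -1.0], [1.0, 0.0]])
A0 = np.diag([2.0, 3.0])
n = A0.shape[0]

B = np.block([[A2, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
C = np.block([[-A1, -A0], [np.eye(n), np.zeros((n, n))]])
lin_eigs = np.linalg.eigvals(np.linalg.solve(B, C))  # 2n pencil eigenvalues

# Each pencil eigenvalue makes P(lam) numerically singular.
def smallest_sv_of_P(lam):
    P = lam**2 * A2 + lam * A1 + A0
    return np.linalg.svd(P, compute_uv=False)[-1]

resid = max(smallest_sv_of_P(l) for l in lin_eigs)
print(resid)  # ~ machine precision
```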
Perturbation, Extraction and Refinement of Invariant Pairs for Matrix Polynomials
, 2010
Abstract

Cited by 8 (2 self)
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
An Algorithm for the Complete Solution of Quadratic Eigenvalue Problems
, 2012
Abstract

Cited by 6 (2 self)
We develop a new algorithm for the computation of all the eigenvalues and optionally the right and left eigenvectors of dense quadratic matrix polynomials. It incorporates scaling of the problem parameters prior to the computation of eigenvalues, a choice of linearization with favorable conditioning and backward stability properties, and a preprocessing step that reveals and deflates the zero and infinite eigenvalues contributed by singular leading and trailing matrix coefficients. The algorithm is backward stable for quadratics that are not too heavily damped. Numerical experiments show that our MATLAB implementation of the algorithm, quadeig, outperforms the MATLAB function polyeig in terms of both stability and efficiency.
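One ingredient mentioned in this abstract, scaling of the problem parameters before computing eigenvalues, can be sketched as follows. The substitution λ = γμ together with a common multiplier δ (in the spirit of the Fan/Lin/Van Dooren scaling) balances the coefficient norms of the quadratic; this is an illustration, not the quadeig implementation itself:

```python
import numpy as np

# Eigenvalue parameter scaling for P(lam) = lam^2*A2 + lam*A1 + A0:
# substituting lam = gamma*mu and multiplying P by delta yields coefficients
# of comparable norm, which helps the backward error of a linearization-based
# solve. The scaling constants below are an assumed, standard-style choice.
def scale_quadratic(A2, A1, A0):
    g2, g1, g0 = (np.linalg.norm(M, 2) for M in (A2, A1, A0))
    gamma = np.sqrt(g0 / g2)              # lam = gamma * mu
    delta = 2.0 / (g0 + g1 * gamma)       # common multiplier for P
    return delta * gamma**2 * A2, delta * gamma * A1, delta * A0, gamma

# A badly scaled quadratic: coefficient norms span twelve orders of magnitude.
A2 = 1e-6 * np.eye(2)
A1 = np.array([[3.0, 1.0], [1.0, 2.0]])
A0 = 1e6 * np.eye(2)

B2, B1, B0, gamma = scale_quadratic(A2, A1, A0)
norms = [np.linalg.norm(M, 2) for M in (B2, B1, B0)]
print(norms)  # all of comparable size

# Eigenvalues of P are gamma times the eigenvalues mu of the scaled
# quadratic, recovered here via its companion linearization.
n = 2
B = np.block([[B2, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])
C = np.block([[-B1, -B0], [np.eye(n), np.zeros((n, n))]])
lams = gamma * np.linalg.eigvals(np.linalg.solve(B, C))

# Each recovered lam makes the ORIGINAL P(lam) numerically singular.
rel_resid = [
    np.linalg.svd(l**2 * A2 + l * A1 + A0, compute_uv=False)[-1]
    / np.linalg.norm(l**2 * A2 + l * A1 + A0, 2)
    for l in lams
]
print(max(rel_resid))  # small relative residual
```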
Perturbation, Computation and Refinement of Invariant Pairs for Matrix Polynomials
, 2009
Abstract

Cited by 3 (2 self)
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little is known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.