Results 1–6 of 6
NLEVP: A Collection of Nonlinear Eigenvalue Problems, 2010
"... We present a collection of 46 nonlinear eigenvalue problems in the form of a MATLAB toolbox. The collection contains problems from models of reallife applications as well as ones constructed specifically to have particular properties. A classification is given of polynomial eigenvalue problems acco ..."
Abstract

Cited by 51 (12 self)
 Add to MetaCart
We present a collection of 46 nonlinear eigenvalue problems in the form of a MATLAB toolbox. The collection contains problems from models of real-life applications as well as ones constructed specifically to have particular properties. A classification is given of polynomial eigenvalue problems according to their structural properties. Identifiers based on these and other properties can be used to extract particular types of problems from the collection. A brief description of each problem is given. NLEVP serves both to illustrate the tremendous variety of applications of nonlinear eigenvalue problems and to provide representative problems for testing, tuning, and benchmarking of algorithms and codes.
Parallel Krylov solvers for the polynomial eigenvalue problem, 2015
"... Abstract. Polynomial eigenvalue problems are often found in scientific computing applications. When the coefficient matrices of the polynomial are large and sparse, usually only a few eigenpairs are required and projection methods are the best choice. We focus on Krylov methods that operate on the c ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
Polynomial eigenvalue problems are often found in scientific computing applications. When the coefficient matrices of the polynomial are large and sparse, usually only a few eigenpairs are required and projection methods are the best choice. We focus on Krylov methods that operate on the companion linearization of the polynomial, but exploit the block structure with the aim of being memory-efficient in the representation of the Krylov subspace basis. The problem may appear in the form of a low-degree polynomial (quartic or quintic, say) expressed in the monomial basis, or a high-degree polynomial (coming from interpolation of a nonlinear eigenproblem) expressed in a non-monomial basis. We have implemented a parallel solver in SLEPc that covers both cases and is able to compute exterior as well as interior eigenvalues via spectral transformation. We discuss important issues such as scaling and restart, and illustrate the robustness and performance of the solver with some numerical experiments.
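As a minimal sketch of the companion-linearization idea this abstract builds on (the helper name and the toy scalar coefficients below are illustrative assumptions, not SLEPc code), a dense Python/NumPy version of the first companion form can look like:

```python
import numpy as np
from scipy.linalg import eig

def companion_linearize(coeffs):
    """First companion linearization of P(lam) = sum_i lam**i * coeffs[i].

    coeffs[i] is the m-by-m coefficient of lam**i; coeffs[-1] leads.
    Returns (A, B) so that the eigenvalues of the pencil lam*B - A are
    exactly the eigenvalues of P.  (Krylov solvers such as the one in the
    abstract exploit this block structure instead of storing A, B densely.)
    """
    d = len(coeffs) - 1              # polynomial degree
    m = coeffs[0].shape[0]           # size of each coefficient matrix
    A = np.zeros((d * m, d * m))
    B = np.eye(d * m)
    for i in range(d - 1):           # identity blocks on the superdiagonal
        A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = np.eye(m)
    for i in range(d):               # negated coefficients in last block row
        A[-m:, i*m:(i+1)*m] = -coeffs[i]
    B[-m:, -m:] = coeffs[-1]         # leading coefficient on B's diagonal
    return A, B

# Scalar sanity check: P(lam) = lam^2 - 3*lam + 2 has roots 1 and 2.
coeffs = [np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])]
A, B = companion_linearize(coeffs)
vals = np.sort(eig(A, B, right=False).real)
```

For an m-by-m polynomial of degree d this pencil has size dm, which is exactly why memory-efficient representations of the Krylov basis matter.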
Linearizations for Interpolation Bases
"... A standard approach to solving the polynomial eigenvalue problem is to linearize, which is to say the problem is transformed into an equivalent larger order generalized eigenproblem. For the monomial basis, much work has been done to show the conditions under which linearizations produce small backw ..."
Abstract
 Add to MetaCart
(Show Context)
A standard approach to solving the polynomial eigenvalue problem is to linearize, which is to say the problem is transformed into an equivalent generalized eigenproblem of larger order. For the monomial basis, much work has been done to show the conditions under which linearizations produce small backward errors, especially for the quadratic eigenvalue problem [3, 4]. Recently, there has been increasing interest in linearizations of polynomials expressed in bases other than the classical monomial basis [1]. In these bases, there is a need to establish the conditions under which linearizations return eigenvalue and eigenvector estimates with small backward errors. In this work, we investigate the accuracy and stability of polynomial eigenvalue problems solved by linearization. The polynomial eigenvalue problems are expressed in the Lagrange basis, that is, by their values at distinct interpolation nodes. We also utilize the barycentric Lagrange formulation of the polynomial matrices, since the linearizations that arise from this formulation are particularly simple and flexible for computations. An m-by-m matrix polynomial P(λ) of degree n, expressed in the barycentric Lagrange formulation, is

P(λ) = ∏_{i=0}^{n} (λ − x_i) ∑_{j=0}^{n} [w_j / (λ − x_j)] F_j,  with  w_j^{−1} = ∏_{k ≠ j} (x_j − x_k).

The numbers w_j are known as the barycentric weights, and the coefficients F_j = P(x_j) ∈ C^{m×m} are the samples of P(λ) at the n + 1 interpolation nodes x_j. An (n+2)m-by-(n+2)m linearization λB − A of the matrix polynomial P(λ) is given in [2].
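The barycentric formula above translates directly into code. The following sketch (function names are made up for illustration; this is the evaluation formula only, not the linearization from [2]) computes the weights w_j and evaluates P(λ) from its samples F_j:

```python
import numpy as np

def bary_weights(nodes):
    """Barycentric weights: 1/w_j = prod_{k != j} (x_j - x_k)."""
    x = np.asarray(nodes, dtype=float)
    return np.array([1.0 / np.prod(np.delete(x[j] - x, j))
                     for j in range(len(x))])

def eval_barycentric(nodes, samples, lam):
    """P(lam) = prod_i (lam - x_i) * sum_j w_j / (lam - x_j) * F_j.

    samples[j] = F_j = P(x_j); lam must not coincide with a node here
    (at a node, P(x_j) is simply the sample F_j).
    """
    w = bary_weights(nodes)
    front = np.prod([lam - xi for xi in nodes])
    return front * sum(wj / (lam - xj) * Fj
                       for wj, xj, Fj in zip(w, nodes, samples))

# Sanity check with a 1-by-1 "matrix" polynomial P(lam) = lam^2,
# sampled at the nodes 0, 1, 2; P(3) should be 9.
nodes = [0.0, 1.0, 2.0]
samples = [np.array([[x**2]]) for x in nodes]
val = eval_barycentric(nodes, samples, 3.0)
```

The same two functions work unchanged when the samples F_j are genuine m-by-m matrices, since the formula is linear in the F_j.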
We consider quadratic eigenproblems
"... x = 0, where all coefficient matrices are real and positive semidefinite, (M,K) is regular andD is of low rank. Matrix polynomials of this form appear in the analysis of vibrating structures with discrete dampers. We develop an algorithm for such problems, which first solves the undamped problem ..."
Abstract
 Add to MetaCart
(Show Context)
(λ²M + λD + K)x = 0, where all coefficient matrices are real and positive semidefinite, (M, K) is regular, and D is of low rank. Matrix polynomials of this form appear in the analysis of vibrating structures with discrete dampers. We develop an algorithm for such problems, which first solves the undamped problem.
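A quadratic eigenproblem of this shape can be solved directly via a first-order linearization. The toy sketch below (the matrices, sizes, and pencil layout are illustrative assumptions; it does not implement the paper's undamped-first algorithm) builds a rank-one damping matrix and solves the linearized pencil:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 5
M = np.eye(n)                          # mass: identity for simplicity
K = np.diag(np.arange(1.0, n + 1.0))   # stiffness: positive definite
d = rng.standard_normal((n, 1))
D = d @ d.T                            # rank-1 PSD damping (one discrete damper)

# Linearize (lam^2 M + lam D + K) x = 0 with y = lam*x:
#   lam * [[I, 0], [0, M]] [x; y] = [[0, I], [-K, -D]] [x; y]
Z = np.zeros((n, n))
A = np.block([[Z, np.eye(n)], [-K, -D]])
B = np.block([[np.eye(n), Z], [Z, M]])
lam = eig(A, B, right=False)           # 2n eigenvalues of the quadratic
```

With M, K positive definite and D positive semidefinite, every eigenvalue has non-positive real part, which is the expected behavior of a passively damped structure.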
ALGORITHMS FOR HESSENBERG–TRIANGULAR REDUCTION OF FIEDLER LINEARIZATION OF MATRIX POLYNOMIALS
"... Abstract. Small to mediumsized polynomial eigenvalue problems can be solved by linearizing the matrix polynomial and solving the resulting generalized eigenvalue problem using the QZ algorithm. The QZ algorithm, in turn, requires an initial reduction of a matrix pair to Hessenberg– triangular for ..."
Abstract
 Add to MetaCart
(Show Context)
Small to medium-sized polynomial eigenvalue problems can be solved by linearizing the matrix polynomial and solving the resulting generalized eigenvalue problem using the QZ algorithm. The QZ algorithm, in turn, requires an initial reduction of a matrix pair to Hessenberg–triangular form. In this paper, we discuss the design and evaluation of high-performance parallel algorithms and software for Hessenberg–triangular reduction of a specific linearization of matrix polynomials of arbitrary degree. The proposed algorithm exploits the sparsity structure of the linearization to reduce the number of operations and improve the cache reuse compared to existing algorithms for unstructured inputs. Experiments on both a workstation and an HPC system demonstrate that our structure-exploiting parallel implementation can outperform both the general LAPACK routine DGGHRD and the prototype implementation DGGHR3 of a general blocked algorithm.
Key words: Hessenberg–triangular reduction, polynomial eigenvalue problem, linearization, blocked algorithm, parallelization
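SciPy does not expose the Hessenberg–triangular routine DGGHRD on its own, but `scipy.linalg.qz`, which performs that reduction internally before the QZ iteration, shows the end-to-end generalized Schur computation the reduction feeds into (the small random pencil below is an illustrative assumption):

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))

# Real generalized Schur form: A = Q @ AA @ Z.T and B = Q @ BB @ Z.T,
# with AA quasi-upper-triangular and BB upper triangular.  The first
# internal step of this computation is the Hessenberg-triangular
# reduction that the paper accelerates for structured linearizations.
AA, BB, Q, Z = qz(A, B, output='real')
```

For a structured input such as a Fiedler linearization, a routine that exploits the sparsity pattern (as the paper proposes) does strictly less work than this general dense path.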