Results 1–10 of 126
Parallel Numerical Linear Algebra
, 1993
"... We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illust ..."
Abstract

Cited by 756 (23 self)
 Add to MetaCart
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band and sparse matrices.
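The abstract's running example, matrix multiplication, is the standard vehicle for the blocking principle it alludes to. A minimal serial sketch (block size and loop order are illustrative choices; a parallel code would assign the (i, j) blocks of C to different processors):

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Multiply A @ B one bs-by-bs block at a time.

    Each innermost update touches only O(bs^2) data, which is the key
    to reusing fast local memory and minimizing communication; here the
    loops are serial for illustration only.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            for p in range(0, k, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C
```

Slicing handles ragged trailing blocks automatically, so the matrix dimensions need not be multiples of the block size.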
On the Early History of the Singular Value Decomposition
, 1992
"... This paper surveys the contributions of five mathematicians  Eugenio Beltrami (18351899), Camille Jordan (18381921), James Joseph Sylvester (18141897), Erhard Schmidt (18761959), and Hermann Weyl (18851955)  who were responsible for establishing the existence of the singular value de ..."
Abstract

Cited by 122 (1 self)
 Add to MetaCart
This paper surveys the contributions of five mathematicians: Eugenio Beltrami (1835–1899), Camille Jordan (1838–1921), James Joseph Sylvester (1814–1897), Erhard Schmidt (1876–1959), and Hermann Weyl (1885–1955), who were responsible for establishing the existence of the singular value decomposition and developing its theory.
Computing Accurate Eigensystems of Scaled Diagonally Dominant Matrices
, 1980
"... When computing eigenvalues of sym metric matrices and singular values of general matrices in finite precision arithmetic we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In particular, we do not expect to comp ..."
Abstract

Cited by 96 (14 self)
 Add to MetaCart
When computing eigenvalues of symmetric matrices and singular values of general matrices in finite precision arithmetic we in general only expect to compute them with an error bound proportional to the product of machine precision and the norm of the matrix. In particular, we do not expect to compute tiny eigenvalues and singular values to high relative accuracy. There are some important classes of matrices where we can do much better, including bidiagonal matrices, scaled diagonally dominant matrices, and scaled diagonally dominant definite pencils. These classes include many graded matrices, and all symmetric positive definite matrices which can be consistently ordered (and thus all symmetric positive definite tridiagonal matrices). In particular, the singular values and eigenvalues are determined to high relative precision independent of their magnitudes, and there are algorithms to compute them this accurately. The eigenvectors are also determined more accurately than for general matrices, and may be computed more accurately as well. This work extends results of Kahan and Demmel for bidiagonal and tridiagonal matrices.
Computing the Singular Value Decomposition with High Relative Accuracy
 Linear Algebra Appl
, 1997
"... We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the a ..."
Abstract

Cited by 62 (14 self)
 Add to MetaCart
We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning, and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a co...
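The best-known class where high relative accuracy holds is the bidiagonal matrices: entrywise relative perturbations of the entries move every singular value, however tiny, by a comparably small relative amount. A quick numerical check (the matrix sizes, magnitudes, and perturbation level below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Upper bidiagonal matrix with entries of wildly varying magnitude.
d = 10.0 ** rng.uniform(-8, 0, n)      # diagonal
e = 10.0 ** rng.uniform(-8, 0, n - 1)  # superdiagonal
B = np.diag(d) + np.diag(e, 1)

# Perturb every nonzero entry by a relative amount at most delta.
delta = 1e-10
Bp = (np.diag(d * (1 + delta * rng.uniform(-1, 1, n)))
      + np.diag(e * (1 + delta * rng.uniform(-1, 1, n - 1)), 1))

s = np.linalg.svd(B, compute_uv=False)
sp = np.linalg.svd(Bp, compute_uv=False)
# Even singular values many orders of magnitude below ||B|| move by a
# small *relative* amount, roughly (2n - 1) * delta in the worst case.
rel = np.abs(sp - s) / s
print(rel.max())
```

The observed relative change stays near delta, far below the absolute-accuracy level eps * ||B|| that backward stability alone would suggest for the tiniest singular values.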
Numerical Computation of an Analytic Singular Value Decomposition of a Matrix Valued Function
 NUMER. MATH
, 1991
"... This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y (t) T where X(t) and Y (t) are orthogonal and S(t) is diagonal. To maintain differentiability ..."
Abstract

Cited by 54 (8 self)
 Add to MetaCart
(Show Context)
This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges.
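The continuity problem the paper addresses can be seen in a crude sketch: at each grid point the raw SVD is post-processed so the left factor's column signs match the previous step, with the compensating flips absorbed into the right factor. This is only an ad-hoc stand-in for the paper's Euler-like predictor, not its method:

```python
import numpy as np

def tracked_svd_path(E, ts):
    """Compute an SVD X(t) S(t) Y(t)^T of E(t) at each t in ts, flipping
    column signs so successive left factors stay close (a continuity
    heuristic, not the paper's Euler-like method)."""
    Xs, Ss, Ys = [], [], []
    X_prev = None
    for t in ts:
        U, s, Vt = np.linalg.svd(E(t))
        if X_prev is not None:
            # Match each left singular vector's sign to the previous step;
            # the compensating flip goes into the corresponding row of Vt,
            # so the product U @ diag(s) @ Vt is unchanged.
            signs = np.sign(np.sum(U * X_prev, axis=0))
            signs[signs == 0] = 1.0
            U = U * signs
            Vt = signs[:, None] * Vt
        Xs.append(U); Ss.append(s); Ys.append(Vt)
        X_prev = U
    return Xs, Ss, Ys
```

Note this heuristic keeps the diagonal entries positive and sorted, which is exactly what the analytic SVD relaxes: near a crossing of singular values, maintaining analyticity requires letting them change sign or order instead of re-sorting at every step.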
Orthogonal Eigenvectors and Relative Gaps
, 2002
"... Let LDLt be the triangular factorization of a real symmetric n\Theta n tridiagonal matrix so that L is a unit lower bidiagonal matrix, D is diagonal. Let (*; v) be an eigenpair, * 6 = 0, with the property that both * and v are determined to high relative accuracy by the parameters in L and D. Suppo ..."
Abstract

Cited by 51 (15 self)
 Add to MetaCart
(Show Context)
Let LDL^t be the triangular factorization of a real symmetric n × n tridiagonal matrix, so that L is a unit lower bidiagonal matrix and D is diagonal. Let (λ, v) be an eigenpair, λ ≠ 0, with the property that both λ and v are determined to high relative accuracy by the parameters in L and D. Suppose also that the relative gap between λ and its nearest neighbor μ in the spectrum exceeds 1/n: n|λ − μ| > |λ|. This paper presents a new O(n) algorithm and a proof that, in the presence of roundoff error, the algorithm computes an approximate eigenvector v̂ that is accurate to working precision: |sin θ(v, v̂)| = O(nε), where ε is the roundoff unit. It follows that v̂ is numerically orthogonal to all the other eigenvectors. This result forms part of a program to compute numerically orthogonal eigenvectors without resorting to the Gram-Schmidt process. The contents of this paper provide a high-level description and theoretical justification for LAPACK (version 3.0) subroutine DLAR1V.
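DLAR1V is the inner kernel of LAPACK's MRRR driver DSTEMR, which SciPy exposes through `eigh_tridiagonal`. A quick check on a random tridiagonal matrix (sizes are arbitrary) that the returned eigenvectors are numerically orthogonal without any Gram-Schmidt step:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(1)
n = 200
d = rng.standard_normal(n)       # diagonal of T
e = rng.standard_normal(n - 1)   # off-diagonal of T

# lapack_driver='stemr' selects the MRRR algorithm (DSTEMR), whose
# per-eigenvector kernel is DLAR1V.
w, V = eigh_tridiagonal(d, e, lapack_driver='stemr')

# Numerical orthogonality: ||V^T V - I|| should be O(n * eps).
err = np.abs(V.T @ V - np.eye(n)).max()
print(err)
```

No reorthogonalization is performed anywhere, which is the point: orthogonality to working precision comes from the accuracy of each individual eigenvector.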
Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices
 Linear Algebra and Appl
, 2004
"... Abstract In this paper we present an O(nk) procedure, Algorithm MR 3, for computing k eigenvectors of an n \Theta n symmetric tridiagonal matrix T. A salient feature of the algorithm is that a number of different LDL t products (L unit lower triangular, D diagonal) are computed. In exact arithmetic ..."
Abstract

Cited by 49 (15 self)
 Add to MetaCart
(Show Context)
In this paper we present an O(nk) procedure, Algorithm MR^3, for computing k eigenvectors of an n × n symmetric tridiagonal matrix T. A salient feature of the algorithm is that a number of different LDL^t products (L unit lower triangular, D diagonal) are computed. In exact arithmetic each LDL^t is a factorization of a translate of T. We call the various LDL^t ...
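The building block, factoring a translate T − τI as LDL^t for tridiagonal T, is easy to state in code. A sketch of the stationary recurrence, written naively without the pivoting or differential-form refinements a robust implementation needs:

```python
import numpy as np

def ldlt_translate(a, b, tau):
    """Factor T - tau*I = L D L^T, where T is symmetric tridiagonal with
    diagonal a and off-diagonal b, L is unit lower bidiagonal with
    subdiagonal l, and D is diagonal with entries d.

    No pivoting is done, so d can suffer large element growth when tau
    is close to an eigenvalue of a leading principal submatrix.
    """
    n = len(a)
    d = np.empty(n)
    l = np.empty(n - 1)
    d[0] = a[0] - tau
    for i in range(n - 1):
        l[i] = b[i] / d[i]                   # matches the (i+1, i) entry
        d[i + 1] = a[i + 1] - tau - l[i] * b[i]
    return l, d
```

Algorithm MR^3 computes several such factorizations at different shifts tau, choosing shifts so that each cluster of close eigenvalues gets a representation that determines its eigenvalues to high relative accuracy.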
Relative perturbation theory: (ii) eigenspace and singular subspace variations
 SIAM J. Matrix Anal. Appl
, 1998
"... The classical perturbation theory for Hermitian matrix eigenvalue and singular value problems provides bounds on invariant subspace variations that are proportional to the reciprocals of absolute gaps between subsets of spectra or subsets of singular values. These bounds may be bad news for invarian ..."
Abstract

Cited by 39 (3 self)
 Add to MetaCart
(Show Context)
The classical perturbation theory for Hermitian matrix eigenvalue and singular value problems provides bounds on invariant subspace variations that are proportional to the reciprocals of absolute gaps between subsets of spectra or subsets of singular values. These bounds may be bad news for invariant subspaces corresponding to clustered eigenvalues or clustered singular values of much smaller magnitudes than the norms of the matrices under consideration, when some of these clustered eigenvalues or clustered singular values are perfectly relatively distinguishable from the rest. In this paper, we consider how eigenspaces of a Hermitian matrix A change when it is perturbed to Ã = D*AD, and how singular subspaces of a (nonsquare) matrix B change when it is perturbed to B̃ = D1*BD2, where D, D1 and D2 are assumed to be close to identity matrices of suitable dimensions, or either D1 or D2 close to some unitary matrix. It is proved that under these kinds of perturbations, the changes of invariant subspaces are proportional to the reciprocals of relative gaps between subsets of spectra or subsets of singular values. We have been able to extend well-known Davis-Kahan ...
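A small numerical illustration of this setting (the matrix, cluster placement, and perturbation size below are arbitrary choices): a tiny eigenvalue cluster whose absolute gap to the rest of the spectrum is minuscule, so the classical absolute-gap bound is vacuous, yet whose relative gap is O(1), so its eigenspace barely moves under a multiplicative perturbation A → D^T A D:

```python
import numpy as np

rng = np.random.default_rng(2)
# Cluster {1e-8, 2e-8} next to 1e-7: absolute gaps are ~1e-7,
# but relative gaps are O(1).
A = np.diag([1e-8, 2e-8, 1e-7, 1.0, 2.0])
n = A.shape[0]

# Multiplicative perturbation with D near the identity.  Note
# ||D^T A D - A|| is ~1e-6 * ||A||, orders of magnitude larger than the
# absolute gap 8e-8, so the classical Davis-Kahan bound says nothing.
D = np.eye(n) + 1e-6 * rng.standard_normal((n, n))
At = D.T @ A @ D

w, V = np.linalg.eigh(At)
E = np.eye(n)[:, :2]        # exact eigenspace of the cluster in A
Vc = V[:, :2]               # perturbed eigenspace of the cluster in At

# Sine of the largest principal angle between the two subspaces.
sin_theta = np.linalg.norm(Vc - E @ (E.T @ Vc), 2)
print(sin_theta)
```

The observed angle is on the order of ||D − I||, as the relative-gap bounds predict, even though the cluster sits eight orders of magnitude below ||A||.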
Relatively Robust Representations of Symmetric Tridiagonals
 LINEAR ALGEBRA AND APPL
, 1999
"... Let LDL t be the triangular factorization of a symmetric tridiagonal matrix T I . Small relative uncertainties in the nontrivial entries of L and D may be represented by diagonal scaling matrices 1 and 2 ; LDL t ! 2 L 1 D 1 L t 2 . The effect of 2 on the eigenvalues i is benign. In this paper ..."
Abstract

Cited by 32 (13 self)
 Add to MetaCart
Let LDL^t be the triangular factorization of a symmetric tridiagonal matrix T − τI. Small relative uncertainties in the nontrivial entries of L and D may be represented by diagonal scaling matrices Δ1 and Δ2: LDL^t → Δ2 L Δ1 D Δ1 L^t Δ2. The effect of Δ2 on the eigenvalues λi − τ is benign. In this paper we study the inner perturbations induced by Δ1. Suitable condition numbers are introduced and, with the help of orthogonal polynomial theory, illuminating bounds on these condition numbers are obtained. If τ is close to, and on the `wrong' side of, a Ritz value then there will be large element growth (||L|D|L^t|| >> ||T − τI||) and some of the condition numbers will be large. It is shown that element growth is the only cause of large condition numbers. In particular there exist many values of τ on either side of interior clusters of close eigenvalues such that T − τI = LDL^t with modest element growth, and the entries of L and D determine the small eigenvalues to high relative a...