Results 1 - 7 of 7
Subspace Methods for Sparse Eigenvalue Problems
in Modern Methods and Algorithms of Quantum Chemistry, J. Grotendorst (Ed.), John von Neumann Institute for Computing, NIC Series, 2000
Cited by 3 (0 self)
Parallel Linear Algebra Methods
Cited by 2 (0 self)
Performance of single-processor BLAS on IBM p690, 2004
Abstract
The Basic Linear Algebra Subprograms, BLAS, are the basic computational kernels in most applications. BLAS 1 and BLAS 2, the vector-vector and matrix-vector routines, require memory accesses in the same order as computations and thus cannot achieve performance close to peak performance on modern computer architectures. BLAS 3 matrix-matrix operations on n × n matrices, on the other hand, can do order n³ operations with only order n² memory accesses. This much better ratio of computation to memory access allows for much higher performance. To show what performance can be expected using the BLAS routines from IBM's ESSL on an IBM p690, we investigated the performance of one routine of each BLAS level and compared it to that of the corresponding routines on a CRAY T3E.
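The computation-to-memory-access argument in this abstract can be made concrete with the usual idealized operation counts. The following sketch uses standard flop and access counts, not measured values, and the functions are hypothetical helpers introduced here for illustration:

```python
# Idealized arithmetic-intensity (flops per memory access) for BLAS levels.
# BLAS 2 (y = A x): ~2n^2 flops, the n^2 reads of A dominate the accesses.
# BLAS 3 (C = A B): ~2n^3 flops, but only ~3n^2 accesses with ideal reuse.
def blas2_intensity(n):
    flops = 2 * n * n            # one multiply + one add per matrix entry
    accesses = n * n + 2 * n     # read A once, read x, write y
    return flops / accesses

def blas3_intensity(n):
    flops = 2 * n ** 3           # 2n flops per entry of the n x n result
    accesses = 3 * n * n         # read A and B, write C (ideal cache reuse)
    return flops / accesses

# The BLAS 2 ratio stays near 2 for any n (memory-bound), while the
# BLAS 3 ratio grows like 2n/3, so large matrix-matrix products can run
# near peak floating-point speed.
print(blas2_intensity(1000))
print(blas3_intensity(1000))
```

This is why the abstract expects the Level 3 routine to come much closer to peak performance than the Level 1 and 2 routines.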
for Large Complex Hermitian Eigenvalue Problems
Parallel Computers
EIGENVALUE PROBLEMS, 2001
Abstract
The calculation of a few interior eigenvalues of a matrix has not received much attention in the past, most methods being some spin-off of either the complete eigenvalue calculation or a subspace method designed for the extremal part of the spectrum. The reason for this could be the rather chaotic behaviour of most methods tried. Only 'shift and invert' and polynomial iteration seemed to have a predictable behaviour. However, polynomial iteration is reasonably fast only for extremal eigenvalues of a matrix where all eigenvalues are close to a known line, and inverting a large sparse indefinite system is tricky, while any inaccuracy in the inverse carries through to the eigenvector. By now, subspace methods have been developed to a state where they can be applied with benefit to the calculation of inner eigenpairs (eigenvalues and vectors). This is achieved by using a combination of improved approximate residual correction (Jacobi-Davidson method) with new methods to extract approximations to inner eigenvectors of a large (dimension n) matrix from a low-dimensional (dimension m ≪ √n) subspace. Suited to the needs of practical applications, the selection ...
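The 'shift and invert' approach the abstract contrasts with subspace methods can be sketched with SciPy's sparse eigensolver. The toy diagonal matrix and target shift below are assumptions chosen purely for illustration; a real application would factorize a large sparse indefinite system, which is exactly the step the abstract calls tricky:

```python
# Shift-and-invert for interior eigenvalues of a sparse Hermitian matrix.
# The toy matrix is diagonal, so its eigenvalues are simply 1, 2, ..., n.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 1000
A = diags(np.arange(1.0, n + 1.0), format='csc')

# Passing sigma makes eigsh work with (A - sigma*I)^-1 internally: the
# interior eigenvalues nearest sigma become the extremal (largest-magnitude)
# eigenvalues of the inverted operator, which Lanczos-type iteration finds
# quickly.
sigma = 3.7
vals, vecs = eigsh(A, k=2, sigma=sigma, which='LM')
print(sorted(vals))  # the two eigenvalues closest to 3.7
```

Jacobi-Davidson-style methods avoid the exact factorization of (A - σI) by solving the correction equation only approximately, which is the "improved approximate residual correction" the abstract refers to.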