An iteration method for the solution of the eigenvalue problem of linear differential and integral operators (1950)

by C Lanczos
Venue: Journal of Research of the National Bureau of Standards

Results 1 - 10 of 537

LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares

by Christopher C. Paige, Michael A. Saunders - ACM Trans. Math. Software, 1982
Abstract - Cited by 653 (21 self)
An iterative method is given for solving Ax = b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate-gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned. Categories and Subject Descriptors: G.1.2 [Numerical Analysis]: Approximation - least squares approximation; G.1.3 [Numerical Analysis]: Numerical Linear Algebra - linear systems (direct and …

Citation Context

…precision of floating-point arithmetic is ε, the smallest machine-representable number such that 1 + ε > 1. 2. MOTIVATION VIA THE LANCZOS PROCESS. In this section we review the symmetric Lanczos process [13] and its use in solving symmetric linear equations Bx = b. Algorithm LSQR is then derived by applying the Lanczos process to a particular symmetric system. Although a more direct development is given …
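
The method described above is available in SciPy as scipy.sparse.linalg.lsqr. A minimal sketch; the random sparse test problem is illustrative, not taken from the paper:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Build a random sparse overdetermined system with a known solution.
A = sparse_random(100, 30, density=0.1, format="csr", random_state=0)
rng = np.random.default_rng(0)
x_true = rng.standard_normal(30)
b = A @ x_true  # consistent system, so the least-squares residual is ~0

# lsqr implements the Paige-Saunders bidiagonalization-based algorithm;
# atol/btol are the stopping tolerances derived in the paper.
x = lsqr(A, b, atol=1e-10, btol=1e-10)[0]
residual_norm = np.linalg.norm(A @ x - b)
```

The full return tuple also carries the stopping reason, iteration count, and the estimates of ‖A‖, cond(A), and standard errors mentioned in the abstract.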

QMR: a Quasi-Minimal Residual Method for Non-Hermitian Linear Systems

by Roland W. Freund, Noël M. Nachtigal, 1991
Abstract - Cited by 395 (26 self)
... In this paper, we present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which overcomes the problems of BCG. An implementation of QMR based on a look-ahead version of the nonsymmetric Lanczos algorithm is proposed. It is shown how BCG iterates can be recovered stably from the QMR process. Some further properties of the QMR approach are given and an error bound is presented. Finally, numerical experiments are reported.
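
SciPy exposes a QMR solver as scipy.sparse.linalg.qmr. A minimal sketch on a small nonsymmetric system; the test matrix is illustrative only:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import qmr

# A nonsymmetric, diagonally dominant tridiagonal matrix: a simple
# non-Hermitian test case for a BCG-like iteration.
n = 50
A = diags([-1.0, 3.0, -2.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = qmr(A, b)          # info == 0 signals successful convergence
relres = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Like BCG, QMR needs products with both A and its transpose, which the sparse matrix format above supplies.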

Efficient Linear Circuit Analysis by Padé Approximation via the Lanczos Process

by Peter Feldmann, Roland W. Freund - IEEE Trans. Computer-Aided Design, 1995
Abstract - Cited by 337 (32 self)
Abstract not found

Numerical solution of saddle point problems

by Michele Benzi, Gene H. Golub, Jörg Liesen - ACTA NUMERICA, 2005
Abstract - Cited by 322 (25 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving systems of this type. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.

Citation Context

…one may choose Ck = Kk(Aᵀ, r0), which represents a generalization of the projection process characterized in item (C). Specific implementations based on this choice include the method of Lanczos [316] and the biconjugate gradient (BCG) method of Fletcher [181]. However, for a general nonsymmetric matrix A the process based on Ck = Kk(Aᵀ, r0) is not well defined, because it may happen that no iter…

ARPACK Users' Guide: Solution of Large Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods.

by R. B. Lehoucq, D. C. Sorensen, C. Yang, 1997
Abstract - Cited by 218 (18 self)
This document is intended to provide a cursory overview of the Implicitly Restarted Arnoldi/Lanczos Method that this software is based upon. The goal is to provide some understanding of the underlying algorithm, expected behavior, additional references, and capabilities as well as limitations of the software.

Citation Context

…orthonormal basis for an invariant subspace of A and that the Ritz values σ(H) ⊂ σ(A) and corresponding Ritz vectors are eigenpairs for A. The second observation leads to the Lanczos/Arnoldi process [2, 18]. 4.3 The Arnoldi Factorization. Definition: If A ∈ ℂⁿˣⁿ then a relation of the form AVₖ = VₖHₖ + fₖeₖᵀ where Vₖ ∈ ℂⁿˣ…
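
The implicitly restarted Lanczos/Arnoldi method the guide describes is available in SciPy through scipy.sparse.linalg.eigsh, which wraps ARPACK. A sketch on the 1-D discrete Laplacian, whose eigenvalues are known in closed form (the example is illustrative):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1-D discrete Laplacian; its eigenvalues are 2 - 2*cos(k*pi/(n+1)).
n = 200
L = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

# Shift-invert mode (sigma=0) targets the eigenvalues nearest zero,
# which ARPACK's restarted Lanczos then resolves quickly.
vals, vecs = eigsh(L, k=4, sigma=0, which="LM")

exact = 2.0 - 2.0 * np.cos(np.arange(1, 5) * np.pi / (n + 1))
```

Without shift-invert, asking ARPACK directly for the smallest eigenvalues (which="SM") tends to converge slowly; factoring L once and iterating on its inverse is the standard workaround.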

Krylov Projection Methods For Model Reduction

by Eric James Grimme, 1997
Abstract - Cited by 213 (3 self)
This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools are also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding a need for exact factors of large matrix pencils are all examined to various degrees.

Citation Context

…tively old. The history of Padé approximation, for example, spans more than one hundred years [38]. The algorithm of Lanczos, an important Krylov-based iteration, is nearing its fiftieth anniversary [39]. Yet, as evidenced by this dissertation and its many recent references, the understanding and application of these concepts is certainly not a closed topic. A large number of the moment-matching method…
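
The moment-matching property at the heart of these methods is easy to verify numerically. The sketch below is a generic one-sided Krylov projection about s = 0, not one of the dissertation's three algorithms: with V spanning span{A⁻¹b, …, A⁻ᵏb}, the reduced model matches the first k moments mⱼ = cᵀA⁻⁽ʲ⁺¹⁾b of H(s) = cᵀ(sI − A)⁻¹b. The toy system is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 8, 3
A = np.diag(np.arange(1.0, n + 1))   # toy state matrix
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Orthonormal basis V for the Krylov space span{A^-1 b, ..., A^-k b}.
cols, v = [], b.copy()
for _ in range(k):
    v = np.linalg.solve(A, v)
    cols.append(v.copy())
V, _ = np.linalg.qr(np.column_stack(cols))

# Reduced-order model by one-sided orthogonal projection.
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c

def moments(A_, b_, c_, num):
    """First `num` moments c^T A^-(j+1) b of the transfer function."""
    out, w = [], b_.copy()
    for _ in range(num):
        w = np.linalg.solve(A_, w)
        out.append(c_ @ w)
    return np.array(out)

# The first k moments of the full and reduced models agree.
full, red = moments(A, b, c, k), moments(Ar, br, cr, k)
```

A two-sided projection (choosing W from a Krylov space in c as well) would match 2k moments; the one-sided variant keeps the sketch short.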

An Implementation of the Look-Ahead Lanczos Algorithm for Non-Hermitian Matrices Part I

by Roland W. Freund, Martin H. Gutknecht, Noël M. Nachtigal, 1991
Abstract - Cited by 164 (36 self)
Abstract not found

Matrices, vector spaces, and information retrieval

by Michael W. Berry, Zlatko Drmač, Elizabeth R. Jessup - SIAM Review, 1999
Abstract - Cited by 143 (3 self)
Abstract. The evolution of digital libraries and the Internet has dramatically transformed the processing, storage, and retrieval of information. Efforts to digitize text, images, video, and audio now consume a substantial portion of both academic and industrial activity. Even when there is no shortage of textual materials on a particular topic, procedures for indexing or extracting the knowledge or conceptual information contained in them can be lacking. Recently developed information retrieval technologies are based on the concept of a vector space. Data are modeled as a matrix, and a user’s query of the database is represented as a vector. Relevant documents in the database are then identified via simple vector operations. Orthogonal factorizations of the matrix provide mechanisms for handling uncertainty in the database itself. The purpose of this paper is to show how such fundamental mathematical concepts from linear algebra can be used to manage and index large text collections. Key words. information retrieval, linear algebra, QR factorization, singular value decomposition, vector spaces

Citation Context

…storage formats (e.g., Harwell-Boeing) have been developed for this purpose (see [3]). Special techniques for computing the SVD of a sparse matrix include iterative methods such as Arnoldi [41], Lanczos [38, 47], subspace iteration [49, 47], and trace minimization [53]. All of these methods reference the sparse matrix A only through matrix-vector multiplication operations, and all can be implemented in terms…
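
The vector space idea in miniature: documents are columns of a term-document matrix, a Lanczos-based sparse SVD (scipy.sparse.linalg.svds) gives a low-rank "concept" space, and queries are scored there by cosine similarity. The tiny matrix and vocabulary below are invented for illustration:

```python
import numpy as np
from scipy.sparse.linalg import svds

# Rows: terms (matrix, vector, space, stochastic); columns: documents.
A = np.array([
    [2.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
U, s, Vt = svds(A, k=2)              # rank-2 "concept" subspace

q = np.array([1.0, 1.0, 0.0, 0.0])   # query: "matrix vector"
q_k = (q @ U) / s                    # project the query into concept space
docs_k = Vt.T                        # document coordinates in concept space

# Cosine similarity of the query against each document (eps avoids 0/0
# for the off-topic document, which projects to the zero vector).
norms = np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q_k) + 1e-12
scores = docs_k @ q_k / norms
best = int(np.argmax(scores))        # document 0 matches the query best
```

The sign ambiguity in singular vectors does not affect the scores, since the query and document coordinates flip sign together.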

A note on the stochastic realization problem

by Anders Lindquist, Giorgio Picci - Hemisphere Publishing Corporation, 1976
Abstract - Cited by 133 (28 self)
Abstract. Given a mean square continuous stochastic vector process y with stationary increments and a rational spectral density that is finite and nonsingular at infinity, consider the problem of finding all minimal (wide sense) Markov representations (stochastic realizations) of y. All such realizations are characterized and classified with respect to deterministic as well as probabilistic properties. It is shown that only certain realizations (internal stochastic realizations) can be determined from the given output process y. All others (external stochastic realizations) require that the probability space be extended with an exogenous random component. A complete characterization of the sets of internal and external stochastic realizations is provided. It is shown that the state process of any internal stochastic realization can be expressed in terms of two steady-state Kalman-Bucy filters, one evolving forward in time over the infinite past and one backward over the infinite future. An algorithm is presented which generates families of external realizations defined on the same probability space and totally ordered with respect to state covariances.

Overview and recent advances in partial least squares

by Roman Rosipal, Nicole Krämer - in ‘Subspace, Latent Structure and Feature Selection Techniques’, Lecture Notes in Computer Science, 2006
Abstract - Cited by 130 (4 self)
Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the …

Citation Context

…orthogonal. The approximate solution obtained after p steps is equal to the PLS estimator obtained after p iterations. The conjugate gradient algorithm is in turn closely related to the Lanczos algorithm [19], a method for approximating eigenvalues. The space spanned by the columns of K = (z, Az, …, A^(p−1)z) is called the p-dimensional Krylov space of A and z. We denote this Krylov space by K. In the Lanczos a…
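
The symmetric Lanczos iteration referenced here can be sketched directly: it builds an orthonormal basis of the Krylov space and a small tridiagonal matrix T whose eigenvalues (Ritz values) approximate A's extreme eigenvalues. Full reorthogonalization is added below for numerical stability, and the test matrix is invented for illustration:

```python
import numpy as np

def lanczos_tridiag(A, z, p):
    """p steps of symmetric Lanczos on A with start vector z; returns the
    p-by-p tridiagonal projection T (with full reorthogonalization)."""
    n = len(z)
    Q = np.zeros((n, p))
    alpha, beta = np.zeros(p), np.zeros(p - 1)
    q = z / np.linalg.norm(z)
    for j in range(p):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)  # reorthogonalize
        if j < p - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

# Symmetric test matrix with well-separated extreme eigenvalues 0 and 100.
rng = np.random.default_rng(3)
n = 100
eigs = np.concatenate(([0.0], np.linspace(40.0, 60.0, n - 2), [100.0]))
Qr, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Qr @ np.diag(eigs) @ Qr.T

T = lanczos_tridiag(A, rng.standard_normal(n), p=15)
ritz = np.linalg.eigvalsh(T)   # extreme Ritz values converge first
```

This reduction of a large symmetric matrix to a small tridiagonal one is essentially the iteration of the 1950 Lanczos paper; the reorthogonalization step compensates for the loss of orthogonality that plagues the bare three-term recurrence in floating point.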
