
## LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares (1982)

Venue: ACM Trans. Math. Software

Citations: 653 (21 self)

### Citations

1137 | Methods of conjugate gradients for solving linear systems
- Hestenes, Stiefel
- 1952
Citation Context ...= n, but these conditions are not essential. The method, to be called algorithm LSQR, is similar in style to the well-known method of conjugate gradients (CG) as applied to the least-squares problem [10]. The matrix A is used only to compute products of the form Av and A^T u for various vectors v and u. Hence A will normally be large and sparse or will be expressible as a product of matrices that are s...
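The access pattern described here (A used only through products Av and A^T u) maps directly onto SciPy's `LinearOperator` interface, and `scipy.sparse.linalg.lsqr` is a descendant of this algorithm. A minimal sketch, using a small dense matrix as a stand-in for a sparse operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

# A is never formed inside the solver: only the products A @ v and
# A.T @ u are supplied, matching the paper's access pattern.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 3))        # dense stand-in for a sparse A
A = LinearOperator((6, 3), matvec=lambda v: M @ v, rmatvec=lambda u: M.T @ u)

b = rng.standard_normal(6)
x = lsqr(A, b, atol=1e-12, btol=1e-12)[0]   # least-squares solution

# Reference solution from a dense factorization, for comparison.
x_ref = np.linalg.lstsq(M, b, rcond=None)[0]
```

For a well-conditioned problem like this, the two solutions agree to high accuracy.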

537 | An iteration method for the solution of the eigenvalue problem of linear differential and integral operators
- Lanczos
- 1950
Citation Context ...ecision of floating-point arithmetic is ε, the smallest machine-representable number such that 1 + ε > 1. 2. MOTIVATION VIA THE LANCZOS PROCESS. In this section we review the symmetric Lanczos process [13] and its use in solving symmetric linear equations Bx = b. Algorithm LSQR is then derived by applying the Lanczos process to a particular symmetric system. Although a more direct development is given ...
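The symmetric Lanczos process reviewed here builds an orthonormal basis and a tridiagonal projection of B one column at a time. The sketch below is a bare-bones illustration (no reorthogonalization, so the columns of V slowly lose orthogonality in floating point), not the paper's algorithm; it also shows the use in solving Bx = b that the snippet mentions:

```python
import numpy as np

def lanczos(B, b, k):
    # Build V (orthonormal columns) and tridiagonal T with T = V.T @ B @ V.
    n = b.size
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)                 # beta[j] couples steps j and j+1
    v_prev = np.zeros(n)
    v = b / np.linalg.norm(b)
    b_prev = 0.0
    for j in range(k):
        V[:, j] = v
        w = B @ v - b_prev * v_prev    # three-term recurrence
        alpha[j] = v @ w
        w -= alpha[j] * v
        b_prev = np.linalg.norm(w)
        beta[j] = b_prev
        if b_prev > 0:
            v_prev, v = v, w / b_prev
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return V, T

# Solving a symmetric system Bx = b via the projected tridiagonal system.
rng = np.random.default_rng(1)
S = rng.standard_normal((5, 5))
B = S + S.T + 10 * np.eye(5)           # symmetric, well conditioned
b = rng.standard_normal(5)
V, T = lanczos(B, b, 5)                # full k = n run
y = np.linalg.solve(T, np.linalg.norm(b) * np.eye(5)[:, 0])
x = V @ y
```

With k = n and exact arithmetic this recovers the exact solution; in floating point it is accurate for small well-conditioned examples like this one.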

323 | Calculating the singular values and pseudo-inverse of a matrix
- Golub, Kahan
- 1965
Citation Context ...ned matrix for which CG-like methods will converge more quickly. Some such transformation methods are considered in [21]. Algorithm LSQR is based on the bidiagonalization procedure of Golub and Kahan [9]. It generates a sequence of approximations {x_k} such that the residual norm ||r_k||_2 decreases monotonically, where r_k = b − A x_k. Analytically, the sequence {x_k} is identical to the sequence genera...
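The Golub-Kahan bidiagonalization referred to here can be sketched as follows. This is an illustrative version of the paper's "Bidiag 1" recurrence (no reorthogonalization or breakdown handling, and not the paper's FORTRAN implementation); it produces U and V with orthonormal columns and a lower-bidiagonal B_k satisfying A V_k = U_{k+1} B_k:

```python
import numpy as np

def bidiag1(A, b, k):
    # Bidiag 1 (Golub-Kahan): beta_1 u_1 = b, alpha_1 v_1 = A^T u_1, then
    #   beta_{i+1}  u_{i+1} = A v_i        - alpha_i    u_i
    #   alpha_{i+1} v_{i+1} = A^T u_{i+1}  - beta_{i+1} v_i
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alphas = np.zeros(k)
    betas = np.zeros(k + 1)
    betas[0] = np.linalg.norm(b)
    U[:, 0] = b / betas[0]
    v = A.T @ U[:, 0]
    for i in range(k):
        alphas[i] = np.linalg.norm(v)
        V[:, i] = v / alphas[i]
        u = A @ V[:, i] - alphas[i] * U[:, i]
        betas[i + 1] = np.linalg.norm(u)
        U[:, i + 1] = u / betas[i + 1]
        v = A.T @ U[:, i + 1] - betas[i + 1] * V[:, i]
    # Lower-bidiagonal B_k: alphas on the diagonal, betas[1:] below it,
    # so that A @ V == U @ Bk holds (in exact arithmetic).
    Bk = np.zeros((k + 1, k))
    Bk[np.arange(k), np.arange(k)] = alphas
    Bk[np.arange(1, k + 1), np.arange(k)] = betas[1:]
    return U, V, Bk

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
U, V, Bk = bidiag1(A, b, k=3)
```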

123 | Numerical methods for solving linear least squares problems
- Golub
- 1965
Citation Context ...e led naturally to the least-squares problem min ||β1 e1 − Bk yk|| (4.5), which forms the basis for LSQR. Computationally, it is advantageous to solve (4.5) using the standard QR factorization of Bk [8], that is, the same factorization (3.10) that links the two bidiagonalizations. This takes the form Qk Bk = [Rk; 0] (4.6), where Rk is upper bidiagonal with ρ1, ..., ρk on its diagonal and θ2, ..., θk on its superdiagonal, and Qk = Qk,k+1 ... Q2,3 Q1,2 is a product of plane r...
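The QR factorization of the lower-bidiagonal B_k by a product of plane rotations can be illustrated generically. This is a sketch of Givens QR on a small lower-bidiagonal matrix, not the paper's recurrence (which generates one rotation per iteration and never stores Q):

```python
import numpy as np

def givens_qr_lower_bidiag(Bk):
    # Eliminate each subdiagonal entry of the (k+1) x k lower-bidiagonal Bk
    # with one plane rotation acting on rows i, i+1; the result R is upper
    # bidiagonal, and Q @ Bk == R (with a zero last row).
    kp1, k = Bk.shape
    R = Bk.astype(float).copy()
    Q = np.eye(kp1)
    for i in range(k):
        a, c = R[i, i], R[i + 1, i]
        r = np.hypot(a, c)
        cs, sn = a / r, c / r
        G = np.array([[cs, sn], [-sn, cs]])   # zeroes R[i+1, i]
        R[i:i + 2, :] = G @ R[i:i + 2, :]
        Q[i:i + 2, :] = G @ Q[i:i + 2, :]
    return Q, R

# Example: alphas on the diagonal, betas on the subdiagonal.
Bk = np.array([[3.0, 0.0, 0.0],
               [1.0, 2.0, 0.0],
               [0.0, 1.5, 2.5],
               [0.0, 0.0, 0.5]])
Q, R = givens_qr_lower_bidiag(Bk)
```

Each rotation mixes only two adjacent rows, so the fill it creates is confined to the superdiagonal, which is why R stays bidiagonal rather than becoming fully triangular.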

82 | Statistical Computing
- Kennedy, Gentle
- 1980
Citation Context ...sparse or will be expressible as a product of matrices that are sparse or have special structure. A typical application is to the large least-squares problems arising from analysis of variance (e.g., [12]). CG-like methods are iterative in nature. They are characterized by their need for only a few vectors of working storage and by their theoretical convergence within at most n iterations (if exact ar...

59 | Accelerated projection methods for computing pseudoinverse solutions of systems of linear equations
- Björck, Elfving
- 1979
Citation Context ...ors v_{i+1} and d_i in LSQR are proportional to s_i and p_i, respectively. Note that q_i and s_i just given can share the same workspace. A FORTRAN implementation of CGLS has been given by Björck and Elfving [3]. This incorporates an acceleration (preconditioning) technique in a way that requires minimal additional storage. 7.2 Craig's Method for Ax = b. A very simple method is known for solving compatible ...
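For reference, the CGLS iteration being discussed (conjugate gradients applied implicitly to the normal equations A^T A x = A^T b, touching A only through products) can be sketched as below. This is a generic textbook version, not Björck and Elfving's preconditioned FORTRAN code:

```python
import numpy as np

def cgls(A, b, iters):
    # CG on A^T A x = A^T b, without ever forming A^T A explicitly.
    x = np.zeros(A.shape[1])
    r = b.copy()                        # residual b - A x
    s = A.T @ r
    p = s.copy()                        # search direction
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 4))
b = rng.standard_normal(8)
x = cgls(A, b, iters=4)   # exact in n = 4 steps in exact arithmetic
```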

55 | Error analysis of the Lanczos algorithm for tridiagonalizing a symmetric matrix
- Paige
- 1976
Citation Context ...t development is given in Section 4, the present derivation may remain useful for a future error analysis of LSQR, since many of the rounding error properties of the Lanczos process are already known [18]. (ACM Transactions on Mathematical Software, Vol. 8, No. 1, March 1982.) Given a symmetric matrix B and a starting vector b, th...

51 | The solution of sparse linear least squares problems using Givens rotations
- George, Heath
- 1980
Citation Context ...arable precision (Figures 1-4). 9. SUMMARY. A direct method may often be preferable to the iterative methods discussed here; for instance, the methods given by Björck and Duff [2] and George and Heath [7] show great promise for sparse least squares. Nevertheless, iterative methods will always retain advantages for certain applications. For example, conjugate-gradient methods converge extremely quickly ...

43 | LSQR: Sparse linear equations and least squares problems
- Paige, Saunders
- 1982
Citation Context ...by several other published algorithms. However, LSQR is shown by example to be numerically more reliable in various circumstances than the other methods considered. The FORTRAN implementation of LSQR [22] is designed for practical application. It incorporates reliable stopping criteria and provides the user with computed estimates of the following quantities: x, r = b − Ax, A^T r, ||r||_2, ||A||_F, ...

16 | Bidiagonalization of Matrices and Solution of Linear Equations
- Paige
- 1974
Citation Context ...e. 7.2 Craig's Method for Ax = b. A very simple method is known for solving compatible systems Ax = b. This is Craig's method, as described in [6]. It is derivable from Bidiag 1, as shown by Paige [17], and differs from all other methods discussed here by minimizing the error norm ||x_k − x|| at each step, rather than the residual norm ||b − A x_k|| = ||A (x_k − x)||. We review the derivation brie...
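Craig's method is mathematically equivalent to applying CG to A A^T y = b and setting x = A^T y (often called CGNE), which makes the error-minimizing property easy to see: x stays in range(A^T) throughout. A minimal sketch under that equivalence, not Paige's Bidiag-1-based recurrence:

```python
import numpy as np

def craig_cgne(A, b, iters):
    # CG on A A^T y = b with x = A^T y; for compatible systems this
    # minimizes the error norm ||x_k - x|| at each step.
    x = np.zeros(A.shape[1])
    r = b.copy()                        # r = b - A x
    p = A.T @ r                         # search direction in x-space
    rho = r @ r
    for _ in range(iters):
        alpha = rho / (p @ p)
        x += alpha * p
        r -= alpha * (A @ p)
        rho_new = r @ r
        p = A.T @ r + (rho_new / rho) * p
        rho = rho_new
    return x

# Compatible square system for the demo.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
b = rng.standard_normal(5)
x = craig_cgne(A, b, iters=5)
```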

14 | Algorithms for Sparse Matrix Eigenvalue Problems
- Lewis
- 1976
Citation Context ...0] can be applied to singular symmetric systems, and that extreme growth in the resulting ||x_k|| forms an essential part of a practical method for computing eigenvectors of large symmetric matrices [14]. By analogy, in the presence of rounding errors LSQR will usually produce an approximate singular vector of the matrix A. In fact, using (6.1) and ||r_k|| ≈ ||b||, we see that the normalized vecto...

11 | On the conjugate gradient method for solving linear least squares problems
- Elfving
- 1978
Citation Context ...ted by CRAIG, CGLS, RRLS, and RRLSQR. The machine used was a Burroughs B6700 with relative precision ε = 0.5 × 8^−12 ≈ 0.7 × 10^−11. The results given here are complementary to those given by Elfving [5], who compares CGLS with several other conjugate-gradient algorithms and also investigates their performance on problems where A is singular. 8.1 Generation of Test Problems. The following steps may be...

7 | Aspects of generalized inverses in analysis and regularization, in Generalized Inverses and Applications
- Nashed
- 1976
Citation Context ...ller σ_i are very close together, and therefore suggests criterion S3 as a means of regularizing such problems when they are very ill-conditioned, as in the discretization of ill-posed problems (e.g., [15]). For example, if the singular values of A were known to be of order 1, 0.9, 10^−3, 10^−6, 10^−7, the effect of the two smallest singular values could probably be suppressed by setting CONLIM = 10^4...

7 | Research, development and LINPACK
- Stewart
- 1977
Citation Context ...nalysis and with knowledge of data accuracy. Since this argument does not depend on orthogonality, S1 can be used in any method for solving compatible linear systems. 6.2 Incompatible Systems. Stewart [23] has observed that if r_k = b − A x_k and E_k = −r_k r_k^T A / ||r_k||^2, and if r̄_k = b − (A + E_k) x_k, then (A + E_k)^T r̄_k = 0, so that x_k and r̄_k are the exact solution and residual for a system with A perturbed. Since ||E_k||...
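Stewart's backward-error observation is easy to verify numerically: with E = −r r^T A / ||r||² built from the residual r = b − A x of any trial point x, the perturbed residual b − (A + E) x is exactly orthogonal to the columns of A + E, so x solves the perturbed least-squares problem exactly. A small check on generic data (not the paper's test problems):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((7, 3))
b = rng.standard_normal(7)
x = rng.standard_normal(3)                 # any approximate solution

r = b - A @ x                              # residual at x
E = -np.outer(r, r @ A) / (r @ r)          # rank-one backward perturbation
rbar = b - (A + E) @ x                     # residual for the perturbed A

# (A + E)^T rbar should vanish up to roundoff.
backward = np.linalg.norm((A + E).T @ rbar)
```

The identity holds because rbar is a scalar multiple of r, and E^T r = −A^T r by construction, so the two terms of (A + E)^T rbar cancel exactly.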

4 | A direct method for the solution of sparse linear least squares problems
- Björck, Duff
- 1980
Citation Context ...iterations to obtain comparable precision (Figures 1-4). 9. SUMMARY. A direct method may often be preferable to the iterative methods discussed here; for instance, the methods given by Björck and Duff [2] and George and Heath [7] show great promise for sparse least squares. Nevertheless, iterative methods will always retain advantages for certain applications. For example, conjugate-gradient methods co...

4 | Iterative methods for linear least squares problems
- Chen
- 1975
Citation Context ...to be estimated from ||D_k||_F, as already described. Algorithms LSCG and LSLQ need not be considered further. 7.5 Chen's Algorithm RRLS. Another algorithm based on Bidiag 2 has been described by Chen [4]. This is algorithm RRLS, and it combines Bidiag 2 with the so-called residual-reducing method of Householder [11]. In the notation of Section 3 it may be described as follows. The residual-reducing p...

3 | Terminating and non-terminating iterations for solving linear systems
- Householder
- 1955
Citation Context ...5 Chen's Algorithm RRLS. Another algorithm based on Bidiag 2 has been described by Chen [4]. This is algorithm RRLS, and it combines Bidiag 2 with the so-called residual-reducing method of Householder [11]. In the notation of Section 3 it may be described as follows. The residual-reducing property is implicit in steps 2(b) and (c). Algorithm RRLS: (1) Set r_0 = b, θ_1 v_1 = A^T b, w_1 = v_1, x_0 = 0. ...

1 | Use of conjugate gradients for solving linear least squares problems
- Björck
- 1979
Citation Context ...le in Figure 4, log10 ||A^T r_k|| = −13.9 and log10 ||x_k − x|| = −4.6 at k = 36, with little change for larger k. 8.7 Other Results. Algorithms CGLS and LSQR have been compared independently by Björck [1], confirming that on both compatible and incompatible systems LSQR is likely to ...

1 | The Algebraic Eigenvalue Problem
- Wilkinson
- 1965
Citation Context ...ame factorization (3.10) that links the two bidiagonalizations. This takes the form Qk Bk = [Rk; 0] (4.6), where Rk is upper bidiagonal with ρ1, ..., ρk on its diagonal and θ2, ..., θk on its superdiagonal, and Qk = Qk,k+1 ... Q2,3 Q1,2 is a product of plane rotations (e.g., [25]) designed to eliminate the subdiagonals β2, β3, ... of Bk. The vectors yk and tk+1 could then be found from Rk yk = fk (4.7) and tk+1 = Qk^T [0; φ̄k+1] (4.8). However, yk in (4.7) will normally have no el...