
## Subspace iteration randomization and singular value problems. arXiv preprint arXiv:1408.2208 (2014)

Citations: 4 (1 self)

### Citations

7696 | Matrix Analysis - Horn, Johnson - 1985

3878 | Eigenfaces for recognition - Turk, Pentland - 1991
Citation Context: ...ome of the most competitive methods for rapid low-rank matrix approximation, which is vital in many areas of scientific computing, including principal component analysis [47, 65] and face recognition [60, 78], large scale data compression [21, 22, 35, 56] and fast approximate algorithms for PDEs and integral equations [16, 33, 57, 71, 72, 83, 82]. In this paper, we consider randomized algorithms for low-r...

3404 | Principal component analysis - Jolliffe - 1986
Citation Context: ...ve established themselves as some of the most competitive methods for rapid low-rank matrix approximation, which is vital in many areas of scientific computing, including principal component analysis [47, 65] and face recognition [60, 78], large scale data compression [21, 22, 35, 56] and fast approximate algorithms for PDEs and integral equations [16, 33, 57, 71, 72, 83, 82]. In this paper, we consider r...

1144 | Accuracy and Stability of Numerical Algorithms - Higham - 2002

1135 | Methods of conjugate gradients for solving linear systems - Hestenes, Stiefel - 1952
Citation Context: ...jugate gradient (PCG) steps to iteratively solve a linear system of equations Ax = b for a random right-hand side b. The PCG is a very popular technique for solving large SPD systems of equations [36, 2]. We refer the reader to [52, 57] for details about the HSS matrix structure and its numerical construction, but emphasize that the key and most time-consuming step for computing HSS preconditioners i...
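The context above describes running preconditioned conjugate gradient (PCG) steps on an SPD system Ax = b. A minimal NumPy sketch of PCG with the preconditioner passed as a callable (an HSS preconditioner, as in the context, would plug in the same way); the function name and the `tol`/`maxiter` defaults are illustrative assumptions, not values from the paper:

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-8, maxiter=500):
    """Minimal preconditioned conjugate gradient for SPD A.

    M_solve(r) applies the inverse of the preconditioner; an HSS
    preconditioner as in the citation context would be passed the
    same way.
    """
    x = np.zeros_like(b)
    r = b.copy()                 # residual for the start guess x = 0
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it         # converged: solution and step count
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter
```

The returned step count is the quantity the context reports for comparing preconditioners: a better preconditioner means fewer PCG steps.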

768 | Applied Numerical Linear Algebra - Demmel - 1997
Citation Context: ...f randomized algorithms in data analysis, see [56]. The subspace iteration is a classical approach for computing singular values. There is extensive convergence analysis on subspace iteration methods [30, 19, 4, 3] and a large literature on accelerated subspace iteration methods [68]. In general, it is well-suited for fast computations on modern computers because its main computations are in terms of matrix-m...
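The context notes that subspace iteration's main computations are matrix-matrix products and QR factorizations, which is why it maps well onto modern hardware. A minimal NumPy sketch of subspace iteration from a random Gaussian start, in the spirit of that description; the oversampling `p`, iteration count `q`, and function name are illustrative assumptions, not the paper's Algorithm 1.1:

```python
import numpy as np

def randomized_subspace_iteration(A, k, q=2, p=10, seed=None):
    """Sketch of subspace iteration from a random Gaussian start.

    Returns an orthonormal Q with k + p columns such that
    A ~= Q @ (Q.T @ A).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Main work: matrix-matrix products and QR factorizations, the
    # operations the context singles out as hardware-friendly.
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + p)))
    for _ in range(q):
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    return Q
```

With rapidly decaying singular values, ‖A − QQᵀA‖₂ is typically close to the optimal error σ_{k+1}; quantifying this is what the convergence analyses cited here do.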

745 | Face recognition: feature versus templates - Brunelli, Poggio - 1993
Citation Context: ...s. Eigenfaces is a well studied method of face recognition based on principal component analysis (PCA), popularised by the seminal work of Turk and Pentland [78]. For more recent work and surveys, see [8, 48, 73, 74, 75] and the references therein. In this experiment we demonstrate the effects of randomized algorithms on face recognition. Typical face recognition starts with a database of training images, which are ...

728 | Application of the Karhunen–Loeve procedure for the characterization of human faces - Kirby, Sirovich - 1990
Citation Context: ...s. Eigenfaces is a well studied method of face recognition based on principal component analysis (PCA), popularised by the seminal work of Turk and Pentland [78]. For more recent work and surveys, see [8, 48, 73, 74, 75] and the references therein. In this experiment we demonstrate the effects of randomized algorithms on face recognition. Typical face recognition starts with a database of training images, which are ...

674 | Using linear algebra for intelligent information retrieval - Berry, Dumais - 1995
Citation Context: ...ion we report more numerical experimental results to shed more light on randomized algorithms. Latent Semantic Indexing (LSI) is a massive data processing application based on low-rank approximations [5]. A database of terms and documents is processed to generate a term-document matrix, where each column is a document and each non-zero in the column represents the weighted number of matches to a pa...

620 | Numerical Methods for Large Eigenvalue Problems - Saad - 1992
Citation Context: ...assical approach for computing singular values. There is extensive convergence analysis on subspace iteration methods [30, 19, 4, 3] and a large literature on accelerated subspace iteration methods [68]. In general, it is well-suited for fast computations on modern computers because its main computations are in terms of matrix-matrix products and QR factorizations that have been highly optimized for...

607 | Low-dimensional procedure for the characterization of human faces - Sirovich, Kirby - 1987
Citation Context: ...s. Eigenfaces is a well studied method of face recognition based on principal component analysis (PCA), popularised by the seminal work of Turk and Pentland [78]. For more recent work and surveys, see [8, 48, 73, 74, 75] and the references therein. In this experiment we demonstrate the effects of randomized algorithms on face recognition. Typical face recognition starts with a database of training images, which are ...

534 | The University of Florida Sparse Matrix Collection - Davis, Hu - 2011
Citation Context: ... of structured matrix computations. G3circuit is a 1585478 × 1585478 sparse SPD matrix arising from circuit simulations. It is publicly available in the University of Florida Sparse Matrix Collection [18]. Figure 8.3 depicts its sparsity pattern in the symmetric minimum degree ordering [29]. A direct factorization of this matrix creates a large amount of fill-in. In particular, the Schur complement of...

455 | The approximation of one matrix by another of lower rank - Eckart, Young - 1936
Citation Context: ...real. In general, A_k is an ideal rank-k approximation to A, due to the following celebrated property of the SVD: Theorem 1.1. (Eckart and Young [24], Golub and van Loan [30]) min_{rank(B)≤k} ‖A − B‖₂ = ‖A − A_k‖₂ = σ_{k+1} (1.2) and min_{rank(B)≤k} ‖A − B‖_F = ‖A − A_k‖_F = √(Σ_{j=k+1}^n σ_j²) (1.3). Remark 1.1. While there are results similar to Theorem 1.1 for all unit...
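The Eckart–Young property quoted in this context is easy to check numerically; a minimal NumPy demonstration, with the matrix size and rank `k` chosen arbitrarily for illustration:

```python
import numpy as np

# Numerical check of the Eckart-Young property: the truncated SVD
# A_k attains the minimal 2-norm and Frobenius-norm errors over all
# matrices of rank at most k.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 3
Ak = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k truncated SVD

# (1.2): spectral-norm error equals sigma_{k+1} (s[k], 0-indexed)
assert np.isclose(np.linalg.norm(A - Ak, 2), s[k])
# (1.3): Frobenius-norm error equals sqrt(sum_{j>k} sigma_j^2)
assert np.isclose(np.linalg.norm(A - Ak, "fro"),
                  np.sqrt((s[k:] ** 2).sum()))
```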

364 | ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods - Lehoucq, Sorensen, et al. - 1998
Citation Context: ...hoice of a good start matrix [4, 3]. Another classical class of approximation methods for computing an approximate SVD are the Krylov subspace methods, such as the Lanczos algorithm (see, for example, [10, 17, 49, 51, 69, 81]). The computational cost of these methods depends heavily on several factors, including the start vector, properties of the input matrix and the need to stabilize the algorithm. One of the most impor...

285 | Lanczos Algorithms for Large Symmetric Eigenvalue Computations, Volume 2: Programs (Birkhäuser) - Cullum, Willoughby - 1985
Citation Context: ...hoice of a good start matrix [4, 3]. Another classical class of approximation methods for computing an approximate SVD are the Krylov subspace methods, such as the Lanczos algorithm (see, for example, [10, 17, 49, 51, 69, 81]). The computational cost of these methods depends heavily on several factors, including the start vector, properties of the input matrix and the need to stabilize the algorithm. One of the most impor...

253 | Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions - Halko, Martinsson, et al.
Citation Context: ... rapid low-rank matrix approximation, which is vital in many areas of scientific computing, including principal component analysis [47, 65] and face recognition [60, 78], large scale data compression [21, 22, 35, 56] and fast approximate algorithms for PDEs and integral equations [16, 33, 57, 71, 72, 83, 82]. In this paper, we consider randomized algorithms for low-rank approximations and singular value approxima...

213 | Fast Monte Carlo algorithms for matrices II: Computing a low-rank approximation to a matrix - Drineas, Kannan, et al.
Citation Context: ... rapid low-rank matrix approximation, which is vital in many areas of scientific computing, including principal component analysis [47, 65] and face recognition [60, 78], large scale data compression [21, 22, 35, 56] and fast approximate algorithms for PDEs and integral equations [16, 33, 57, 71, 72, 83, 82]. In this paper, we consider randomized algorithms for low-rank approximations and singular value approxima...

167 | Numerical Methods in Finite Element Analysis - Bathe, Wilson - 1976
Citation Context: ...f randomized algorithms in data analysis, see [56]. The subspace iteration is a classical approach for computing singular values. There is extensive convergence analysis on subspace iteration methods [30, 19, 4, 3] and a large literature on accelerated subspace iteration methods [68]. In general, it is well-suited for fast computations on modern computers because its main computations are in terms of matrix-m...

165 | Improved approximation algorithms for large matrices via random projections - Sarlos
Citation Context: ...ased on the QR, LU, or two-sided orthogonal (aka UTV) factorizations [14, 25, 32, 42, 59, 63, 44]. Recently, there has been an explosion of randomized algorithms for computing low-rank approximations [16, 21, 22, 27, 28, 54, 53, 55, 61, 80, 70]. There is also a software package available for computing interpolative decompositions, a form of low-rank approximation, and for computing the PCA with randomized sampling [58]. These algorithms are ...

160 | Efficient algorithms for computing a strong rank-revealing QR factorization - Gu, Eisenstat - 1996
Citation Context: ...Many approaches have been taken in the literature for computing low-rank approximations, including rank-revealing decompositions based on the QR, LU, or two-sided orthogonal (aka UTV) factorizations [14, 25, 32, 42, 59, 63, 44]. Recently, there has been an explosion of randomized algorithms for computing low-rank approximations [16, 21, 22, 27, 28, 54, 53, 55, 61, 80, 70]. There is also a software package available for comput...

153 | The evolution of the minimum degree ordering algorithm - George, Liu - 1989
Citation Context: ...arising from circuit simulations. It is publicly available in the University of Florida Sparse Matrix Collection [18]. Figure 8.3 depicts its sparsity pattern in the symmetric minimum degree ordering [29]. A direct factorization of this matrix creates a large amount of fill-in. In particular, the Schur complement of the leading 1582178 × 1582178 principal submatrix, to be called A, is a 3300 × 3300 de...

147 | Templates for the Solution of Algebraic Eigenvalue Problems - Bai, Demmel, et al. - 2000
Citation Context: ...f randomized algorithms in data analysis, see [56]. The subspace iteration is a classical approach for computing singular values. There is extensive convergence analysis on subspace iteration methods [30, 19, 4, 3] and a large literature on accelerated subspace iteration methods [68]. In general, it is well-suited for fast computations on modern computers because its main computations are in terms of matrix-m...

116 | Rank revealing QR factorizations - Chan - 1987
Citation Context: ...tions in terms of rank-revealing factorizations in Section 6 and condition number estimation in Section 7. 6. Rank-revealing Factorizations. Rank-revealing factorizations were first discussed in Chan [12]. Generally speaking, there are rank-revealing UTV factorizations [25, 76], QR factorizations [13, 14, 32], and LU factorizations [59, 62]. While there is no uniform definition of the rank-revealing f...

106 | A shifted block Lanczos algorithm for solving sparse symmetric generalized eigenvalue problems - Grimes, Lewis, et al. - 1994
Citation Context: ...o the limited data reuse involved in such operations. In fact, one focus of Krylov subspace research is on effective avoidance of matrix-vector operations in Krylov subspace methods (see, for example, [31, 67]). This work focuses on the surprisingly strong performance of randomized algorithms in delivering highly accurate low-rank approximations and singular values. To illustrate, we introduce Algorithm 1...

94 | Randomized algorithms for the low-rank approximation of matrices - Liberty, Woolfe, et al. - 2007
Citation Context: ...ased on the QR, LU, or two-sided orthogonal (aka UTV) factorizations [14, 25, 32, 42, 59, 63, 44]. Recently, there has been an explosion of randomized algorithms for computing low-rank approximations [16, 21, 22, 27, 28, 54, 53, 55, 61, 80, 70]. There is also a software package available for computing interpolative decompositions, a form of low-rank approximation, and for computing the PCA with randomized sampling [58]. These algorithms are ...

85 | Relative-error CUR matrix decompositions - Drineas, Mahoney, et al.
Citation Context: ...on, which is considered an optional postprocessing step there. In this section, we discuss the pros and cons of SVD truncation. We start with the following simple lemma, versions of which appear in [7, 23, 35]. Lemma 2.2. Given an m × ℓ matrix Q with orthonormal columns, with ℓ ≤ n, then for any ℓ × n matrix B, ‖A − Q(QᵀA)‖₂ ≤ ‖A − QB‖₂ and ‖A − Q(QᵀA)‖_F ≤ ‖A − QB‖_F. Lemma 2.2 makes it obvious that any SVD trunc...
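Lemma 2.2 as quoted in this context (B = QᵀA is the best factor for a fixed orthonormal Q) can be illustrated numerically; a minimal NumPy check, with all sizes chosen arbitrarily:

```python
import numpy as np

# Numerical illustration of Lemma 2.2: for an m x l matrix Q with
# orthonormal columns, B = Q^T A minimizes both ||A - Q B||_2 and
# ||A - Q B||_F over all l x n matrices B.
rng = np.random.default_rng(0)
m, n, l = 12, 8, 4
A = rng.standard_normal((m, n))
Q, _ = np.linalg.qr(rng.standard_normal((m, l)))   # orthonormal columns
B = rng.standard_normal((l, n))                    # arbitrary competitor
for ord_ in (2, "fro"):
    assert (np.linalg.norm(A - Q @ (Q.T @ A), ord_)
            <= np.linalg.norm(A - Q @ B, ord_) + 1e-12)
```

The Frobenius case follows because Q(QᵀA) is the orthogonal projection of A onto the range of Q; the lemma says the same optimality holds in the 2-norm.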

85 | Condition estimates - Hager - 1984
Citation Context: ...to the randomization of the start matrix. Below we concentrate on estimating ‖A‖₁. Currently, Hager's method is one of the most popular estimators for ‖A‖₁ and is the default 1-norm estimator of LAPACK [1, 34, 37, 38]. Hager's method is based on a variant of the gradient descent method to find a local maximizer for the following optimization problem: ‖A‖₁ = max_{x∈S} ‖Ax‖₁, where S = {x ∈ Rⁿ : ‖x‖₁ ≤ 1} (7.1). Algor...
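The optimization problem quoted in this context can be attacked with the basic Hager loop: ascend ‖Ax‖₁ over the cross-polytope S by following a subgradient and jumping to the most promising vertex e_j. A minimal sketch, assuming matrix access only through products with A and Aᵀ; the function name and `maxiter` are illustrative, and LAPACK's production estimator adds safeguards (extra test vectors, cycling checks) omitted here:

```python
import numpy as np

def hager_norm1_estimate(matvec, rmatvec, n, maxiter=5):
    """Sketch of the basic Hager loop for estimating ||A||_1.

    matvec(x) applies A, rmatvec(x) applies A^T.
    """
    x = np.full(n, 1.0 / n)                 # start at the centroid of S
    for _ in range(maxiter):
        y = matvec(x)
        xi = np.sign(y)
        xi[xi == 0] = 1.0
        z = rmatvec(xi)                     # subgradient of x -> ||Ax||_1
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:           # local maximizer reached
            return np.linalg.norm(y, 1)
        x = np.zeros(n)                     # jump to the best vertex e_j
        x[j] = 1.0
    return np.linalg.norm(matvec(x), 1)
```

Every returned value is ‖Ax‖₁ at a feasible x, so the estimate is always a lower bound on ‖A‖₁, and in practice it is usually equal or very close to it.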

74 | An improved approximation algorithm for the column subset selection problem - Boutsidis, Mahoney, et al.
Citation Context: ...on, which is considered an optional postprocessing step there. In this section, we discuss the pros and cons of SVD truncation. We start with the following simple lemma, versions of which appear in [7, 23, 35]. Lemma 2.2. Given an m × ℓ matrix Q with orthonormal columns, with ℓ ≤ n, then for any ℓ × n matrix B, ‖A − Q(QᵀA)‖₂ ≤ ‖A − QB‖₂ and ‖A − Q(QᵀA)‖_F ≤ ‖A − QB‖_F. Lemma 2.2 makes it obvious that any SVD trunc...

71 | Estimating the largest eigenvalue by the power and Lanczos algorithms with a random start - Kuczynski, Wozniakowski - 1992
Citation Context: ...little work is typically sufficient to realize an excellent low-rank approximation. The faster the singular values decay, the faster Algorithm 2.2 converges. Remark 5.6. Kuczyński and Woźniakowski [46] developed probabilistic error bounds for computing the largest eigenvalue of an SPD matrix by the power method for a unit start vector under the uniform distribution. Their results correspond to the ...

67 | An implicitly restarted Lanczos method for large symmetric eigenvalue problems - Calvetti, Reichel, et al. - 1994
Citation Context: ...hoice of a good start matrix [4, 3]. Another classical class of approximation methods for computing an approximate SVD are the Krylov subspace methods, such as the Lanczos algorithm (see, for example, [10, 17, 49, 51, 69, 81]). The computational cost of these methods depends heavily on several factors, including the start vector, properties of the input matrix and the need to stabilize the algorithm. One of the most impor...

65 | A survey of condition number estimation for triangular matrices - Higham - 1987
Citation Context: ...such as linear equations, least squares problems, eigenvalue/eigenvector problems, and sparse matrix problems. For a detailed discussion of condition number estimation, see the survey paper by Higham [37] and the references therein. More recent work includes Laub and Xia [50]. A typical condition estimator uses a matrix norm estimator to estimate ‖A‖ and ‖A⁻¹‖ separately, and multiplies them together to...

63 | On the compression of low rank matrices - Cheng, Gimbutas, et al. - 2005
Citation Context: ...computing, including principal component analysis [47, 65] and face recognition [60, 78], large scale data compression [21, 22, 35, 56] and fast approximate algorithms for PDEs and integral equations [16, 33, 57, 71, 72, 83, 82]. In this paper, we consider randomized algorithms for low-rank approximations and singular value approximations within the subspace iteration framework, leading to results that simultaneously retain ...

63 | A fast randomized algorithm for the approximation of matrices - Woolfe, Liberty, et al. - 2008
Citation Context: ...ased on the QR, LU, or two-sided orthogonal (aka UTV) factorizations [14, 25, 32, 42, 59, 63, 44]. Recently, there has been an explosion of randomized algorithms for computing low-rank approximations [16, 21, 22, 27, 28, 54, 53, 55, 61, 80, 70]. There is also a software package available for computing interpolative decompositions, a form of low-rank approximation, and for computing the PCA with randomized sampling [58]. These algorithms are ...

62 | Condition numbers of Gaussian random matrices - Chen, Dongarra - 2005
Citation Context: ...Algorithm S2.1 with a small but positive q value for best performance. Appendix S4. Proofs of Propositions 5.4 and 5.5. We begin with the following probability tool. Lemma S4.1. (Chen and Dongarra [15]) Let G be an m × n standard Gaussian random matrix with m ≤ n, and let f(x) denote the probability density function of ‖G†‖₂⁻²; then f(x) satisfies f(x) ≤ L_{m,n} e^{−x/2} x^{(n−m−1)/2}, where L_{m,n} = 2^{n−m−1}...

60 | Randomized algorithms for matrices and data - Mahoney - 2011
Citation Context: ... rapid low-rank matrix approximation, which is vital in many areas of scientific computing, including principal component analysis [47, 65] and face recognition [60, 78], large scale data compression [21, 22, 35, 56] and fast approximate algorithms for PDEs and integral equations [16, 33, 57, 71, 72, 83, 82]. In this paper, we consider randomized algorithms for low-rank approximations and singular value approxima...

56 | An implementation of a randomized algorithm for principal component analysis. arXiv:1412.3510 - Szlam, Kluger, et al. - 2014
Citation Context: ...ve established themselves as some of the most competitive methods for rapid low-rank matrix approximation, which is vital in many areas of scientific computing, including principal component analysis [47, 65] and face recognition [60, 78], large scale data compression [21, 22, 35, 56] and fast approximate algorithms for PDEs and integral equations [16, 33, 57, 71, 72, 83, 82]. In this paper, we consider r...

56 | Thick-restart Lanczos method for large symmetric eigenvalue problems - Wu, Simon - 2000

54 | Updating a rank-revealing ULV decomposition - Stewart - 1993
Citation Context: ...tion number estimation in Section 7. 6. Rank-revealing Factorizations. Rank-revealing factorizations were first discussed in Chan [12]. Generally speaking, there are rank-revealing UTV factorizations [25, 76], QR factorizations [13, 14, 32], and LU factorizations [59, 62]. While there is no uniform definition of the rank-revealing factorization, a comparison of different forms of rank-revealing factorizat...

53 | The variation of the spectrum of a normal matrix - Hoffman, Wielandt - 1953
Citation Context: ...1 such that i + j − 1 ≤ n. The Hoffman-Wielandt theorem bounds the errors in the differences between the singular values of X and those of Y in terms of ‖X − Y‖_F. Theorem 3.3. (Hoffman and Wielandt [41]) Let X and Y be m × n matrices with m ≥ n. Then √(Σ_{j=1}^n |σ_j(X) − σ_j(Y)|²) ≤ ‖X − Y‖_F. Below we develop a number of theoretical results that will form the basis for our later analysis on low-rank ...
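The Hoffman–Wielandt-type bound quoted in this context is straightforward to sanity-check numerically; a minimal NumPy check on arbitrary illustrative matrices:

```python
import numpy as np

# Sanity check of the singular-value bound from Theorem 3.3:
# sqrt(sum_j |sigma_j(X) - sigma_j(Y)|^2) <= ||X - Y||_F.
rng = np.random.default_rng(1)
X = rng.standard_normal((10, 6))
Y = rng.standard_normal((10, 6))
sx = np.linalg.svd(X, compute_uv=False)   # singular values, descending
sy = np.linalg.svd(Y, compute_uv=False)
lhs = np.sqrt(((sx - sy) ** 2).sum())
assert lhs <= np.linalg.norm(X - Y, "fro") + 1e-12
```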

49 | Superfast multifrontal method for large structured linear systems of equations - Chandrasekaran, Gu, et al.
Citation Context: ...computing, including principal component analysis [47, 65] and face recognition [60, 78], large scale data compression [21, 22, 35, 56] and fast approximate algorithms for PDEs and integral equations [16, 33, 57, 71, 72, 83, 82]. In this paper, we consider randomized algorithms for low-rank approximations and singular value approximations within the subspace iteration framework, leading to results that simultaneously retain ...

44 | Implementation aspects of band Lanczos algorithms for computation of eigenvalues of large sparse symmetric matrices - Ruhe - 1979
Citation Context: ...o the limited data reuse involved in such operations. In fact, one focus of Krylov subspace research is on effective avoidance of matrix-vector operations in Krylov subspace methods (see, for example, [31, 67]). This work focuses on the surprisingly strong performance of randomized algorithms in delivering highly accurate low-rank approximations and singular values. To illustrate, we introduce Algorithm 1...

41 | Some applications of the rank revealing QR factorization - Chan, Hansen - 1992
Citation Context: ...ction 7. 6. Rank-revealing Factorizations. Rank-revealing factorizations were first discussed in Chan [12]. Generally speaking, there are rank-revealing UTV factorizations [25, 76], QR factorizations [13, 14, 32], and LU factorizations [59, 62]. While there is no uniform definition of the rank-revealing factorization, a comparison of different forms of rank-revealing factorizations has appeared in Foster and ...

40 | A fast randomized algorithm for overdetermined linear least-squares regression - Rokhlin, Tygert
Citation Context: ...re may be situations where singular value approximations are also desirable. In addition, it is well-known that in practical computations randomized algorithms often far outperform their error bounds [35, 58, 66], whereas the results in [35] do not suggest convergence of the computed rank-k approximation to the truncated SVD in either Algorithm 1.1 or the more general randomized subspace iteration method. Our...

32 | Subspace sampling and relative-error matrix approximation: column-row-based methods - Drineas, Mahoney, et al.

27 | Algorithm 183: An efficient and portable pseudo-random bit generator - Wichmann, Hill - 1982
Citation Context: ...4]. Given that the "random numbers" generated on modern computers are really only pseudo-random numbers that may have quite different upper tail distributions than the true Gaussian (see, for example, [77, 79]), and given that only finite precision computations are typically done in practice, it is probably meaningless to require ∆ to be much less than 10⁻¹⁶, the double precision. Additionally, with this c...

26 | A fast and efficient algorithm for low-rank approximation of a matrix - Nguyen, Do, et al. - 2009

24 | Computing smallest singular triplets with implicitly restarted Lanczos bidiagonalization - Kokiopoulou, Bekas, et al. - 2004

24 | Gaussian random number generators - Thomas, Luk, et al.
Citation Context: ...4]. Given that the "random numbers" generated on modern computers are really only pseudo-random numbers that may have quite different upper tail distributions than the true Gaussian (see, for example, [77, 79]), and given that only finite precision computations are typically done in practice, it is probably meaningless to require ∆ to be much less than 10⁻¹⁶, the double precision. Additionally, with this c...

21 | Experience with a matrix norm estimator - Higham - 1990
Citation Context: ...to the randomization of the start matrix. Below we concentrate on estimating ‖A‖₁. Currently, Hager's method is one of the most popular estimators for ‖A‖₁ and is the default 1-norm estimator of LAPACK [1, 34, 37, 38]. Hager's method is based on a variant of the gradient descent method to find a local maximizer for the following optimization problem: ‖A‖₁ = max_{x∈S} ‖Ax‖₁, where S = {x ∈ Rⁿ : ‖x‖₁ ≤ 1} (7.1). Algor...

21 | Dense fast random projections and Lean Walsh transforms - Ailon, Liberty, et al. - 2008

19 | On the existence and computation of rank-revealing LU factorizations - Pan - 2000
Citation Context: ...tions. Rank-revealing factorizations were first discussed in Chan [12]. Generally speaking, there are rank-revealing UTV factorizations [25, 76], QR factorizations [13, 14, 32], and LU factorizations [59, 62]. While there is no uniform definition of the rank-revealing factorization, a comparison of different forms of rank-revealing factorizations has appeared in Foster and Liu [26]. For the discussions in...

19 | A fast direct solver for elliptic problems on general meshes - Schmitz, Ying
Citation Context: ...computing, including principal component analysis [47, 65] and face recognition [60, 78], large scale data compression [21, 22, 35, 56] and fast approximate algorithms for PDEs and integral equations [16, 33, 57, 71, 72, 83, 82]. In this paper, we consider randomized algorithms for low-rank approximations and singular value approximations within the subspace iteration framework, leading to results that simultaneously retain ...

18 | On rank-revealing QR factorizations - Chandrasekaran, Ipsen - 1991
Citation Context: ...Many approaches have been taken in the literature for computing low-rank approximations, including rank-revealing decompositions based on the QR, LU, or two-sided orthogonal (aka UTV) factorizations [14, 25, 32, 42, 59, 63, 44]. Recently, there has been an explosion of randomized algorithms for computing low-rank approximations [16, 21, 22, 27, 28, 54, 53, 55, 61, 80, 70]. There is also a software package available for comput...

17 | Fast Monte Carlo algorithms for finding low-rank approximations - Frieze, Kannan, et al. - 2004

13 | An efficient total least squares algorithm based on a rank-revealing two-sided orthogonal decomposition - Huffel, Zha - 1993
Citation Context: ...Many approaches have been taken in the literature for computing low-rank approximations, including rank-revealing decompositions based on the QR, LU, or two-sided orthogonal (aka UTV) factorizations [14, 25, 32, 42, 59, 63, 44]. Recently, there has been an explosion of randomized algorithms for computing low-rank approximations [16, 21, 22, 27, 28, 54, 53, 55, 61, 80, 70]. There is also a software package available for comput...

12 | Singular value decomposition, eigenfaces, and 3D reconstructions - Muller, Magaia, et al. - 2004
Citation Context: ...ome of the most competitive methods for rapid low-rank matrix approximation, which is vital in many areas of scientific computing, including principal component analysis [47, 65] and face recognition [60, 78], large scale data compression [21, 22, 35, 56] and fast approximate algorithms for PDEs and integral equations [16, 33, 57, 71, 72, 83, 82]. In this paper, we consider randomized algorithms for low-r...

11 | Estimating the matrix p-norm - Higham - 1992
Citation Context: ...can choose ℓ = ⌈log₂(2/∆)⌉, in which case the constants Ĉ_∆ and γ above satisfy Ĉ_∆ < 2e(√(nℓ) + 3) and γ ≥ ‖A‖₁ / (2e√n(√(nℓ) + 4)). Remark 7.2. Hager's method has been generalized by Higham [39] to estimate the matrix p-norm for any p ≥ 1 and the mixed matrix norm ‖A‖_{α,β} for α ≥ 1 and β ≥ 1. In particular, the max-norm is the special case with α = ∞ and β = 1. Algorithm 7.2 can be trivially ...

11 | Accelerated Dense Random Projections - Liberty - 2009

10 | On the complexity of computing error bounds - Demmel, Diament, et al.
Citation Context: ...hand, it is generally expected that even estimating ‖A⁻¹‖₂ to within a constant factor independent of the matrix A must cost as much, asymptotically, as computing A⁻¹. Demmel, Diament, and Malajovich [20] show that the cost of computing an estimate of ‖A⁻¹‖ of guaranteed quality is at least the cost of testing whether the product of two n × n matrices is zero, and performing this test is conjectured to ...

9 | The rank revealing QR decomposition and SVD - Hong, Pan - 1990

9 | A fast randomized algorithm for computing a hierarchically semi-separable representation of a matrix. amath.colorado.edu/faculty/martinss/Pubs/2010 randomhudson.pdf - Martinsson - 2010

8 | A sparse matrix arithmetic based on H-matrices. Part I: Introduction to H-matrices - Hackbusch - 1999

8 | Robust approximate Cholesky factorization of rank-structured symmetric positive definite matrices - Xia, Gu

7 | Database of Faces. http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html - AT&T Laboratories Cambridge - 2002
Citation Context: ...ature vector. 3. Find the feature vector in the database that best matches the new feature vector. Our face data are obtained from the Database of Faces maintained at the AT&T Laboratories Cambridge [11]. All faces are greyscale images with a consistent resolution. There are ten different images of each of 40 distinct subjects. The size of each image is 92 × 112 pixels, with 256 grey levels per pixel. ...

7 | Probabilistic bounds on the extremal eigenvalues and condition number by the Lanczos algorithm - Kuczynski, Wozniakowski - 1994
Citation Context: ...l, by replacing Hager's method in Algorithm 7.2 with its generalized version, leading to a Corollary 7.1-like conclusion for reliability. We omit the details. Remark 7.3. Kuczyński and Woźniakowski [45] developed probabilistic error bounds for estimating the condition number using the Lanczos algorithm for unit start vectors under the uniform distribution. However, our results appear to be much stro...

7 | ID: A software package for low-rank approximation of matrices via interpolative decompositions, version 0.2, 2008 - Martinsson, Rokhlin, et al.
Citation Context: ...28, 54, 53, 55, 61, 80, 70]. There is also a software package available for computing interpolative decompositions, a form of low-rank approximation, and for computing the PCA with randomized sampling [58]. These algorithms are attractive for two main reasons: they have been shown to be surprisingly efficient computationally; and like subspace methods, the main operations involved in many randomized al...

6 | Comparison of rank revealing algorithms applied to matrices with well defined numerical ranks. Manuscript - Foster, Liu - 2006
Citation Context: ...d LU factorizations [59, 62]. While there is no uniform definition of the rank-revealing factorization, a comparison of different forms of rank-revealing factorizations has appeared in Foster and Liu [26]. For the discussions in this section, we make the following definition, which is loosely consistent with those in [26]. Definition 6.1. Given m × n matrices A and B and integer k < min(n, m), we call ...

6 | Symmetry, probability, and recognition in face space - Sirovich, Meytlis - 2009

4 |
Gaussian Measures
- Bogdanov
- 1998
(Show Context)
Citation Context ...en rank(G) = ℓ − p with probability 1. For all t ≥ 1, P{‖G†‖₂ ≥ e·t·√ℓ/(p + 1)} ≤ t^−(p+1). The following theorem provides classical tail bounds for functions of Gaussian matrices. It was taken from =-=[6]-=-[Thm. 4.5.7]. Theorem 5.3. Suppose that h is a real-valued Lipschitz function on matrices: |h(X) − h(Y)| ≤ L‖X − Y‖_F for all X, Y and a constant L > 0. Draw a standard Gaussian matrix G. Then P {h(G) ... |
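Since the excerpt truncates the bound, note that the standard form of this tail inequality is P{h(G) ≥ E h(G) + Lt} ≤ e^(−t²/2). A quick Monte Carlo check with h(G) = ‖G‖₂, which is 1-Lipschitz in the Frobenius norm, is consistent with it:

```python
import numpy as np

rng = np.random.default_rng(4)

# h(G) = ||G||_2 (largest singular value) satisfies
# |h(X) - h(Y)| <= ||X - Y||_F, i.e. Lipschitz with L = 1.
m = n = 30
trials = 2000
h = np.array([np.linalg.norm(rng.standard_normal((m, n)), 2)
              for _ in range(trials)])
Eh = h.mean()                       # Monte Carlo stand-in for E h(G)

t = 1.5
empirical = np.mean(h >= Eh + t)    # empirical tail probability
bound = np.exp(-t * t / 2.0)        # concentration bound, about 0.325
```

The empirical tail is far below the bound here, since ‖G‖₂ concentrates much more tightly than the worst case the theorem allows.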

4 |
On the probability of matching DNA fingerprints.
- Risch, Devlin
- 1992
(Show Context)
Citation Context ...This choice gives (2/∆)^{1/(p+1)} ≤ 10. For a typical choice of ∆ = 10^−16, equation (5.9) gives p = 16. For this value of ∆, the exception probability is smaller than that of matching DNA fingerprints =-=[64]-=-. Given that the “random numbers” generated on modern computers are really only pseudo-random numbers that may have quite different upper tail distributions than the true Gaussian (see, for example [7... |
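The arithmetic in this excerpt is easy to verify directly:

```python
# With Delta = 1e-16, the choice p = 16 indeed gives
# (2 / Delta)**(1 / (p + 1)) <= 10, as claimed around equation (5.9).
Delta = 1e-16
p = 16
factor = (2.0 / Delta) ** (1.0 / (p + 1))   # about 9.1
```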

4 | Regularization with randomized SVD for large-scale discrete inverse problems
- Xiang, Zou
- 2013
(Show Context)
Citation Context ...chosen to present the algorithms in Section 2 in forms that are not identical to those in [35] for ease of stating our results in Sections 4 through 8. Versions of Algorithm 2.2 have also appeared in =-=[84]-=- for solving large-scale discrete inverse problems. 2.3. To Truncate or not to Truncate. The randomized algorithms in Section 2 are presented in a slightly different form than those in [35]. One key dif... |

3 |
Low-rank revealing UTV decompositions. Numerical Algorithms
- Fierro, Hansen
- 1997

3 |
New efficient and robust HSS Cholesky factorization of SPD matrices
- Li, Gu, et al.
(Show Context)
Citation Context ...nt of the leading 1582178 × 1582178 principal submatrix, to be called A, is a 3300 × 3300 dense submatrix. Here we compute hierarchical semiseparable (HSS) preconditioners to A with the techniques in =-=[52]-=- and report the numbers of preconditioned conjugate gradient (PCG) steps to iteratively solve for a linear system of equations Ax = b for a random right hand side b. The PCG is a very popular techniqu... |

2 |
Preconditioned conjugate gradient methods
- Kolotilina
- 1990
(Show Context)
Citation Context ...jugate gradient (PCG) steps to iteratively solve for a linear system of equations Ax = b for a random right hand side b. The PCG is a very popular technique for solving large SPD systems of equations =-=[36, 2]-=-. We refer the reader to [52, 57] for details about the HSS matrix structure and its numerical construction, but emphasize that the key and most time-consuming step for computing HSS preconditioners i... |
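The experiment described in this excerpt, PCG on Ax = b with a random right-hand side, can be sketched with a self-contained implementation. The crude shifted-Cholesky preconditioner below is only a hypothetical stand-in for the HSS preconditioner of [52]:

```python
import numpy as np

rng = np.random.default_rng(6)

# SPD model problem: dense 1-D Laplacian, a small stand-in for the
# 3300 x 3300 dense SPD submatrix described in the excerpt.
n = 400
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = rng.standard_normal(n)          # random right-hand side, as in the text

def pcg(A, b, solve_M, tol=1e-8, maxiter=5000):
    """Preconditioned conjugate gradients; solve_M applies the preconditioner's inverse."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = solve_M(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        z = solve_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# Plain CG (identity preconditioner) ...
x_cg, it_cg = pcg(A, b, lambda r: r.copy())

# ... versus PCG with a crude shifted-Cholesky preconditioner, a purely
# hypothetical stand-in for an HSS preconditioner.
L = np.linalg.cholesky(A + 1e-2 * np.eye(n))
M = L @ L.T
x_pcg, it_pcg = pcg(A, b, lambda r: np.linalg.solve(M, r))
```

The iteration counts it_cg versus it_pcg play the role of the PCG step counts reported in the experiment.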

1 |
Text datasets in MATLAB format. http://www.zjucadcg.cn/dengcai/Data/TextData.html
- Cai
- 2009
(Show Context)
Citation Context ...f Vk that is the most parallel to d. Table S3.1 (Number of Agreements with Truncated SVD): for tolerance τ = 10^−3 the counts are 460 (q = 0), 709 (q = 2), 769 (q = 4); for τ = 10^−7 they are 511, 889, 910; for τ = 10^−11 they are 504, 909, 921. We use the TDT2 text data =-=[9]-=-. The TDT2 corpus consists of data collected during the first half of 1998 and taken from 6 sources, including 2 newswires (APW, NYT), 2 radio programs (VOA, PRI) and 2 television programs (CNN, ABC).... |
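The trend in the excerpt's Table S3.1, where more power iterations q give closer agreement with the truncated SVD, can be reproduced qualitatively on synthetic data with slow singular value decay. This is a hypothetical illustration; the actual experiment uses the TDT2 corpus:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic matrix with slowly decaying singular values (term-document
# matrices behave similarly), standing in for the real data.
m, n, k = 300, 200, 10
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 1.0 / np.arange(1, n + 1)               # slow decay
A = U[:, :n] @ np.diag(s) @ V.T

def power_basis(A, k, q, rng):
    """Orthonormal basis from q power iterations: range((A A^T)^q A G)."""
    Y = A @ rng.standard_normal((A.shape[1], k + 5))
    for _ in range(q):
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)
    return Q

Uk = U[:, :k]                               # true leading left singular vectors

def subspace_error(Q):
    """Sine of the largest principal angle between range(Uk) and range(Q)."""
    return np.linalg.norm(Uk - Q @ (Q.T @ Uk), 2)

# Larger q drives the subspace error down, mirroring the q = 0, 2, 4 trend.
errs = [subspace_error(power_basis(A, k, q, rng)) for q in (0, 2, 4)]
```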

Applications of statistical condition estimation to the solution of linear systems. Numerical Linear Algebra with Applications
- Laub, Xia
- 2008
(Show Context)
Citation Context ... problems, and sparse matrix problems. For a detailed discussion of condition number estimation, see the survey paper by Higham [37] and the references therein. More recent work includes Laub and Xia =-=[50]-=-. A typical condition estimator uses a matrix norm estimator to estimate ‖A‖ and ‖A−1‖ separately, and multiply them together to get an estimate for κ(A). A typical matrix norm estimator, in turn, onl... |
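The scheme described here, estimating ‖A‖ and ‖A⁻¹‖ separately and multiplying, can be sketched with scipy's Hager-style 1-norm estimator (onenormest), applied to A⁻¹ through LU solves. A generic illustration on a hypothetical moderately conditioned test matrix:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, onenormest

rng = np.random.default_rng(8)

# Hypothetical moderately conditioned test matrix.
n = 200
A = np.eye(n) + 0.3 / np.sqrt(n) * rng.standard_normal((n, n))

# ||A||_1 is cheap to compute exactly; ||A^{-1}||_1 is estimated with a
# Hager-style method that only needs solves with A and A^T.
lu, piv = lu_factor(A)
inv_op = LinearOperator(
    (n, n),
    matvec=lambda v: lu_solve((lu, piv), v),
    rmatvec=lambda v: lu_solve((lu, piv), v, trans=1),
)

norm_A = np.abs(A).sum(axis=0).max()        # exact ||A||_1
norm_Ainv = onenormest(inv_op)              # estimated ||A^{-1}||_1
kappa_est = norm_A * norm_Ainv              # estimate of kappa_1(A)

# Reference value (forms A^{-1} explicitly; only for checking).
kappa_true = norm_A * np.abs(np.linalg.inv(A)).sum(axis=0).max()
```

Since the estimator returns a lower bound on ‖A⁻¹‖₁, kappa_est never exceeds the true 1-norm condition number, which is the "estimate, not compute" trade-off the survey discusses.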

1 |
Strong rank-revealing LU factorizations
- Miranian, Gu

1 |
Face recognition by humans: 19 results all computer vision researchers should know about
- Sinha, et al.