### Citations

754 | Matrix Analysis
- Bhatia
- 1997
Citation Context: ...and 3.4 we have ‖PAQ − RRLUk(PAQ)‖ = ‖(L11 0; L21 I_{n−k})(U11 U12; 0 U22) − (L11; L21)(U11 U12)‖ = ‖U22‖ ≤ (k(n − k) + 1)σ_{k+1}. (3.6) The last inequality derives from Eq. 3.3. Lemma 3.3 appears in [3], page 75: Lemma 3.3 ([3]). Let A and B be two matrices and let σ_j(·) denote the jth singular value of a matrix. Then σ_j(AB) ≤ ‖A‖σ_j(B) and σ_j(AB) ≤ ‖B‖σ_j(A). Lemma 3.4 was taken from [23] and it i...
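The singular-value inequality quoted in this context (Lemma 3.3: σ_j(AB) ≤ ‖A‖σ_j(B) and σ_j(AB) ≤ ‖B‖σ_j(A), with ‖·‖ the spectral norm) is easy to check numerically. A minimal sketch, not taken from the cited source:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))

# numpy returns singular values sorted in descending order.
s_AB = np.linalg.svd(A @ B, compute_uv=False)
s_A = np.linalg.svd(A, compute_uv=False)
s_B = np.linalg.svd(B, compute_uv=False)

# Lemma 3.3: sigma_j(AB) <= ||A|| * sigma_j(B) and sigma_j(AB) <= ||B|| * sigma_j(A),
# where ||A|| = sigma_1(A) is the spectral norm.
for j in range(6):
    assert s_AB[j] <= s_A[0] * s_B[j] + 1e-9
    assert s_AB[j] <= s_B[0] * s_A[j] + 1e-9
```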

591 | Image denoising via sparse and redundant representations over learned dictionaries
- Elad, Aharon
Citation Context: ...eresting properties of a matrix. Matrix decompositions are used for solving linear equations and for finding least squares solutions. In engineering, matrix decompositions are used in computer vision [13], machine learning [24], collaborative filtering and Big Data analytics [19]. As the size of the data grows exponentially, analysis of large datasets has gained increasing importance. Such analysis...

544 | Matrix factorization techniques for recommender systems
- Koren, Bell, et al.
Citation Context: ...linear equations and for finding least squares solutions. In engineering, matrix decompositions are used in computer vision [13], machine learning [24], collaborative filtering and Big Data analytics [19]. As the size of the data grows exponentially, analysis of large datasets has gained increasing importance. Such analysis can involve a factorization step of the input data given as a large sample-...

259 | A supernodal approach to sparse partial pivoting
- Demmel, Eisenstat, et al.
- 1995
Citation Context: ...st methods that utilize sparse matrix multiplication (see [33] for example). Moreover, LU decomposition with full pivoting on sparse matrices is effective since it can generate large regions of zeros [9, 10, 29]. In this paper, we show how sparsity can be utilized to accelerate the computation of the randomized LU. We compare its performance to that of a sparse SVD algorithm based on Lanczos bidi...

243 | Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions
- Halko, Martinsson, et al.
Citation Context: ...cts in videos [30], multiscale extensions for data [2] and detecting anomalies in network traffic for finding cyber attacks [8]. There are randomized versions of many different matrix factorizations [17] such as singular value decomposition (SVD), interpolative decomposition (ID) [7], pseudo-skeleton decomposition [16] (whose randomized version is given by the CUR decomposition [12]), compress...

237 | Compressed sensing, Information Theory
- Donoho
- 2006
Citation Context: ...ngular value decomposition (SVD), interpolative decomposition (ID) [7], pseudo-skeleton decomposition [16] (whose randomized version is given by the CUR decomposition [12]), compressed sensing [11] and a randomized version for solving least squares problems [27]. In this paper, we present a randomized LU decomposition algorithm. Given an m × n matrix A, we seek a lower triangular m × k matrix L...

232 | Fast Monte-Carlo algorithms for finding low-rank approximations
- Frieze, Kannan, et al.
- 1998
Citation Context: ...a matrix of i.i.d. Gaussian random variables with zero mean and unit variance [31]; a matrix whose columns are selected randomly from the identity matrix with either uniform or nonuniform probability [17, 20]; a random sparse matrix designed to enable fast multiplication with a sparse input matrix A [1, 12]; random structured matrices that use orthogonal transforms such as discrete Fourier transform, Wa...
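The context above enumerates several choices of random projection matrix. A minimal sketch of what these choices look like in code (shapes, the density parameter rho, and the variable names are assumptions for illustration, not from the cited sources):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 500, 400, 20
A = rng.standard_normal((m, n))  # placeholder input matrix

# (a) random sign matrix (+/-1 entries)
G_sign = rng.choice([-1.0, 1.0], size=(n, k))

# (b) i.i.d. Gaussian entries with zero mean and unit variance
G_gauss = rng.standard_normal((n, k))

# (c) columns sampled uniformly from the identity matrix
G_cols = np.eye(n)[:, rng.choice(n, size=k, replace=False)]

# (d) sparse Gaussian: each Gaussian entry is kept with probability rho,
#     otherwise set to zero (useful when A itself is sparse)
rho = 0.1
G_sparse = G_gauss * (rng.random((n, k)) < rho)

# Every choice produces the same kind of m x k sketch Y = A @ G.
for G in (G_sign, G_gauss, G_cols, G_sparse):
    assert (A @ G).shape == (m, k)
```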

212 | Fast Monte Carlo Algorithms for Matrices II: Computing a Low-Rank Approximation to a Matrix. Yale University
- Drineas, Kannan, et al.
- 2004
Citation Context: ...a matrix of i.i.d. Gaussian random variables with zero mean and unit variance [31]; a matrix whose columns are selected randomly from the identity matrix with either uniform or nonuniform probability [17, 20]; a random sparse matrix designed to enable fast multiplication with a sparse input matrix A [1, 12]; random structured matrices that use orthogonal transforms such as discrete Fourier transform, Wa...

172 | UbiCrawler: a scalable fully distributed Web crawler
- Boldi, Codenotti, et al.
Citation Context: ...ary sparse matrix of size 862,664 × 862,664 with 19,235,140 nonzero elements (ρ = 2.58 × 10⁻⁵) that contains the results of crawling the .eu domain. The eu-2005 matrix was generated and studied in [4]. Each edge in the eu-2005 graph represents a link between two websites. The approximation error of each algorithm applied to the eu-2005 matrix is shown in Fig. 5.8, and the execution time is shown i...

160 | Fast Computation of Low Rank Matrix Approximations
- Achlioptas, McSherry
- 2001
Citation Context: ...QR and ID factorizations [31], CUR decomposition as a randomized version [18] of the pseudo-skeleton decomposition, methods for solving least squares problems [2, 12, 35] and low rank approximations [1, 12]. In general, randomization methods for matrix factorization have two steps. First, a low-dimensional space, which captures most of the “energy” of A, is found using randomization. Then, A is projecte...
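The two-step scheme described in this context (first capture a low-dimensional space holding most of the "energy" of A, then project A onto it and factorize the small result) can be sketched as follows. This is a generic illustration, not the paper's Algorithm 4.1; the test matrix, sketch size, and oversampling of 5 are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 150, 10

# Build an exactly rank-k test matrix with decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((m, k)))
V, _ = np.linalg.qr(rng.standard_normal((n, k)))
A = U @ np.diag(np.logspace(0, -3, k)) @ V.T

# Step 1: find a low-dimensional space capturing the energy of A.
G = rng.standard_normal((n, k + 5))   # Gaussian sketch with slight oversampling
Q, _ = np.linalg.qr(A @ G)            # orthonormal basis for the range of A @ G

# Step 2: project A onto that space; the small matrix Q^T A is cheap to factorize.
B = Q.T @ A                           # (k + 5) x n
A_approx = Q @ B

# For an exactly rank-k matrix the sketch recovers A to machine precision.
assert np.linalg.norm(A - A_approx, 2) < 1e-10
```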

159 | Efficient algorithms for computing a strong rank-revealing QR factorization
- Gu, Eisenstat
- 1996
Citation Context: ...Other rank revealing factorizations can be used to achieve low rank approximations. For example, both QR and LU factorizations have rank revealing versions such as the RRQR decomposition [8], strong RRQR decomposition [23], RRLU decomposition [34] and strong RRLU decomposition [33]. Other matrix factorization methods such as Interpolative Decomposition (ID) [10] and CUR decomposition [18] use columns and...

147 | An unsymmetric-pattern multifrontal method for sparse LU factorization
- Davis, Duff
- 1997
Citation Context: ...st methods that utilize sparse matrix multiplication (see [33] for example). Moreover, LU decomposition with full pivoting on sparse matrices is effective since it can generate large regions of zeros [9, 10, 29]. In this paper, we show how sparsity can be utilized to accelerate the computation of the randomized LU. We compare its performance to that of a sparse SVD algorithm based on Lanczos bidi...

116 | Rank revealing QR factorizations
- Chan
- 1987
Citation Context: ...∑_{i=k+1}^{min(m,n)} σᵢ², respectively. Other rank revealing factorizations can be used to achieve low rank approximations. For example, both QR and LU factorizations have rank revealing versions such as RRQR [5] and RRLU [26], respectively. Rank revealing factorization uses permutation matrices on the columns and rows of A so that the factorized matrices have a strong-rank portion and a rank-defici...

101 | Spectral regularization algorithms for learning large incomplete matrices
- Mazumder, Hastie, et al.
- 2010
Citation Context: ...a matrix. Matrix decompositions are used for solving linear equations and for finding least squares solutions. In engineering, matrix decompositions are used in computer vision [13], machine learning [24], collaborative filtering and Big Data analytics [19]. As the size of the data grows exponentially, analysis of large datasets has gained increasing importance. Such analysis can involve a factoriz...

94 | Numerical linear algebra in the streaming model
- Clarkson, Woodruff
- 2009
Citation Context: ...ed subspace and projected matrix is factorized [24]. Several different selections exist for the random projection matrix, which is used in Step 1. For example, it can be a matrix of random signs (±1) [11, 30]; a matrix of i.i.d. Gaussian random variables with zero mean and unit variance [31]; a matrix whose columns are selected randomly from the identity matrix with either uniform or nonuniform probability...

88 | Smallest singular value of random matrices and geometry of random polytopes
- Litvak, Pajor, et al.
Citation Context: ...Suppose X is distributed as in Eq. 3.8 and E is the expectation. One can easily verify that: 1. EX = 0; 2. EX² = ρσ²; 3. E|X|³ = 4ρσ³/√(2π); 4. X is subgaussian. We review several facts adapted from [22] and [28] about random matrices whose entries are subgaussian. We focus on the case where A is a tall m × n matrix (m > (1 + 1/ln n)n). Tall matrices can be used for low rank LU approximations, where th...

88 | Smallest singular value of a random rectangular matrix
- Rudelson, Vershynin
- 2009
Citation Context: ...X is distributed as in Eq. 3.8 and E is the expectation. One can easily verify that: 1. EX = 0; 2. EX² = ρσ²; 3. E|X|³ = 4ρσ³/√(2π); 4. X is subgaussian. We review several facts adapted from [22] and [28] about random matrices whose entries are subgaussian. We focus on the case where A is a tall m × n matrix (m > (1 + 1/ln n)n). Tall matrices can be used for low rank LU approximations, where the approxi...

82 | Relative-error CUR matrix decompositions
- Drineas, Mahoney, et al.
Citation Context: ...orizations [17] such as singular value decomposition (SVD), interpolative decomposition (ID) [7], pseudo-skeleton decomposition [16] (whose randomized version is given by the CUR decomposition [12]), compressed sensing [11] and a randomized version for solving least squares problems [27]. In this paper, we present a randomized LU decomposition algorithm. Given an m × n matrix A, we seek a lower...

81 | Lanczos bidiagonalization with partial reorthogonalization
- Larsen
- 1998
Citation Context: ...to accelerate the computation of the randomized LU. We compare its performance to that of a sparse SVD algorithm based on Lanczos bidiagonalization [15], implemented in the PROPACK package [20]. Graphics Processing Units (GPUs) are mostly used for computer games, graphics and visualization such as movies and 3D display. GPUs have powerful computation capabilities since they can do fast vecto...

62 | On the compression of low rank matrices
- Cheng, Gimbutas, et al.
Citation Context: ...network traffic for finding cyber attacks [8]. There are randomized versions of many different matrix factorizations [17] such as singular value decomposition (SVD), interpolative decomposition (ID) [7], pseudo-skeleton decomposition [16] (whose randomized version is given by the CUR decomposition [12]), compressed sensing [11] and a randomized version for solving least squares problems [27]...

61 | Condition Numbers of Gaussian Random Matrices
- Chen, Dongarra
- 2005
Citation Context: ...‖LL†A − A‖ ≤ 2‖AGF − A‖ + 2‖F‖‖LU − AG‖. (4.11) Lemma 4.5 appears in [23]. It uses a lower bound for the least singular value of a Gaussian matrix with zero mean and unit variance. This bound can be found in [6]. Lemma 4.5 ([23]). Assume that k, l, m and n are positive integers such that k ≤ l, l ≤ m and l ≤ n. Assume that A is a real m × n matrix, G is n × l whose entries are i.i.d. Gaussian random variables of...

54 | Four degrees of separation
- Backstrom, Boldi, et al.
Citation Context: ...acebook 2011 connection matrix requires factorizing a sparse matrix of size 720,000,000 × 720,000,000 with 69 billion connections. It means that only 1.33 × 10⁻⁵ percent of the matrix is nonzero [1]. The advantage of using sparse matrices is their low memory consumption and the fact that some algorithms are more efficient when applied to sparse matrices. Definition 3.2 (sparse Gaussian matrix). A...

51 | Average-case stability of Gaussian elimination
- Trefethen, Schreiber
- 1990
Citation Context: ...to perform step 3 in Algorithm 4.1 using standard LU decomposition with partial pivoting instead of applying RRLU. The cases where U grows exponentially are extremely rare (section 3.4.5 in [15] and [32]). We now present our main error bound for Algorithm 4.1: Theorem 4.3. Given a matrix A of size m × n. Then, its randomized LU decomposition produced by Algorithm 4.1 with integers k and l (l ≥ k) sat...

51 | Fast sparse matrix multiplication
- Yuster, Zwick
- 2005
Citation Context: ...rows or columns. We prove several error bounds to approximate the ‖LU − PAQ‖₂ error. LU decomposition for sparse matrices can be done using fast methods that utilize sparse matrix multiplication (see [33] for example). Moreover, LU decomposition with full pivoting on sparse matrices is effective since it can generate large regions of zeros [9, 10, 29]. In this paper, we show how sparsity can be utiliz...

43 | Efficient sparse LU factorization with left-right looking strategy on shared memory multiprocessors
- Schenk, Gärtner, et al.
Citation Context: ...st methods that utilize sparse matrix multiplication (see [33] for example). Moreover, LU decomposition with full pivoting on sparse matrices is effective since it can generate large regions of zeros [9, 10, 29]. In this paper, we show how sparsity can be utilized to accelerate the computation of the randomized LU. We compare its performance to that of a sparse SVD algorithm based on Lanczos bidi...

38 | A fast randomized algorithm for overdetermined linear least-squares regression
- Rokhlin, Tygert
Citation Context: ...(ID) [7], pseudo-skeleton decomposition [16] (whose randomized version is given by the CUR decomposition [12]), compressed sensing [11] and a randomized version for solving least squares problems [27]. In this paper, we present a randomized LU decomposition algorithm. Given an m × n matrix A, we seek a lower triangular m × k matrix L and an upper triangular k × n matrix U such that ‖LU − PAQ‖₂ = O...

38 | Blendenpik: Supercharging LAPACK's least-squares solvers
- Avron, Maymounkov, et al.
- 2010
Citation Context: ...traffic for finding cyber attacks [13], to name some. There are randomized versions of many different matrix factorization algorithms [24], compressed sensing methods [16] and least squares problems [2]. In this paper, we develop a randomized version of the LU decomposition. Given an m × n matrix A, we seek a lower triangular m × k matrix L and an upper triangular k × n matrix U such that ‖LU − PAQ‖...

27 | A randomized algorithm for the decomposition of matrices
- Rokhlin, Martinsson, et al.
Citation Context: ...idia GPU GTX TITAN card. 5.1 Error Rate and Computational Time Comparisons. The performance of the randomized LU (Algorithm 4.1) was tested and compared to a randomized SVD and to a randomized ID (see [17, 23]). The tests compare the normalized (relative) error of the low rank approximation obtained by the examined methods and also measure the computational time of each method. If A is the original matrix...

25 | A theory of pseudoskeleton approximations, Linear Algebra and its Applications
- Goreinov, Tyrtyshnikov, et al.
- 1997
Citation Context: ...ttacks [8]. There are randomized versions of many different matrix factorizations [17] such as singular value decomposition (SVD), interpolative decomposition (ID) [7], pseudo-skeleton decomposition [16] (whose randomized version is given by the CUR decomposition [12]), compressed sensing [11] and a randomized version for solving least squares problems [27]. In this paper, we present a randomi...

21 | Numerical inverting of matrices of high order
- Goldstine, von Neumann
- 1951
Citation Context: ...two matrices and let σ_j(·) denote the jth singular value of a matrix. Then σ_j(AB) ≤ ‖A‖σ_j(B) and σ_j(AB) ≤ ‖B‖σ_j(A). Lemma 3.4 was taken from [23] and it is an equivalent formulation of Eq. 8.8 in [14]. Lemma 3.4 ([23]). Suppose that G is a real n × l matrix whose entries are i.i.d. Gaussian random variables with zero mean and unit variance and let m be an integer such that m ≥ l, m ≥ n, γ > 1 and...

19 | Discrete-time signal processing
- Oppenheim, Schafer, et al.
- 1999
Citation Context: ...+ k²n) (4.3) operations. Here, we used C_A (or C_{Aᵀ}) as the complexity of applying A (or Aᵀ) to a vector, since in some cases this can be done efficiently, for example when A is an FFT matrix [25]. 4.2 Bounds for the Randomized LU. In this section, we prove Theorem 4.3 and an additional complementary bound. This is done by finding a basis to a smaller matrix AG, which is achieved in practice by...

17 | On the existence and computation of rank-revealing LU factorizations, Linear Algebra and its Applications
- Pan
- 2000
Citation Context: ...σᵢ², respectively. Other rank revealing factorizations can be used to achieve low rank approximations. For example, both QR and LU factorizations have rank revealing versions such as RRQR [5] and RRLU [26], respectively. Rank revealing factorization uses permutation matrices on the columns and rows of A so that the factorized matrices have a strong-rank portion and a rank-deficient portion. O...

16 | Nvidia CUDA software and GPU parallel computing architecture
- Kirk
- 2007
Citation Context: ...ons and fast basic integer and floating-point arithmetic. For some computations, such as matrix multiplication and Fast Fourier Transform, GPUs can achieve up to 1,000× speedup over a general purpose CPU [18]. New video cards that are based on GPUs can achieve up to several TeraFlops per second, compared to standard CPUs that are limited to approximately 10 GigaFlops per second. Our randomized LU algor...

16 | Improved matrix algorithms via the subsampled randomized Hadamard transform
- Boutsidis, Gittens
- 2013
Citation Context: ...o enable fast multiplication with a sparse input matrix A [1, 12]; random structured matrices that use orthogonal transforms such as discrete Fourier transform, Walsh-Hadamard transform and so on ([2, 7, 35]). In our algorithm, we use Gaussian matrices in Step 1, as well as sparse Gaussian matrices (a special case of sub-Gaussian matrices) when factorizing sparse matrices. 3 Preliminaries. In this section,...

13 | Low Rank Matrix-valued Chernoff Bounds and Approximate Matrix Multiplication
- Magen, Zouzias
- 2011
Citation Context: ...ed subspace and projected matrix is factorized [24]. Several different selections exist for the random projection matrix, which is used in Step 1. For example, it can be a matrix of random signs (±1) [11, 30]; a matrix of i.i.d. Gaussian random variables with zero mean and unit variance [31]; a matrix whose columns are selected randomly from the identity matrix with either uniform or nonuniform probability...

12 | Multiscale data sampling and function extension, Applied and Computational Harmonic Analysis
- Bermanis, Averbuch, et al.
- 2012
Citation Context: ...he rank of G is smaller than the rank of A. Among the uses of fast randomized matrix decomposition algorithms, we find applications for tracking objects in videos [30], multiscale extensions for data [2] and detecting anomalies in network traffic for finding cyber attacks [8]. There are randomized versions of many different matrix factorizations [17] such as singular value decomposition (SVD), inter...

10 | The triangular matrices of Gaussian elimination and related decompositions
- Stewart
- 1995
Citation Context: ...such that LU is a good approximation to AG and that there exists a matrix F such that ‖AGF − A‖ is small. As for the numerical stability of L, it is always stable since it has a small condition number [31]. For the proof of Theorem 4.3, several theorems are needed and introduced. Lemma 4.4 states that a given basis L can form a good approximation to a matrix A by bounding ‖LL†A − A‖. Lemma 4.4. Assume th...

8 | Anomaly Detection and Classification via Diffusion Processes in Hyper-Networks
- David
- 2009
Citation Context: ...ized matrix decomposition algorithms, we find applications for tracking objects in videos [30], multiscale extensions for data [2] and detecting anomalies in network traffic for finding cyber attacks [8]. There are randomized versions of many different matrix factorizations [17] such as singular value decomposition (SVD), interpolative decomposition (ID) [7], pseudo-skeleton decomposition [16] (in w...

7 | Strong rank revealing LU factorizations, Linear Algebra and its Applications
- Miranian, Gu
- 2003
Citation Context: ...oximations. For example, both QR and LU factorizations have rank revealing versions such as the RRQR decomposition [8], strong RRQR decomposition [23], RRLU decomposition [34] and strong RRLU decomposition [33]. Other matrix factorization methods such as Interpolative Decomposition (ID) [10] and CUR decomposition [18] use columns and rows of the original matrix A in the factorization process. Such a proper...

2 | Low rank approximation and regression in input sparsity time
Citation Context: ...on to a desired rank. These include SVD, QR and ID factorizations [31], CUR decomposition as a randomized version [18] of the pseudo-skeleton decomposition, methods for solving least squares problems [2, 12, 35] and low rank approximations [1, 12]. In general, randomization methods for matrix factorization have two steps. First, a low-dimensional space, which captures most of the “energy” of A, is found usin...

1 | Smallest singular value of sparse random matrices
- Litvak, Rivasplata
Citation Context: ...all m × n matrix (m > (1 + 1/ln n)n). Tall matrices can be used for low rank LU approximations, where the approximation rank is much smaller than the size of the matrix. Similar results can be found in [21] for almost square and square matrices. Definition 3.4. For parameters μ ≥ 1, a₁ > 0, a₂ > 0, we define A(μ, a₁, a₂, m, n) to be a set of all m × n (m > n) random matrices A = (ξᵢⱼ) whose entries are i...

1 | Accelerating particle filter using multiscale methods
- Shmueli, Shabat, et al.
Citation Context: ...roximation error, for example, when the rank of G is smaller than the rank of A. Among the uses of fast randomized matrix decomposition algorithms, we find applications for tracking objects in videos [30], multiscale extensions for data [2] and detecting anomalies in network traffic for finding cyber attacks [8]. There are randomized versions of many different matrix factorizations [17] such as singu...