
## A Correctness Result for Online Robust PCA

Brian Lois, Graduate Student Member, IEEE

### Citations

7653 | Matrix Analysis - Horn, Johnson - 1985

678 | The restricted isometry property and its implications for compressed sensing, Comptes Rendus de l'Académie des Sciences - Candès - 2008
Citation Context: ...e, and similarly for (a, b), etc. For a matrix A, the restricted isometry constant (RIC) δs(A) is the smallest real number δs such that (1 − δs)‖x‖2² ≤ ‖Ax‖2² ≤ (1 + δs)‖x‖2² for all s-sparse vectors x [21]. A vector x is s-sparse if it has s or fewer non-zero entries. For Hermitian matrices A and B, the notation A ⪯ B means that B − A is positive semi-definite. For a Hermitian matrix H, H EVD= UΛU′ den...
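The RIC definition quoted in this context can be checked numerically. Computing the exact δs(A) requires examining every size-s support (combinatorially many), so the sketch below only samples random supports and returns the worst deviation observed; it is a Monte Carlo lower bound on the true constant, not the constant itself. The matrix sizes and scaling are illustrative assumptions.

```python
import numpy as np

def is_s_sparse(x, s):
    """A vector is s-sparse if it has s or fewer non-zero entries."""
    return np.count_nonzero(x) <= s

def empirical_ric(A, s, trials=2000, seed=None):
    """Monte Carlo LOWER bound on the restricted isometry constant delta_s(A).

    The true RIC is the smallest delta with
        (1 - delta) ||x||^2 <= ||A x||^2 <= (1 + delta) ||x||^2
    for ALL s-sparse x; here we only sample random supports and
    random coefficients and keep the worst deviation from 1.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    worst = 0.0
    for _ in range(trials):
        support = rng.choice(n, size=s, replace=False)
        x = np.zeros(n)
        x[support] = rng.standard_normal(s)
        ratio = np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2
        worst = max(worst, abs(ratio - 1.0))
    return worst

# i.i.d. Gaussian rows scaled by 1/sqrt(m): such matrices satisfy the
# RIP with high probability when s is small relative to m / log(n).
m, n, s = 80, 200, 5
A = np.random.default_rng(0).standard_normal((m, n)) / np.sqrt(m)
delta_hat = empirical_ric(A, s, seed=1)
assert 0.0 < delta_hat < 1.0
```

For a Gaussian matrix of this shape the sampled deviation stays well below 1, consistent with the RIP holding at sparsity level s.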

560 | Robust principal component analysis - Candès, Li, et al.
Citation Context: ...utliers is as sparse vectors [1]. In seminal papers Candès et al. and Chandrasekaran et al. introduced the Principal Components Pursuit (PCP) algorithm and proved its robustness to sparse outliers [2], [3]. Principal Components Pursuit poses the robust PCA problem as identifying a low rank matrix and a sparse matrix from their sum. The algorithm is to minimize a weighted sum of the nuclear norm of...

249 | User-friendly tail bounds for sums of random matrices (arXiv:1004.4389) - Tropp - 2010
Citation Context: ...6.4 we use: 1) the sin θ theorem of Davis and Kahan [23], 2) the expression for et from Lemma 6.3, 3) Lemma 6.6 that bounds the norm of a block banded matrix, and 4) the matrix Hoeffding bounds from [24]. Remark 6.8. Because this lemma applies for all j = 1, . . . , J, we remove the subscript j for convenience. So ζ+k refers to ζ+j,k, P̂new,k refers to P̂(j),new,k, etc. Also, P∗ refers to P(j−1) a...
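The matrix Hoeffding bound of [24] can be sanity-checked numerically. One form of the bound: for independent zero-mean symmetric random matrices Xi of dimension d with Xi² ⪯ Ai², P(λmax(Σ Xi) ≥ t) ≤ d·exp(−t²/(8σ²)) with σ² = ‖Σ Ai²‖. The sketch below uses the simplest instance, Xi = εi·A with random signs εi and a fixed symmetric A (so the hypothesis holds with Ai = A); all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, delta = 10, 100, 0.05

# Fixed symmetric A; X_i = eps_i * A with eps_i = +/-1 random signs,
# so E[X_i] = 0 and X_i^2 = A^2 (the Hoeffding hypothesis, with equality).
G = rng.standard_normal((d, d))
A = (G + G.T) / 2
sigma2 = n * np.linalg.norm(A @ A, 2)          # sigma^2 = || sum_i A_i^2 ||
t = np.sqrt(8 * sigma2 * np.log(d / delta))    # chosen so the bound equals delta

trials, exceed = 500, 0
for _ in range(trials):
    S = rng.choice([-1.0, 1.0], size=n).sum() * A   # sum_i eps_i * A
    if np.linalg.eigvalsh(S).max() >= t:
        exceed += 1

# The empirical exceedance rate should respect the (loose) bound delta.
assert exceed / trials <= delta
```

In practice the empirical rate is far below delta: matrix concentration bounds of this type are conservative, which is what makes them safe to use inside proofs like Lemma 6.4.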

224 | Rank-sparsity incoherence for matrix decomposition - Chandrasekaran, Sanghavi, et al. - 2011
Citation Context: ...er corollary of Theorem 3.1, we will need s ≤ C log(n) and (5). Up to differences in the constants, (5) is the same requirement found in [20] (which studies the PCP program and is an improvement over [3]), except that [20] does not need specific bounds on s and r. The above assumptions on s and r are stronger than those used by [2] (which also studies the batch approach PCP). There s is allowed to gr...

179 | The convex geometry of linear inverse problems - Chandrasekaran, Recht, et al. - 2012
Citation Context: ...a State University. Email: {blois,namrata}@iastate.edu. This work was partly supported by NSF grant CCF-1117125. arXiv:1409.3959v1 [cs.IT] 13 Sep 2014. ...for batch robust PCA include [6], [7], and [8]. All of these methods require waiting until all of the data has been acquired before performing the optimization. In this work we consider an online or recursive version of the robust PCA pr...

170 | A framework for robust subspace learning - Torre, Black - 2003
Citation Context: ...and sparse components as they arrive, using the previous estimates, rather than re-solving the entire problem at each time t. An application where this type of problem is useful is in video analysis [9]. Imagine a video sequence that has a distinct background and foreground. An example might be a surveillance camera where a person walks across the scene. If the background does not change very much,...

86 | Robust PCA via outlier pursuit - Xu, Caramanis, et al. - 2010
Citation Context: ...w rank matrix and the vector ℓ1 norm of the sparse matrix subject to their sum being equal to the observed data matrix. Stronger results for the PCP program can be found in [4]. Other methods such as [5] model the entire column vector as being either correct or an outlier. Some other works on the performance guarantees... B. Lois is with the Mathematics and ECpE departments, and N. Vaswani is with the E...

75 | The rotation of eigenvectors by a perturbation. III - Davis, Kahan - 1970
Citation Context: ...The exact same argument shows ‖M2‖2 ≤ 4σ+h∗(β), and so by the triangle inequality we have ‖M‖2 ≤ 8σ+h∗(β). C. Proof of Lemma 6.4: To prove Lemma 6.4 we use: 1) the sin θ theorem of Davis and Kahan [23], 2) the expression for et from Lemma 6.3, 3) Lemma 6.6 that bounds the norm of a block banded matrix, and 4) the matrix Hoeffding bounds from [24]. Remark 6.8. Because this lemma applies for all j = ...
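The Davis-Kahan sin θ theorem referenced here can be illustrated numerically. The sketch below uses a common corollary form (the Yu-Wang-Samworth variant: sin θ ≤ 2‖E‖2 / (λ1 − λ2) for the top eigenvector of a symmetric matrix under a symmetric perturbation E); it is a generic illustration, not the specific application inside the paper's Lemma 6.4. The matrix sizes and eigenvalue gap are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Symmetric A with a well-separated top eigenvalue: gap = 5 - 1 = 4.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag([5.0, 1.0] + [0.5] * (d - 2)) @ Q.T

# Small symmetric perturbation E.
G = rng.standard_normal((d, d))
E = 0.05 * (G + G.T) / 2

def top_evec(M):
    """Unit eigenvector for the largest eigenvalue of a symmetric matrix."""
    w, V = np.linalg.eigh(M)
    return V[:, np.argmax(w)]

u, u_hat = top_evec(A), top_evec(A + E)
# Sine of the angle between the top eigenvectors of A and A + E;
# squaring the inner product removes the sign ambiguity of eigenvectors.
sin_theta = np.sqrt(max(0.0, 1.0 - (u @ u_hat) ** 2))
bound = 2 * np.linalg.norm(E, 2) / (5.0 - 1.0)   # 2 ||E|| / (lambda1 - lambda2)
assert sin_theta <= bound
```

The theorem guarantees the assertion holds for any symmetric perturbation; in this well-separated example the actual angle is much smaller than the bound.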

74 | Recovering low-rank and sparse components of matrices from incomplete and noisy observations - Tao, Yuan
Citation Context: ...niversity. Email: {blois,namrata}@iastate.edu. This work was partly supported by NSF grant CCF-1117125. ...for batch robust PCA include [6], [7], and [8]. All of these methods require waiting until all of the data has been acquired before performing the optimization. In this work we consider an online or recursive version of the robust PCA problem whe...

63 | Finite sample approximation results for principal component analysis: a matrix perturbation approach - Nadler
Citation Context: ...ecause in our PCA step, the error between the estimated value of ℓt and its true value is correlated with ℓt. Almost all existing work on finite sample PCA assumes that the two are uncorrelated, e.g. [22]. Our proof is inspired by that of [11], but we need a new approach to analyze the subspace estimate update step (step 3 in Algorithm 1) in order to remove the assumption on intermediate algorithm est...

43 | Robust matrix decomposition with sparse corruptions - Hsu, Kakade, et al. - 2011
Citation Context: ...key assumption needed by the batch methods (that of uniformly distributed random supports or of very frequent support change). A more detailed comparison of our results with [2], [3], [18], [19], and [20] is given in Section V. Other work only provides an algorithm without proving any performance results; for example [13], [14]. Our result uses the overall proof approach of [11] as its starting point...

40 | Dense error correction via ℓ1-minimization - Wright, Ma - 2010
Citation Context: ...utliers in the data set. Recently there has been much work done to develop and analyze algorithms for PCA that are robust with respect to outliers. A common way to model outliers is as sparse vectors [1]. In seminal papers Candès et al. and Chandrasekaran et al. introduced the Principal Components Pursuit (PCP) algorithm and proved its robustness to sparse outliers [2], [3]. Principal Components P...

33 | Low-rank matrix recovery from errors and erasures - Chen, Jalali, et al. - 2013
Citation Context: ...the nuclear norm of the low rank matrix and the vector ℓ1 norm of the sparse matrix subject to their sum being equal to the observed data matrix. Stronger results for the PCP program can be found in [4]. Other methods such as [5] model the entire column vector as being either correct or an outlier. Some other works on the performance guarantees... B. Lois is with the Mathematics and ECpE departments, a...

22 | Dynamic anomalography: Tracking network anomalies via sparsity and low rank - Mardani, Mateos, et al. - 2013
Citation Context: ...assumptions), then separating the background and foreground can be viewed as a robust PCA problem. Sparse plus low rank decomposition can also be used to detect anomalies in network traffic patterns [10]. In all such applications an online solution is desirable. A. Contributions: To the best of our knowledge, this is among the first works that provides a correctness result for an online (recursive)...

19 | Sharp recovery bounds for convex deconvolution, with applications - McCoy, Tropp - 2012
Citation Context: ...t Iowa State University. Email: {blois,namrata}@iastate.edu. This work was partly supported by NSF grant CCF-1117125. ...for batch robust PCA include [6], [7], and [8]. All of these methods require waiting until all of the data has been acquired before performing the optimization. In this work we consider an online or recursive version of the robust P...

19 | Online robust subspace tracking from partial information (arXiv:1109.3827) - He, Balzano, et al. - 2011
Citation Context: ...the ReProCS algorithm introduced in [11]. As shown in [12], with practical heuristics used to set its parameters, ReProCS has significantly improved recovery performance compared to other recursive ([13], [14], [10]) and even batch methods ([2], [9], [14]) for many simulated and real video datasets. Online algorithms are needed for real-time applications such as video surveillance or for other stream...

18 | Recursive robust PCA or recursive sparse recovery in large but structured noise - Qiu, Vaswani, et al. - 2013
Citation Context: ...knowledge, this is among the first works that provides a correctness result for an online (recursive) algorithm for sparse plus low-rank matrix recovery. We study the ReProCS algorithm introduced in [11]. As shown in [12], with practical heuristics used to set its parameters, ReProCS has significantly improved recovery performance compared to other recursive ([13], [14], [10]) and even batch methods...

18 | Online robust PCA via stochastic optimization - Feng, Xu, et al. - 2013
Citation Context: ...apers [15], [16]; however, none of these results is a correctness result. All require an assumption that depends on intermediate algorithm estimates. Recent work of Feng et al. from NIPS 2013 [17], [18] provides partial results for online robust PCA. One of these papers, [17], does not model the outlier as a sparse vector; [18] does, but it again contains a partial result. Moreover the theorems in b...

14 | Robust PCA as bilinear decomposition with outlier-sparsity regularization - Mateos, Giannakis - 2012
Citation Context: ...eProCS algorithm introduced in [11]. As shown in [12], with practical heuristics used to set its parameters, ReProCS has significantly improved recovery performance compared to other recursive ([13], [14], [10]) and even batch methods ([2], [9], [14]) for many simulated and real video datasets. Online algorithms are needed for real-time applications such as video surveillance or for other streaming vi...

10 | On convergence properties of pocket algorithm - Muselli - 1997
Citation Context: ...uring an interval Ju. Then the assumed bound s ≤ n/(2α) implies that the support changes fewer than n/(2s) times during an interval Ju. So 1) occurs with probability 1. For 2) we have by the bound in [26]: P(the object moves at least once every α/200 instants in the interval Ju) = P(the bit sequence θ(u−1)α . . . θuα−1 does not contain a sequence of α/200 consecutive zeros) ≥ (1 − (1 − q)^(α/200))^(α−...

9 | An online algorithm for separating sparse and low-dimensional signal sequences from their sum - Guo, Qiu, et al. - 2014
Citation Context: ...is done (see Algorithm 1). B. The ReProCS Algorithm: The ReProCS algorithm presented here was introduced in [11]. A more practical version including heuristics for setting the parameters was given in [12]. The basic idea of ReProCS is as follows. Given an accurate estimate of the subspace where the ℓt's lie, projecting the measurement mt = xt + ℓt onto the orthogonal complement of the estimated subspa...
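The projection idea described in this context can be sketched in a few lines: projecting mt = xt + ℓt onto the orthogonal complement of the (estimated) subspace nulls out most of the low-dimensional part ℓt, leaving an ordinary sparse recovery problem for xt. The sketch below is only this one projection step, not the full ReProCS algorithm (no sparse recovery, no subspace update), and for clarity it assumes the subspace is known exactly, whereas ReProCS works with an estimate P̂ and a small residual of ℓt remains.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, s = 100, 5, 3

# Orthonormal basis P (n x r) for the subspace where the l_t's lie.
P, _ = np.linalg.qr(rng.standard_normal((n, r)))
l_t = 10.0 * (P @ rng.standard_normal(r))     # dense low-dimensional signal
x_t = np.zeros(n)                             # sparse outlier vector
x_t[rng.choice(n, s, replace=False)] = 5.0 * rng.standard_normal(s)
m_t = x_t + l_t                               # observed measurement

# Project onto the orthogonal complement of span(P).  Because P here is
# the TRUE subspace, (I - P P') l_t = 0 exactly; with an estimate P-hat
# a small residual of l_t would survive the projection.
y_t = m_t - P @ (P.T @ m_t)                   # y_t = (I - P P') m_t

# What remains is a sparse recovery problem  y_t = (I - P P') x_t,
# which ReProCS solves by l1 minimization before re-estimating l_t.
assert np.linalg.norm(y_t - (x_t - P @ (P.T @ x_t))) < 1e-8
```

The design point this illustrates: the projection converts the "large but structured" low-rank interference into (approximately) zero, which is why an accurate running subspace estimate is the crux of the algorithm and of its correctness analysis.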

6 | Online PCA for contaminated data - Feng, Xu, et al. - 2013
Citation Context: ...w up papers [15], [16]; however, none of these results is a correctness result. All require an assumption that depends on intermediate algorithm estimates. Recent work of Feng et al. from NIPS 2013 [17], [18] provides partial results for online robust PCA. One of these papers, [17], does not model the outlier as a sparse vector; [18] does, but it again contains a partial result. Moreover the theorem...

5 | Robust PCA with partial subspace knowledge - Zhan, Vaswani - 2014
Citation Context: ...y relax a key assumption needed by the batch methods (that of uniformly distributed random supports or of very frequent support change). A more detailed comparison of our results with [2], [3], [18], [19], and [20] is given in Section V. Other work only provides an algorithm without proving any performance results; for example [13], [14]. Our result uses the overall proof approach of [11] as its start...

4 | Performance guarantees for undersampled recursive sparse recovery in large but structured noise - Lois, Vaswani, et al. - 2013
Citation Context: ...the sparse and low-rank matrix columns can be recovered with bounded and small error. Partial results have been provided for ReProCS in the recent work of Qiu et al. [11] and follow up papers [15], [16]; however, none of these results is a correctness result. All require an assumption that depends on intermediate algorithm estimates. Recent work of Feng et al. from NIPS 2013 [17], [18] provides pa...

2 | Performance guarantees for ReProCS: correlated low-rank matrix entries case - Zhan, Vaswani - 2014
Citation Context: ...Also the sparse and low-rank matrix columns can be recovered with bounded and small error. Partial results have been provided for ReProCS in the recent work of Qiu et al. [11] and follow up papers [15], [16]; however, none of these results is a correctness result. All require an assumption that depends on intermediate algorithm estimates. Recent work of Feng et al. from NIPS 2013 [17], [18] provi...