"Robust PCA with partial subspace knowledge," in IEEE Intl. Symp. on Information Theory (ISIT), 2014.
"... AbstractIn recent work, robust Principal Components Analysis (PCA) has been posed as a problem of recovering a lowrank matrix L and a sparse matrix S from their sum, M := L + S and a provably exact convex optimization solution called PCP has been proposed. This work studies the following problem. ..."
Abstract

Cited by 5 (2 self)
In recent work, robust Principal Components Analysis (PCA) has been posed as a problem of recovering a low-rank matrix L and a sparse matrix S from their sum, M := L + S, and a provably exact convex optimization solution called PCP has been proposed. This work studies the following problem. Suppose that we have partial knowledge about the column space of the low-rank matrix L. Can we use this information to improve the PCP solution, i.e., allow recovery under weaker assumptions? We propose here a simple but useful modification of the PCP idea, called modified-PCP, that allows us to use this knowledge. We derive its correctness result, which shows that, when the available subspace knowledge is accurate, modified-PCP indeed requires significantly weaker incoherence assumptions than PCP. Extensive simulations are also used to illustrate this. Comparisons with PCP and other existing work are shown for a stylized real application as well. Finally, we explain how this problem naturally occurs in many applications involving time series data, i.e., in what is called the online or recursive robust PCA problem. A corollary for this case is also given.
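For reference, the basic PCP decomposition M = L + S that modified-PCP builds on can be sketched with a standard inexact augmented Lagrange multiplier (ALM) scheme. This is plain PCP, not the authors' modified-PCP (which additionally exploits partial column-space knowledge), and the parameter defaults below are common choices from the robust-PCA literature, not values from this paper:

```python
import numpy as np

def shrink(X, tau):
    # soft-thresholding: proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    # singular-value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def robust_pca(M, lam=None, mu=None, iters=500, tol=1e-7):
    # inexact-ALM sketch for min ||L||_* + lam * ||S||_1  s.t.  M = L + S
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else (m * n) / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # dual variable for the constraint M = L + S
    for _ in range(iters):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

On a synthetic low-rank-plus-sparse matrix this typically recovers L to within a few percent relative error; modified-PCP's point is that prior subspace knowledge lets recovery succeed under weaker incoherence conditions than this baseline needs.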
Background Subtraction via Generalized Fused Lasso Foreground Modeling
"... Background Subtraction (BS) is one of the key steps in video analysis. Many background models have been proposed and achieved promising performance on public data sets. However, due to challenges such as illumination change, dynamic background etc. the resulted foreground segmentation often consists ..."
Abstract
Background Subtraction (BS) is one of the key steps in video analysis. Many background models have been proposed and have achieved promising performance on public data sets. However, due to challenges such as illumination change and dynamic background, the resulting foreground segmentation often contains holes as well as background noise. In this regard, we consider generalized fused lasso regularization to recover intact structured foregrounds. Together with certain assumptions about the background, such as the low-rank assumption or the sparse-composition assumption (depending on whether pure background frames are provided), we formulate BS as a matrix decomposition problem using regularization terms for both the foreground and background matrices. Moreover, under the proposed formulation, the two generally distinct background assumptions can be handled in a unified manner. The optimization is carried out by applying the augmented Lagrange multiplier (ALM) method in such a way that a fast parametric-flow algorithm is used for updating the foreground matrix. Experimental results on several popular BS data sets demonstrate the advantage of the proposed model compared to the state of the art.
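To see why a fused-lasso penalty favors intact foreground regions rather than scattered pixels, note that it combines an l1 sparsity term with a total-variation term that penalizes differences between neighboring pixels. A minimal 1-D sketch (the equal weights lam1 = lam2 = 1 are an illustrative choice, not values from the paper):

```python
import numpy as np

def fused_lasso_penalty(x, lam1=1.0, lam2=1.0):
    # l1 sparsity term + total-variation term over neighboring entries
    return lam1 * np.abs(x).sum() + lam2 * np.abs(np.diff(x)).sum()

# Two candidate foreground masks with the same number of active pixels:
contiguous = np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=float)
scattered  = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float)

# Both have l1 norm 4, but the contiguous mask has only 2 jumps while
# the scattered one has 7, so the fused-lasso penalty prefers it:
# fused_lasso_penalty(contiguous) -> 6.0
# fused_lasso_penalty(scattered)  -> 11.0
```

The same preference for piecewise-constant supports, generalized to 2-D pixel grids, is what drives the hole-free foreground masks reported in the paper.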
Spectral Clustering with a Convex Regularizer on Millions of Images
"... Abstract. This paper focuses on efficient algorithms for single and multiview spectral clustering with a convex regularization term for very large scale image datasets. In computer vision applications, multiple views denote distinct imagederived feature representations that inform the clustering. ..."
Abstract
This paper focuses on efficient algorithms for single- and multi-view spectral clustering with a convex regularization term for very large scale image datasets. In computer vision applications, multiple views denote distinct image-derived feature representations that inform the clustering. Separately, the regularization encodes high-level advice such as tags or user interaction in identifying similar objects across examples. Depending on the specific task, schemes to exploit such information may lead to a smooth or non-smooth regularization function. We present stochastic gradient descent methods for optimizing spectral clustering objectives with such convex regularizers for datasets with up to a hundred million examples. We prove that under mild conditions the local convergence rate is O(1/√T), where T is the number of iterations; further, our analysis shows that the convergence improves linearly by increasing the number of threads. We give extensive experimental results on a range of vision datasets demonstrating the algorithm's empirical behavior.
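For background, a plain single-view spectral bipartition (eigen-decomposition of the normalized graph Laplacian) looks like the following; the paper's contribution is to scale this kind of objective with SGD and to add a convex regularizer, neither of which is shown here. The function name and the Gaussian-affinity bandwidth are illustrative choices:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Split rows of X into two clusters via the normalized Laplacian."""
    # Gaussian affinity matrix W (dense; fine for a small sketch)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    # symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    dinv = 1.0 / np.sqrt(d)
    L = np.eye(len(X)) - dinv[:, None] * W * dinv[None, :]
    # eigenvectors in ascending eigenvalue order; the second one
    # (Fiedler-style vector) encodes the two-way split
    _, vecs = np.linalg.eigh(L)
    f = vecs[:, 1]
    return (f > np.median(f)).astype(int)
```

On two well-separated point clouds this recovers the groups exactly; the scaling challenge the paper addresses is that the dense affinity and eigen-decomposition above are infeasible at a hundred million examples.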