Online low-rank subspace clustering by basis dictionary pursuit. arXiv preprint arXiv:1503.08356 (2015)

by J Shen, P Li, H Xu

Results 1 - 1 of 1

Learning Structured Low-Rank Representation via Matrix Factorization

by Jie Shen, Ping Li
"... Abstract A vast body of recent works in the literature have shown that exploring structures beyond data lowrankness can boost the performance of subspace clustering methods such as Low-Rank Representation (LRR). It has also been well recognized that the matrix factorization framework might offer mo ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
Abstract: A vast body of recent work in the literature has shown that exploring structures beyond data low-rankness can boost the performance of subspace clustering methods such as Low-Rank Representation (LRR). It has also been well recognized that the matrix factorization framework may offer more flexibility in pursuing underlying structures of the data. In this paper, we propose to learn structured LRR by factorizing the nuclear-norm-regularized matrix, which leads to our proposed non-convex formulation NLRR. Interestingly, this formulation of NLRR provides a general framework for unifying a variety of popular algorithms, including LRR, dictionary learning, robust principal component analysis, sparse subspace clustering, etc. Several variants of NLRR are also proposed, for example, to promote sparsity while preserving low-rankness. We design a practical algorithm for NLRR and its variants, and establish theoretical guarantees for the stability of the solution and the convergence of the algorithm. Perhaps surprisingly, the computational and memory cost of NLRR can be reduced by roughly one order of magnitude compared to the cost of LRR. Experiments on extensive simulations and real datasets confirm the robustness and efficiency of NLRR and its variants.
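The factorization alluded to above relies on the well-known variational characterization of the nuclear norm, stated here only for context: a nuclear-norm penalty on the clean data matrix $C$ can be traded for Frobenius-norm penalties on its factors,

$$\|C\|_* \;=\; \min_{U,V:\; C = UV^\top} \frac{1}{2}\left(\|U\|_F^2 + \|V\|_F^2\right),$$

provided the inner dimension of the factorization is at least the rank of the nuclear-norm minimizer. This is what allows the non-convex, factorized formulation to attain the same minimal objective value as the convex problem, as noted in the citation context below.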

Citation Context

... can attain the same minimal objective value. In the sequel, we pose our main observation, which establishes the connection between LRR, RPCA [7], DL [28] and SSC [11], hence motivating new variants of NLRR.

Crucial Observation 1. Putting $D = AU \in \mathbb{R}^{p \times d}$ gives

$$C = DV^\top. \qquad (2.4)$$

Since $C$ is the clean data, $D$ can be regarded as a "basis dictionary" of the multiple subspaces and $V$ holds the associated coefficients, with $v^{(j)}$ being the coefficients for $c_j$. Also, by the above equation, we know that $d$ should be chosen as large as the rank of $C$.

Remark. A similar idea was also shown in [35], but that work focused on an online algorithm for LRR, while this work is devoted to structured LRR. Another difference is that the theoretical analysis of [35] was carried out to establish the convergence of online LRR, whereas that of our work justifies the stability of NLRR and its variants.

Plugging (2.4) back into (2.3), we have

$$\min_{D,U,V,E} \; \frac{\beta}{2}\left\| Z - DV^\top - E \right\|_F^2 + \frac{1}{2}\|U\|_F^2 + \frac{1}{2}\|V\|_F^2 + \lambda \|E\|_1, \quad \text{s.t. } D = AU. \qquad (2.5)$$

Let $\mathcal{D}_{\mathrm{NLRR}} = \{ D \mid D = AU,\ \|U\|_F \le u \}$. By carefully choosing $u$, we obtain a problem equivalent to the above:

$$\min_{D,U,V,E} \; \frac{\beta}{2}\left\| Z - DV^\top - E \right\|_F^2 + \frac{1}{2}\|V\|_F^2 + \lambda \|E\|_1, \quad \text{s.t. } D \in \mathcal{D}_{\mathrm{NLRR}}. \qquad (2.\ldots$$
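To make the factorized formulation concrete, the following minimal NumPy sketch evaluates the objective of (2.5) with $D = AU$ and applies the standard soft-thresholding ($\ell_1$-proximal) step for the error term $E$. This is an assumed illustration only, not the authors' implementation; the function names, toy dimensions, and the choice $A = Z$ (the LRR-style self-expressive dictionary) are hypothetical.

```python
# Minimal sketch (assumed illustration, not the authors' implementation)
# of the factorized NLRR objective in Eq. (2.5):
#   min_{U,V,E}  beta/2 ||Z - A U V^T - E||_F^2
#                + 1/2 ||U||_F^2 + 1/2 ||V||_F^2 + lambda ||E||_1
import numpy as np


def nlrr_objective(Z, A, U, V, E, beta, lam):
    """Evaluate Eq. (2.5) with the basis dictionary D = A @ U."""
    residual = Z - A @ U @ V.T - E
    return (0.5 * beta * np.linalg.norm(residual, "fro") ** 2
            + 0.5 * np.linalg.norm(U, "fro") ** 2
            + 0.5 * np.linalg.norm(V, "fro") ** 2
            + lam * np.abs(E).sum())


def soft_threshold(X, tau):
    """Element-wise shrinkage: the proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)


# Toy usage with hypothetical sizes: p features, n samples, rank bound d.
rng = np.random.default_rng(0)
p, n, d = 30, 50, 5
Z = rng.standard_normal((p, n))   # observed (possibly corrupted) data
A = Z                             # LRR-style choice: the data as dictionary
U = 0.1 * rng.standard_normal((n, d))
V = 0.1 * rng.standard_normal((n, d))
E = np.zeros((p, n))
beta, lam = 1.0, 0.1

# Exact minimization over E for fixed U, V (closed-form prox step);
# in a full algorithm U and V would be updated in turn, e.g. by least squares.
E = soft_threshold(Z - A @ U @ V.T, lam / beta)
print(nlrr_objective(Z, A, U, V, E, beta, lam))
```

The dictionary choice A = Z mirrors the self-expressive setup common in LRR-type methods; any other dictionary with matching row dimension would work in this sketch.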
