Fast and robust recursive algorithms for separable nonnegative matrix factorization. arXiv preprint arXiv:1208.1237 (2012)

by N. Gillis, S. A. Vavasis
Results 1 - 10 of 30

Hyperspectral Remote Sensing Data Analysis and Future Challenges

by José M. Bioucas-Dias, Antonio Plaza, Gustavo Camps-Valls, Paul Scheunders, Nasser M. Nasrabadi, Jocelyn Chanussot
"... Abstract—Hyperspectral remote sen-sing ..."
Abstract - Cited by 27 (5 self) - Add to MetaCart
Abstract—Hyperspectral remote sen-sing
(Show Context)

Citation Context

...purity index (PPI) [43], vertex component analysis (VCA) [44], simplex growing algorithm (SGA) [45], successive volume maximization (SVMAX) [46], and the recursive algorithm for separable NMF (RSSNMF) [47]; representative algorithms of class b) are N-FINDR [48], iterative error analysis (IEA) [49], sequential maximum angle convex cone (SMACC), and alternating volume maximization (AVMAX) [46]. C. Non-p...

Fast conical hull algorithms for near-separable non-negative matrix factorization

by Abhishek Kumar, Vikas Sindhwani, Prabhanjan Kambadur - In International Conference on Machine Learning (ICML), 2013
"... The separability assumption (Donoho & Stodden, 2003; Arora et al., 2012a) turns non-negative matrix factorization (NMF) into a tractable problem. Recently, a new class of provably-correct NMF algorithms have emerged under this assumption. In this paper, we reformulate the separable NMF problem a ..."
Abstract - Cited by 21 (1 self) - Add to MetaCart
The separability assumption (Donoho & Stodden, 2003; Arora et al., 2012a) turns non-negative matrix factorization (NMF) into a tractable problem. Recently, a new class of provably-correct NMF algorithms has emerged under this assumption. In this paper, we reformulate the separable NMF problem as that of finding the extreme rays of the conical hull of a finite set of vectors. From this geometric perspective, we derive new separable NMF algorithms that are highly scalable and empirically noise robust, and have several other favorable properties in relation to existing methods. A parallel implementation of our algorithm demonstrates high scalability on shared- and distributed-memory machines.
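
As a rough illustration of the geometric reformulation described in this abstract, the sketch below tests each column of a nonnegative matrix for being an extreme ray of the conical hull of its columns, via a nonnegative least-squares fit against the remaining columns. This brute-force check only illustrates the criterion; it is not the scalable algorithm of the paper, and the function name find_extreme_rays is illustrative.

    # Brute-force illustration of the "extreme rays of a conical hull" view of
    # separable NMF: a column is an extreme ray iff it cannot be written as a
    # nonnegative combination of the remaining columns. NOT the paper's algorithm.
    import numpy as np
    from scipy.optimize import nnls

    def find_extreme_rays(X, tol=1e-6):
        """Return indices of columns of X that are extreme rays of cone(X)."""
        m, n = X.shape
        extreme = []
        for j in range(n):
            others = np.delete(X, j, axis=1)
            coeffs, residual = nnls(others, X[:, j])   # best nonnegative fit
            if residual > tol * np.linalg.norm(X[:, j]):
                extreme.append(j)                      # cannot be reproduced: extreme ray
        return extreme

    # Separable example: 3 "anchor" columns plus nonnegative mixtures of them.
    W = np.random.rand(10, 3)
    H = np.random.rand(3, 5)
    X = np.hstack([W, W @ H])
    print(find_extreme_rays(X))   # expected: [0, 1, 2] (the anchor columns)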

Citation Context

...ptimality guarantees beyond convergence to a stationary point of the objective function for approximate NMF. Very recently, in a series of elegant papers (Arora et al., 2012a;b; Bittorf et al., 2012; Gillis & Vavasis, 2012; Esser et al., 2012; Elhamifar et al., 2012), promising alternative approaches have been developed based on certain separability assumption on the data which enables the NMF problem to be solved...

A signal processing perspective on hyperspectral unmixing: Insights from remote sensing

by W.-K. Ma, J. M. Bioucas-Dias, T.-H. Chan, N. Gillis, P. Gader, A. Plaza, A. Ambikapathi, C.-Y. Chi - IEEE Signal Processing Magazine , 2014
"... Blind hyperspectral unmixing (HU), also known as unsuper-vised HU, is one of the most prominent research topics in sig-nal processing for hyperspectral remote sensing [1, 2]. Blind HU aims at identifying materials present in a captured scene, ..."
Abstract - Cited by 14 (7 self) - Add to MetaCart
Blind hyperspectral unmixing (HU), also known as unsupervised HU, is one of the most prominent research topics in signal processing for hyperspectral remote sensing [1, 2]. Blind HU aims at identifying materials present in a captured scene, ...

Citation Context

...is based on the noiseless argument. An interesting question is therefore on sensitivity against noise. A provable performance bound characterizing noise sensitivity has been proposed very recently in [23], and is briefly described here. Let us denote σ = σ_min(A), which is positive since {a_1, ..., a_N} is linearly independent, and K = max_{1≤i≤N} ||a_i||_2. Let us also denote the noise level ε = max_{1≤n≤L} ||...
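
The small numpy sketch below only computes the three quantities named in this snippet (σ, K and the noise level ε) for a toy near-separable model; the performance bound of [23] itself is not reproduced here, and the variable names are illustrative.

    # Quantities from the snippet above, computed for a toy model:
    # sigma = smallest singular value of the endmember matrix A,
    # K     = largest column 2-norm of A,
    # eps   = largest noise-vector 2-norm (the noise level).
    # Illustration only; the bound of [23] is not reproduced.
    import numpy as np

    A = np.random.rand(50, 4)                   # endmember matrix, columns a_1..a_N
    noise = 1e-3 * np.random.randn(50, 200)     # one noise vector per pixel

    sigma = np.linalg.svd(A, compute_uv=False).min()   # sigma_min(A)
    K = np.linalg.norm(A, axis=0).max()                # max_i ||a_i||_2
    eps = np.linalg.norm(noise, axis=0).max()          # noise level epsilon
    print(sigma, K, eps)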

Sparse and unique nonnegative matrix factorization through data preprocessing

by Nicolas Gillis, Inderjit Dhillon - Journal of Machine Learning Research
"... Nonnegative matrix factorization (NMF) has become a very popular technique in machine learning because it automatically extracts meaningful features through a sparse and part-based representation. However, NMF has the drawback of being highly ill-posed, that is, there typically exist many different ..."
Abstract - Cited by 14 (6 self) - Add to MetaCart
Nonnegative matrix factorization (NMF) has become a very popular technique in machine learning because it automatically extracts meaningful features through a sparse and part-based representation. However, NMF has the drawback of being highly ill-posed, that is, there typically exist many different but equivalent factorizations. In this paper, we introduce a completely new way of obtaining more well-posed NMF problems whose solutions are sparser. Our technique is based on the preprocessing of the nonnegative input data matrix, and relies on the theory of M-matrices and the geometric interpretation of NMF. This approach provably leads to optimal and sparse solutions under the separability assumption of Donoho and Stodden (2003), and, for rank-three matrices, makes the number of exact factorizations finite. We illustrate the effectiveness of our technique on several image data sets.

Robust near-separable nonnegative matrix factorization using linear optimization

by Nicolas Gillis, Robert Luce - Journal of Machine Learning Research , 2014
"... ar ..."
Abstract - Cited by 14 (4 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...at each step, it selects the column with maximum ℓ2 norm, and then projects all the columns of M̃ on the orthogonal complement of the extracted column. This algorithm was proved to be robust to noise [13]. (Note that there exist variants where, at each step, the column is selected according to other criteria, e.g., any ℓp norm with 1 < p < +∞. This particular version of the algorithm using ℓ2 norm act...
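
The step described in this snippet (pick the column of largest ℓ2 norm, then project every column onto the orthogonal complement of the picked one) can be sketched in a few lines of numpy. This is an illustration only, not the implementation analyzed in [13], and the function name is illustrative.

    # Minimal sketch of the successive-projection idea described above:
    # repeatedly pick the column with largest 2-norm, record its index, and
    # project every column onto the orthogonal complement of the picked column.
    # No noise handling or stopping heuristics.
    import numpy as np

    def successive_projection(M, r):
        """Return r column indices selected by successive projection."""
        R = M.astype(float).copy()
        selected = []
        for _ in range(r):
            j = np.argmax(np.linalg.norm(R, axis=0))   # column with maximum l2 norm
            selected.append(j)
            u = R[:, j] / np.linalg.norm(R[:, j])      # unit vector of the picked column
            R = R - np.outer(u, u @ R)                 # project onto its orthogonal complement
        return selected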

Robustness analysis of Hottopixx, a linear programming model for factoring nonnegative matrices

by Nicolas Gillis - SIAM Journal on Matrix Analysis and Applications , 2013
"... ar ..."
Abstract - Cited by 11 (6 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...at least one word used only by that topic; see the discussions in [1, 2]. The separability assumption is also widely used in hyperspectral imaging and is referred to as the pure-pixel assumption; see [7] and the references therein. In practice, the input separable matrix M is perturbed with some noise and it is therefore desirable to design robust algorithms; see [1, 2, 3, 4, 5, 7, 8]. In fact, in the...

The why and how of nonnegative matrix factorization

by Nicolas Gillis - Regularization, Optimization, Kernels, and Support Vector Machines, Chapman & Hall/CRC , 2014
"... ..."
Abstract - Cited by 7 (1 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...ed column; see Algorithm SPA. SPA is extremely fast as it can be implemented in 2pnr + O(pr^2) operations, using the formula ||(I − uu^T)v||_2^2 = ||v||_2^2 − (u^T v)^2 for any u, v ∈ R^m with ||u||_2 = 1 [55]. Moreover, if r is unknown, it can be estimated using the norm of the residual R. Algorithm SPA (Successive Projection Algorithm [2]). Input: near-separable matrix X̃ = W[I_r, H′]Π + N, where W is full rank...
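
The identity quoted above is what lets SPA update squared column norms cheaply instead of explicitly projecting the matrix; the tiny numpy check below verifies it numerically (an illustration only, not part of the cited algorithm's code).

    # Numerical check of the identity used above:
    # ||(I - u u^T) v||_2^2 = ||v||_2^2 - (u^T v)^2  whenever ||u||_2 = 1.
    # This allows the squared norm of each projected column to be updated in
    # O(p) time per column without forming the projected matrix.
    import numpy as np

    p = 20
    u = np.random.randn(p)
    u /= np.linalg.norm(u)                     # ensure ||u||_2 = 1
    v = np.random.randn(p)

    lhs = np.linalg.norm(v - u * (u @ v))**2   # ||(I - u u^T) v||^2
    rhs = np.linalg.norm(v)**2 - (u @ v)**2
    print(np.isclose(lhs, rhs))                # True (up to rounding)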

Greedy algorithms for pure pixels identification in hyperspectral unmixing: A multiple-measurement vector viewpoint

by Xiao Fu, Wing-Kin Ma, Tsung-Han Chan, José M. Bioucas-Dias, Marian-Daniel Iordache - in Proc. EUSIPCO’13
"... This paper studies a multiple-measurement vector (MMV)-based sparse regression approach to blind hyperspectral un-mixing. In general, sparse regression requires a dictionary. The considered approach uses the measured hyperspectral data as the dictionary, thereby intending to represent the whole meas ..."
Abstract - Cited by 6 (3 self) - Add to MetaCart
This paper studies a multiple-measurement vector (MMV)-based sparse regression approach to blind hyperspectral unmixing. In general, sparse regression requires a dictionary. The considered approach uses the measured hyperspectral data as the dictionary, thereby intending to represent the whole measured data using the fewest number of measured hyperspectral vectors. We tackle this self-dictionary MMV (SD-MMV) approach using greedy pursuit. It is shown that the resulting greedy algorithms are identical or very similar to some representative pure pixels identification algorithms, such as vertex component analysis. Hence, our study provides a new dimension on understanding and interpreting pure pixels identification methods. We also prove that in the noiseless case, the greedy SD-MMV algorithms guarantee perfect identification of pure pixels when the pure pixel assumption holds.

Citation Context

...thm (SPA) [4], automatic target generation process (ATGP) [5], vertex component analysis (VCA) [6], and more recently, successive volume maximization (SVMAX) [7] and the recursive algorithm family in =-=[8]-=-; see [1] for a review. In this paper, we are interested in a very recent development introduced in [9, 10], where a compressive sensing formulation is used to tackle pure pixels identification. The i...

Hyperspectral image unmixing via bilinear generalized approximate message passing

by Jeremy Vila, Philip Schniter, Joseph Meola - Proc. SPIE , 2013
"... In hyperspectral unmixing, the objective is to decompose an electromagnetic spectral dataset measured over M spectral bands and T pixels, into N constituent material spectra (or “endmembers”) with corresponding spatial abundances. In this paper, we propose a novel approach to hyperspectral unmixing ..."
Abstract - Cited by 5 (3 self) - Add to MetaCart
In hyperspectral unmixing, the objective is to decompose an electromagnetic spectral dataset measured over M spectral bands and T pixels into N constituent material spectra (or “endmembers”) with corresponding spatial abundances. In this paper, we propose a novel approach to hyperspectral unmixing (i.e., joint estimation of endmembers and abundances) based on loopy belief propagation. In particular, we employ the bilinear generalized approximate message passing algorithm (BiG-AMP), a recently proposed belief-propagation-based approach to matrix factorization, in a “turbo” framework that enables the exploitation of spectral coherence in the endmembers, as well as spatial coherence in the abundances. In conjunction, we propose an expectation-maximization (EM) technique that can be used to automatically tune the prior statistics assumed by turbo BiG-AMP. Numerical experiments on synthetic and real-world data confirm the state-of-the-art performance of our approach.

Hierarchical Clustering of Hyperspectral Images Using Rank-Two Nonnegative Matrix Factorization

by Nicolas Gillis, Da Kuang, Haesun Park - IEEE Transactions on Geoscience and Remote Sensing , 2015
"... In this paper, we design a hierarchical clustering algorithm for high-resolution hyperspectral images. At the core of the algorithm, a new rank-two nonnegative matrix factorizations (NMF) algorithm is used to split the clusters, which is motivated by convex geometry concepts. The method starts with ..."
Abstract - Cited by 3 (2 self) - Add to MetaCart
In this paper, we design a hierarchical clustering algorithm for high-resolution hyperspectral images. At the core of the algorithm, a new rank-two nonnegative matrix factorization (NMF) algorithm is used to split the clusters, which is motivated by convex geometry concepts. The method starts with a single cluster containing all pixels, and, at each step, (i) selects a cluster in such a way that the error at the next step is minimized, and (ii) splits the selected cluster into two disjoint clusters using rank-two NMF in such a way that the clusters are well balanced and stable. The proposed method can also be used as an endmember extraction algorithm in the presence of pure pixels. The effectiveness of this approach is illustrated on several synthetic and real-world hyperspectral images, and shown to outperform standard clustering techniques such as k-means, spherical k-means and standard NMF.
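
As a rough sketch of the splitting loop outlined in this abstract, the code below uses scikit-learn's generic NMF with two components as a stand-in for the paper's specialized rank-two NMF, and simply splits the largest cluster at each step, a simplification of the paper's error-minimizing selection rule.

    # Rough sketch of the hierarchical splitting loop described above. The
    # generic sklearn NMF with 2 components stands in for the paper's rank-two
    # NMF, and the cluster to split is chosen as the largest one (a
    # simplification of the paper's error-minimizing selection rule).
    import numpy as np
    from sklearn.decomposition import NMF

    def hierarchical_rank2_clustering(X, n_clusters):
        """X: nonnegative (bands x pixels) matrix; returns a list of pixel-index arrays."""
        clusters = [np.arange(X.shape[1])]
        while len(clusters) < n_clusters:
            i = max(range(len(clusters)), key=lambda k: len(clusters[k]))  # largest cluster
            idx = clusters.pop(i)
            # rank-two NMF of the selected cluster (pixels as samples)
            W = NMF(n_components=2, init='nndsvda', max_iter=500).fit_transform(X[:, idx].T)
            labels = np.argmax(W, axis=1)              # assign each pixel to one of the 2 factors
            left, right = idx[labels == 0], idx[labels == 1]
            if len(left) == 0 or len(right) == 0:      # degenerate split: keep cluster intact
                clusters.append(idx)
                break
            clusters.extend([left, right])
        return clusters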

Citation Context

...ember, that is, for all 1 ≤ k ≤ r there exists j such that M(:, j) = W(:, k). This is the so-called pure-pixel assumption. The pure-pixel assumption is equivalent to the separability assumption (see [21] and the references therein) which makes the corresponding NMF problem tractable, even in the presence of noise [5]. Hence, blind HU can be solved efficiently under the pure-pixel assumption. Mathemat...
