Hyperspectral Remote Sensing Data Analysis and Future Challenges
"... Abstract—Hyperspectral remote sensing ..."
Fast conical hull algorithms for near-separable nonnegative matrix factorization
 In ACM/IEEE conference on Supercomputing
, 2009
Abstract

Cited by 18 (1 self)
The separability assumption (Donoho & Stodden, 2003; Arora et al., 2012a) turns nonnegative matrix factorization (NMF) into a tractable problem. Recently, a new class of provably correct NMF algorithms has emerged under this assumption. In this paper, we reformulate the separable NMF problem as that of finding the extreme rays of the conical hull of a finite set of vectors. From this geometric perspective, we derive new separable NMF algorithms that are highly scalable, empirically robust to noise, and have several other favorable properties in relation to existing methods. A parallel implementation of our algorithm demonstrates high scalability on shared- and distributed-memory machines.
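The conical-hull view admits simple greedy selection schemes. Below is a minimal, hypothetical sketch of one such scheme (a successive-projection-style selection, not the paper's parallel algorithm), assuming the non-extreme columns are convex combinations of the extreme ones:

```python
import numpy as np

def spa(X, k):
    """Successive-projection-style sketch: pick k columns of X whose
    conical hull contains the remaining columns.  Assumes mixture
    columns are convex combinations of the extreme columns, so the
    largest-norm residual column is always an extreme column."""
    R = X.astype(float).copy()
    indices = []
    for _ in range(k):
        j = int(np.argmax(np.sum(R ** 2, axis=0)))  # largest residual norm
        indices.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)  # project out the selected direction
    return indices
```

On a toy matrix whose first three columns are pure and whose remaining columns are strict mixtures, the sketch returns the pure-column indices.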
A signal processing perspective on hyperspectral unmixing: Insights from remote sensing
 IEEE Signal Processing Magazine
, 2014
Abstract

Cited by 17 (8 self)
Blind hyperspectral unmixing (HU), also known as unsupervised HU, is one of the most prominent research topics in signal processing for hyperspectral remote sensing [1, 2]. Blind HU aims at identifying materials present in a captured scene, ...
Sparse and unique nonnegative matrix factorization through data preprocessing
 Journal of Machine Learning Research
Abstract

Cited by 14 (6 self)
Nonnegative matrix factorization (NMF) has become a very popular technique in machine learning because it automatically extracts meaningful features through a sparse and part-based representation. However, NMF has the drawback of being highly ill-posed, that is, there typically exist many different but equivalent factorizations. In this paper, we introduce a completely new way of obtaining more well-posed NMF problems whose solutions are sparser. Our technique is based on preprocessing the nonnegative input data matrix, and relies on the theory of M-matrices and the geometric interpretation of NMF. This approach provably leads to optimal and sparse solutions under the separability assumption of Donoho and Stodden (2003), and, for rank-three matrices, makes the number of exact factorizations finite. We illustrate the effectiveness of our technique on several image data sets.
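The ill-posedness mentioned in this abstract is easy to exhibit. Even the simplest ambiguity, a positive diagonal rescaling of the factors, already yields distinct exact factorizations of the same matrix (a hypothetical toy example; the paper addresses far richer ambiguities than scaling):

```python
import numpy as np

# One exact nonnegative factorization X = W1 @ H1 (made-up data).
W1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
H1 = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
X = W1 @ H1

# Rescaling by any positive diagonal D gives a second, equally exact
# nonnegative factorization (W1 D)(D^{-1} H1) of the same X.
D = np.diag([2.0, 0.5])
W2 = W1 @ D
H2 = np.linalg.inv(D) @ H1
assert np.allclose(W2 @ H2, X)
assert not np.allclose(W1, W2)
```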
Robust near-separable nonnegative matrix factorization using linear optimization
 Journal of Machine Learning Research
, 2014
Robustness analysis of Hottopixx, a linear programming model for factoring nonnegative matrices
 SIAM Journal on Matrix Analysis and Applications
, 2013
Greedy algorithms for pure pixels identification in hyperspectral unmixing: A multiple-measurement vector viewpoint
 in Proc. EUSIPCO’13
Abstract

Cited by 6 (3 self)
This paper studies a multiple-measurement vector (MMV)-based sparse regression approach to blind hyperspectral unmixing. In general, sparse regression requires a dictionary. The considered approach uses the measured hyperspectral data as the dictionary, thereby intending to represent the whole measured data using the fewest measured hyperspectral vectors. We tackle this self-dictionary MMV (SD-MMV) approach using greedy pursuit. It is shown that the resulting greedy algorithms are identical or very similar to some representative pure pixel identification algorithms, such as vertex component analysis. Hence, our study provides a new dimension for understanding and interpreting pure pixel identification methods. We also prove that in the noiseless case, the greedy SD-MMV algorithms guarantee perfect identification of pure pixels when the pure pixel assumption holds.
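The self-dictionary idea can be sketched as a simultaneous greedy pursuit in which the dictionary is the data itself. The following is an illustrative, hypothetical variant (the paper's exact selection rule may differ): at each step, pick the measured column most correlated with the residual of the whole data set, then project that column's span out of the residual.

```python
import numpy as np

def sd_greedy(X, k):
    """Self-dictionary simultaneous greedy pursuit (hedged sketch).
    Selects k columns of X that jointly represent all columns of X."""
    Xn = X / np.linalg.norm(X, axis=0)      # normalized atoms
    R = X.astype(float).copy()              # residual of the whole data
    selected = []
    for _ in range(k):
        # aggregate correlation of each atom with the full residual
        scores = np.linalg.norm(Xn.T @ R, axis=1)
        scores[selected] = -np.inf          # never reselect an atom
        selected.append(int(np.argmax(scores)))
        # project the residual onto the orthogonal complement
        Q, _ = np.linalg.qr(X[:, selected])
        R = X - Q @ (Q.T @ X)
    return selected
```

After k steps the selected columns span the data exactly when the data have rank k, so the residual vanishes in the noiseless case.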
The why and how of nonnegative matrix factorization
 Regularization, Optimization, Kernels, and Support Vector Machines. Chapman & Hall/CRC
, 2014
Random projections for nonnegative matrix factorization. arXiv preprint arXiv:1405.4275
, 2014
Abstract

Cited by 5 (0 self)
Nonnegative matrix factorization (NMF) is a widely used tool for exploratory data analysis in many disciplines. In this paper, we describe an approach to NMF based on random projections and give a geometric analysis of a prototypical algorithm. Our main result shows the proto-algorithm requires κ̄ k log k optimizations to find all the extreme columns of the matrix, where k is the number of extreme columns and κ̄ is a geometric condition number. We show empirically that the proto-algorithm is robust to noise and well-suited to modern distributed computing architectures.
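The geometric idea behind such proto-algorithms is that extreme columns are exactly the columns that maximize some linear functional. A minimal sketch (illustrative only; the paper's algorithm, its κ̄ k log k projection count, and its robustness analysis are more refined), assuming the data are scaled so that mixture columns lie in the convex hull of the extreme columns:

```python
import numpy as np

def find_extreme_columns(X, n_projections, rng):
    """For each random direction g, the column maximizing g @ x is an
    extreme column of the convex hull of the columns of X.  Repeating
    with many random directions collects (with high probability) all
    extreme columns.  Hedged sketch, not the paper's exact procedure."""
    found = set()
    for _ in range(n_projections):
        g = rng.standard_normal(X.shape[0])   # random projection direction
        found.add(int(np.argmax(g @ X)))      # maximizer is extreme
    return found
```

Strict mixtures can never be the unique maximizer of a generic linear functional, so the returned set only ever contains extreme columns.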
Hyperspectral image unmixing via bilinear generalized approximate message passing
 Proc. SPIE
, 2013
Abstract

Cited by 5 (3 self)
In hyperspectral unmixing, the objective is to decompose an electromagnetic spectral dataset measured over M spectral bands and T pixels into N constituent material spectra (or “endmembers”) with corresponding spatial abundances. In this paper, we propose a novel approach to hyperspectral unmixing (i.e., joint estimation of endmembers and abundances) based on loopy belief propagation. In particular, we employ the bilinear generalized approximate message passing algorithm (BiG-AMP), a recently proposed belief-propagation-based approach to matrix factorization, in a “turbo” framework that enables the exploitation of spectral coherence in the endmembers, as well as spatial coherence in the abundances. In conjunction, we propose an expectation-maximization (EM) technique that can be used to automatically tune the prior statistics assumed by turbo BiG-AMP. Numerical experiments on synthetic and real-world data confirm the state-of-the-art performance of our approach.
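The linear mixing model X ≈ W H underlying this abstract (M bands, N endmembers, T pixels) can be simulated directly. The sketch below is a hypothetical synthetic setup with a plain least-squares baseline for the known-endmember case, for contrast only; it is not the BiG-AMP message-passing algorithm:

```python
import numpy as np

# Linear mixing model: each pixel's spectrum is a nonnegative
# combination of N endmember spectra.  All data here are synthetic.
M, N, T = 6, 3, 4                        # bands, endmembers, pixels
rng = np.random.default_rng(1)
W = rng.random((M, N))                   # endmember spectra, M x N
H = rng.dirichlet(np.ones(N), size=T).T  # abundances, columns sum to 1
X = W @ H                                # noiseless observations, M x T

# With noiseless data and full-column-rank W, ordinary least squares
# already recovers the true (nonnegative) abundances exactly; the
# paper's contribution is the much harder joint, noisy estimation.
H_hat, *_ = np.linalg.lstsq(W, X, rcond=None)
assert np.allclose(H_hat, H, atol=1e-8)
```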