Results 1–10 of 18
Fast nonnegative matrix factorization: An active-set-like method and comparisons
 SIAM Journal on Scientific Computing
, 2011
Cited by 35 (6 self)
Abstract. Nonnegative matrix factorization (NMF) is a dimension reduction method that has been widely used for numerous applications including text mining, computer vision, pattern discovery, and bioinformatics. A mathematical formulation for NMF appears as a nonconvex optimization problem, and various types of algorithms have been devised to solve the problem. The alternating nonnegative least squares (ANLS) framework is a block coordinate descent approach for solving NMF, which was recently shown to be theoretically sound and empirically efficient. In this paper, we present a novel algorithm for NMF based on the ANLS framework. Our new algorithm builds upon the block principal pivoting method for the nonnegativity-constrained least squares problem that overcomes a limitation of the active set method. We introduce ideas that efficiently extend the block principal pivoting method within the context of NMF computation. Our algorithm inherits the convergence property of the ANLS framework and can easily be extended to other constrained NMF formulations. Extensive computational comparisons using data sets that are from real-life applications as well as those artificially generated show that the proposed algorithm provides state-of-the-art performance in terms of computational speed.
The taxation of capital returns in overlapping generations economies without financial assets
, 2008
Nonnegative Matrix Factorization: A Comprehensive Review
 IEEE Trans. Knowledge and Data Eng.
, 2013
Cited by 17 (2 self)
Nonnegative Matrix Factorization (NMF), a relatively novel paradigm for dimensionality reduction, has been in the ascendant since its inception. It incorporates the nonnegativity constraint and thus obtains a parts-based representation, which correspondingly enhances the interpretability of the problem. This survey paper mainly focuses on the theoretical research into NMF over the last 5 years, where the principles, basic models, properties, and algorithms of NMF along with its various modifications, extensions, and generalizations are summarized systematically. The existing NMF algorithms are divided into four categories: Basic NMF (BNMF),
ON TENSORS, SPARSITY, AND NONNEGATIVE FACTORIZATIONS
, 2012
Cited by 17 (1 self)
Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data as compared to the typical assumption of a Gaussian distribution. Under a Poisson assumption, we fit a model to observed data using the negative log-likelihood score. We present a new algorithm for Poisson tensor factorization called CANDECOMP–PARAFAC alternating Poisson regression (CP-APR) that is based on a majorization-minimization approach. It can be shown that CP-APR is a generalization of the Lee–Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mild conditions. We also explain how to implement CP-APR for large-scale sparse tensors and present results on several data sets, both real and simulated.
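For the matrix case, the Lee–Seung multiplicative updates for the Poisson/KL objective that CP-APR generalizes can be sketched as follows. This is a NumPy sketch of the classical matrix updates, not the paper's tensor algorithm; the damping constant eps and all names are ours:

```python
import numpy as np

def nmf_kl_multiplicative(M, r, n_iter=200, eps=1e-12, seed=0):
    """Lee–Seung multiplicative updates for the Poisson/KL objective.

    Matrix sketch of the updates that CP-APR generalizes to tensors.
    Each update multiplies the factor elementwise by a nonnegative
    ratio, so nonnegativity is preserved and the KL objective is
    nonincreasing.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(n_iter):
        R = M / (W @ H + eps)                      # elementwise ratio M / (WH)
        W *= (R @ H.T) / (H.sum(axis=1) + eps)     # normalize by row sums of H
        R = M / (W @ H + eps)
        H *= (W.T @ R) / (W.sum(axis=0)[:, None] + eps)
    return W, H
```

The zeros in sparse count data sit naturally in this objective: a zero entry of M contributes only the (WH) term to the KL score, which is exactly the behavior the Poisson likelihood argument in the abstract refers to.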
Using underapproximations for sparse nonnegative matrix factorization
 Pattern Recognition
, 2010
Cited by 16 (5 self)
Nonnegative Matrix Factorization (NMF) has gathered a lot of attention in the last decade and has been successfully applied in numerous applications. It consists in the factorization of a nonnegative matrix by the product of two low-rank nonnegative matrices: M ≈ V W. In this paper, we attempt to solve NMF problems in a recursive way. In order to do that, we introduce a new variant called Nonnegative Matrix Underapproximation (NMU) by adding the upper bound constraint V W ≤ M. Besides enabling a recursive procedure for NMF, these inequalities make NMU particularly well-suited to achieve a sparse representation, improving the part-based decomposition. Although NMU is NP-hard (which we prove using its equivalence with the maximum edge biclique problem in bipartite graphs), we present two approaches to solve it: a method based on convex reformulations and a method based on Lagrangian relaxation. Finally, we provide some encouraging numerical results for image processing applications.
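The recursive procedure the abstract describes can be illustrated with a simple heuristic: compute a rank-one underapproximation, subtract it from the residual, and repeat. The sketch below enforces the constraint by clipping after each least-squares update; it is an illustrative stand-in under our own naming, not the convex-reformulation or Lagrangian-relaxation methods of the paper:

```python
import numpy as np

def rank_one_underapprox(M, n_iter=100, eps=1e-12, seed=0):
    """Heuristic rank-one underapproximation: x y^T <= M, x, y >= 0.

    Alternating least squares with a clipping step that keeps the
    underapproximation constraint satisfied.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    x = rng.random(m)
    y = rng.random(n)
    for _ in range(n_iter):
        # Least-squares update for x, then clip so x_i * y_j <= M_ij.
        x = np.maximum(M @ y, 0) / (y @ y + eps)
        yb = np.maximum(y, eps)
        x = np.minimum(x, np.min(np.where(y > eps, M / yb, np.inf), axis=1))
        # Symmetric update for y.
        y = np.maximum(M.T @ x, 0) / (x @ x + eps)
        xb = np.maximum(x, eps)[:, None]
        y = np.minimum(y, np.min(np.where(x[:, None] > eps, M / xb, np.inf), axis=0))
    return x, y

def nmu(M, r):
    """Recursive NMU sketch: peel off r rank-one underapproximations."""
    R = M.astype(float).copy()
    factors = []
    for _ in range(r):
        x, y = rank_one_underapprox(R)
        factors.append((x, y))
        R = np.maximum(R - np.outer(x, y), 0)  # residual stays nonnegative
    return factors
```

Because each rank-one piece never exceeds the residual, the subtraction leaves a nonnegative matrix, which is what makes the recursion well-defined and tends to produce the sparse, part-based factors the abstract mentions.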
Sparse and unique nonnegative matrix factorization through data preprocessing
 Journal of Machine Learning Research
Cited by 14 (6 self)
Nonnegative matrix factorization (NMF) has become a very popular technique in machine learning because it automatically extracts meaningful features through a sparse and part-based representation. However, NMF has the drawback of being highly ill-posed, that is, there typically exist many different but equivalent factorizations. In this paper, we introduce a completely new way of obtaining more well-posed NMF problems whose solutions are sparser. Our technique is based on the preprocessing of the nonnegative input data matrix, and relies on the theory of M-matrices and the geometric interpretation of NMF. This approach provably leads to optimal and sparse solutions under the separability assumption of Donoho and Stodden (2003), and, for rank-three matrices, makes the number of exact factorizations finite. We illustrate the effectiveness of our technique on several image data sets.
On the geometric interpretation of the nonnegative rank
, 2010
Cited by 8 (0 self)
The nonnegative rank of a nonnegative matrix is the minimum number of nonnegative rank-one factors needed to reconstruct it exactly. The problem of determining this rank and computing the corresponding nonnegative factors is difficult; however, it has many potential applications, e.g., in data mining, graph theory and computational geometry. In particular, it can be used to characterize the minimal size of any extended reformulation of a given combinatorial optimization program. In this paper, we introduce and study a related quantity, called the restricted nonnegative rank. We show that computing this quantity is equivalent to a problem in polyhedral combinatorics, and fully characterize its computational complexity. This in turn sheds new light on the nonnegative rank problem, and in particular allows us to provide new improved lower bounds based on its geometric interpretation. We apply these results to slack matrices and linear Euclidean distance matrices and obtain counterexamples to two conjectures of Beasley and Laffey, namely we show that the nonnegative rank of linear Euclidean distance matrices is not necessarily equal to their dimension, and that the rank of a matrix is not always greater than the nonnegative rank of its square.
Dimensionality Reduction, Classification, and Spectral Mixture Analysis using Nonnegative Underapproximation
Cited by 8 (7 self)
Nonnegative Matrix Factorization (NMF) and its variants have recently been successfully used as dimensionality reduction techniques for identification of the materials present in hyperspectral images. In this paper, we present a new variant of NMF called Nonnegative Matrix Underapproximation (NMU): it is based on the introduction of underapproximation constraints, which enable one to extract features in a recursive way, like PCA, while preserving nonnegativity. Moreover, we explain why these additional constraints make NMU particularly well-suited to achieve a parts-based and sparse representation of the data, enabling it to recover the constitutive elements in hyperspectral data. We experimentally show the efficiency of this new strategy on hyperspectral images associated with space object material identification, and on HYDICE and related remote sensing images.
Document Classification Using Nonnegative Matrix Factorization and Underapproximation
 In Proc. of the IEEE International Symposium on Circuits and Systems (ISCAS), 2009
Cited by 4 (1 self)
In this study, we use nonnegative matrix factorization (NMF) and nonnegative matrix underapproximation (NMU) approaches to generate feature vectors that can be used to cluster Aviation Safety Reporting System (ASRS) documents obtained from the Distributed National ASAP Archive (DNAA). By preserving nonnegativity, both NMF and NMU facilitate a sum-of-parts representation of the underlying term usage patterns in the ASRS document collection. Both the training and test sets of ASRS documents are parsed and then factored by both algorithms to produce reduced-rank representations of the entire document space. The resulting feature and coefficient matrix factors are used to cluster ASRS documents so that the (known) associated anomalies of training documents are directly mapped to the feature vectors. Dominant features of test documents are then used to generate anomaly relevance scores for those documents. We demonstrate that the approximate solution obtained by NMU using Lagrangian duality can lead to a better sum-of-parts representation and document classification accuracy.
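A generic version of this pipeline, factoring a term-document matrix and using each document's dominant NMF feature as a cluster label, can be sketched with scikit-learn. The toy corpus and all names below are ours; the paper uses parsed ASRS reports and its own NMF/NMU implementations:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-in corpus; the paper's data is parsed ASRS incident reports.
train_docs = [
    "runway incursion during taxi to gate",
    "taxi clearance confusion on the runway",
    "altitude deviation during climb after departure",
    "climb instruction missed, altitude bust on departure",
]

# Nonnegative docs-by-terms matrix (tf-idf weights).
tfidf = TfidfVectorizer()
A = tfidf.fit_transform(train_docs)

# Rank-2 NMF: W holds per-document feature coefficients,
# model.components_ holds per-feature term weights.
model = NMF(n_components=2, init="nndsvd", random_state=0)
W = model.fit_transform(A)

# Dominant feature index per document, used as a crude cluster label;
# the paper instead maps known training anomalies onto the features.
labels = W.argmax(axis=1)
```

For unseen documents, `model.transform` on their tf-idf rows yields coefficient vectors whose dominant features would drive the anomaly relevance scores described above.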