Results 11–20 of 90
Nonnegative Matrix Factorization via Rank-One Downdate
Abstract

Cited by 15 (1 self)
Nonnegative matrix factorization (NMF) was popularized as a tool for data mining by Lee and Seung in 1999. NMF attempts to approximate a matrix with nonnegative entries by a product of two low-rank matrices, also with nonnegative entries. We propose an algorithm called rank-one downdate (R1D) for computing an NMF that is partly motivated by the singular value decomposition. This algorithm computes the dominant singular values and vectors of adaptively determined submatrices of a matrix. On each iteration, R1D extracts a rank-one submatrix from the original matrix according to an objective function. We establish a theoretical result that maximizing this objective function corresponds to correctly classifying articles in a nearly separable corpus. We also provide computational experiments showing the success of this method in identifying features in realistic datasets. The method is also much faster than other NMF routines.

1. Nonnegative Matrix Factorization. Several problems in information retrieval can be posed as low-rank matrix approximation. The seminal paper by Deerwester et al. (1990) on latent semantic indexing (LSI) showed that approximating a term-document matrix describing a corpus of articles via the SVD led to powerful query and classification techniques. A drawback of LSI is that the low-rank factors in general will have both positive and negative entries, and there is no obvious statistical interpretation of the negative entries. This led Lee and Seung (1999), among others, to propose nonnegative matrix
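The rank-one extraction at the heart of R1D can be sketched in a few lines. The version below is a simplified illustration, not the authors' algorithm: it takes the dominant singular pair of the full residual via power iteration, clips it to the nonnegative orthant, and downdates, whereas R1D selects an adaptively determined submatrix via its objective function. The function name and all parameters are our own.

```python
import numpy as np

def rank_one_nmf(A, k, n_iter=50):
    """Greedy rank-one NMF sketch: repeatedly take the dominant singular
    pair of the residual, clip to nonnegative, and subtract (downdate).
    Illustrates the flavor of R1D, not its submatrix selection."""
    R = A.astype(float).copy()
    m, n = R.shape
    W, H = np.zeros((m, k)), np.zeros((k, n))
    rng = np.random.default_rng(0)
    for j in range(k):
        # Power iteration for the dominant singular pair of the residual.
        v = rng.standard_normal(n)
        for _ in range(n_iter):
            u = R @ v
            u /= np.linalg.norm(u) + 1e-12
            v = R.T @ u
            sigma = np.linalg.norm(v)
            v /= sigma + 1e-12
        # Fix the sign ambiguity, then clip to the nonnegative orthant.
        if u.sum() < 0:
            u, v = -u, -v
        w, h = np.maximum(u, 0.0), np.maximum(sigma * v, 0.0)
        W[:, j], H[j, :] = w, h
        R = R - np.outer(w, h)  # downdate
    return W, H
```

For a nonnegative input, the dominant singular pair is itself nonnegative (Perron-Frobenius), so the first clipping step is essentially a no-op and the method starts from the best rank-one approximation.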
Using underapproximations for sparse nonnegative matrix factorization
 Pattern Recognition
, 2010
Abstract

Cited by 14 (4 self)
Nonnegative Matrix Factorization (NMF) has gathered a lot of attention in the last decade and has been successfully applied in numerous applications. It consists of factorizing a nonnegative matrix as the product of two low-rank nonnegative matrices: M ≈ V W. In this paper, we attempt to solve NMF problems in a recursive way. In order to do that, we introduce a new variant called Nonnegative Matrix Underapproximation (NMU) by adding the upper bound constraint V W ≤ M. Besides enabling a recursive procedure for NMF, these inequalities make NMU particularly well-suited to achieving a sparse representation, improving the part-based decomposition. Although NMU is NP-hard (which we prove using its equivalence with the maximum-edge biclique problem in bipartite graphs), we present two approaches to solve it: a method based on convex reformulations and a method based on Lagrangian relaxation. Finally, we provide some encouraging numerical results for image processing applications.
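The upper bound V W ≤ M is what enables the recursion: after subtracting a feasible rank-one underapproximation, the residual stays nonnegative and can be factored again. The sketch below illustrates that mechanism with a simple alternating heuristic (for fixed y ≥ 0, the largest feasible x has x_i = min_j M_ij / y_j); it is not the paper's convex reformulation or Lagrangian relaxation, and all names are illustrative.

```python
import numpy as np

def rank_one_underapprox(M, n_iter=30):
    """Heuristic rank-one underapproximation x y^T <= M (elementwise).
    For fixed y >= 0, the largest feasible x is x_i = min_j M_ij / y_j
    over columns with y_j > 0; we alternate the symmetric update for y.
    An illustrative sketch, not the paper's algorithms."""
    y = M.mean(axis=0)
    x = np.zeros(M.shape[0])
    for _ in range(n_iter):
        with np.errstate(divide="ignore", invalid="ignore"):
            x = np.where(y > 0, M / y, np.inf).min(axis=1)
            x[~np.isfinite(x)] = 0.0
            y = np.where(x[:, None] > 0, M / x[:, None], np.inf).min(axis=0)
            y[~np.isfinite(y)] = 0.0
    return x, y
```

By construction x_i y_j ≤ M_ij for all i, j, so the residual M − x yᵀ is nonnegative and the procedure can recurse on it.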
Sparse and unique nonnegative matrix factorization through data preprocessing
 Journal of Machine Learning Research
Abstract

Cited by 14 (6 self)
Nonnegative matrix factorization (NMF) has become a very popular technique in machine learning because it automatically extracts meaningful features through a sparse and part-based representation. However, NMF has the drawback of being highly ill-posed, that is, there typically exist many different but equivalent factorizations. In this paper, we introduce a completely new way of obtaining more well-posed NMF problems whose solutions are sparser. Our technique is based on preprocessing the nonnegative input data matrix, and relies on the theory of M-matrices and the geometric interpretation of NMF. This approach provably leads to optimal and sparse solutions under the separability assumption of Donoho and Stodden (2003), and, for rank-three matrices, makes the number of exact factorizations finite. We illustrate the effectiveness of our technique on several image data sets.
Sparse nonnegative tensor factorization using column-wise coordinate descent
 Pattern Recognition
, 2012
Abstract

Cited by 13 (0 self)
Many applications in computer vision, biomedical informatics, and graphics deal with data in matrix or tensor form. Nonnegative matrix and tensor factorizations, which extract data-dependent nonnegative basis functions, have been commonly applied to the analysis of such data for data compression, visualization, and detection of hidden information (factors). In this paper, we present a fast and flexible algorithm for sparse nonnegative tensor factorization (SNTF) based on column-wise coordinate descent (CCD). Different from traditional coordinate descent, which updates one element at a time, CCD updates one column vector simultaneously. Our empirical results on higher-mode images, such as brain MRI images, gene expression images, and hyperspectral images, show that the proposed algorithm is 1–2 orders of magnitude faster than several state-of-the-art algorithms.
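The column-wise update has a closed form in the matrix (2-mode) case, which conveys the idea even though the paper targets tensors. The sketch below performs one CCD/HALS-style sweep over the columns of W for A ≈ WH with an ℓ1 penalty encouraging sparsity; the function and parameter names are ours, not the paper's.

```python
import numpy as np

def ccd_sweep_W(A, W, H, lam=0.0):
    """One column-wise coordinate descent sweep on W for A ~= W H with an
    l1 penalty lam. Each column of W is minimized in closed form while the
    others are held fixed (Gauss-Seidel style). A 2-mode sketch of the
    column-wise idea; the paper applies it to tensor factors."""
    HHt = H @ H.T   # small k x k Gram matrix, shared by all columns
    AHt = A @ H.T
    for k in range(W.shape[1]):
        denom = HHt[k, k] + 1e-12
        # Gradient of 0.5*||A - WH||^2 with respect to W[:, k].
        grad = AHt[:, k] - W @ HHt[:, k]
        W[:, k] = np.maximum(0.0, W[:, k] + (grad - lam) / denom)
    return W
```

With lam = 0 each column update is an exact coordinate minimizer, so the objective is nonincreasing across sweeps.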
Fast Singular Value Thresholding without Singular Value Decomposition
Abstract

Cited by 12 (2 self)
Singular value thresholding (SVT) is a basic subroutine in many popular numerical schemes for solving the nuclear-norm minimization that arises from low-rank matrix recovery problems such as matrix completion. The conventional approach for SVT is first to find the singular value decomposition (SVD) and then to shrink the singular values. However, such an approach is time-consuming under some circumstances, especially when the rank of the resulting matrix is not significantly low compared to its dimension. In this paper, we propose a fast algorithm for directly computing SVT for general dense matrices without using SVDs. Our algorithm is based on matrix Newton iteration for matrix functions, and the convergence is theoretically guaranteed. Numerical experiments show that our proposed algorithm is more efficient than the SVD-based approaches for general dense matrices.
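For reference, the conventional SVD-based SVT that the paper seeks to avoid is only a few lines: compute the SVD and soft-threshold the singular values (the proximal operator of the nuclear norm). A minimal sketch:

```python
import numpy as np

def svt_via_svd(X, tau):
    """Conventional singular value thresholding: full SVD, then shrink
    each singular value by tau. This is the baseline the paper replaces,
    not its SVD-free Newton-iteration algorithm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold the spectrum
    return U @ np.diag(s_shrunk) @ Vt
```

The cost is dominated by the full SVD, which is exactly what hurts when the thresholded matrix is not of low rank relative to its dimension.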
Sparse latent semantic analysis
 Workshop on Neural Information Processing Systems
, 2010
Abstract

Cited by 11 (1 self)
Latent semantic analysis (LSA), one of the most popular unsupervised dimension reduction tools, has a wide range of applications in text mining and information retrieval. The key idea of LSA is to learn a projection matrix that maps the high-dimensional vector space representations of documents to a lower-dimensional latent space, i.e., the so-called latent topic space. In this paper, we propose a new model called Sparse LSA, which produces a sparse projection matrix via ℓ1 regularization. Compared to traditional LSA, Sparse LSA selects only a small number of relevant words for each topic and hence provides a compact representation of topic-word relationships. Moreover, Sparse LSA is computationally very efficient, with much less memory usage for storing the projection matrix. Furthermore, we propose two important extensions of Sparse LSA: group-structured Sparse LSA and nonnegative Sparse LSA. We conduct experiments on several benchmark datasets and compare Sparse LSA and its extensions with several widely used methods, e.g., LSA, Sparse Coding, and LDA. Empirical results suggest that Sparse LSA achieves performance similar to LSA but is more efficient in projection computation and storage, and also better explains the topic-word relationships.
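The ℓ1 penalty acts through elementwise soft-thresholding, the proximal operator of the ℓ1 norm. The sketch below applies it post hoc to an SVD-based LSA projection to show how small word weights get zeroed out per topic; Sparse LSA itself optimizes the penalized objective jointly, and the function names here are illustrative.

```python
import numpy as np

def soft_threshold(A, lam):
    """Elementwise soft-thresholding: the proximal operator that an l1
    penalty induces on the projection matrix."""
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

def sparse_lsa_sketch(X, d, lam):
    """Illustrative post-hoc sparsification: truncated SVD gives the
    dense LSA projection; thresholding zeroes small word weights so each
    topic keeps only a few relevant words. Not the paper's joint model."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    proj = Vt[:d]               # d x n_words dense projection matrix
    return soft_threshold(proj, lam)
```

A sparse projection needs storage proportional only to its nonzeros, which is the memory saving the abstract refers to.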
Nonnegative Tensor Factorization Based on Alternating Large-scale Nonnegativity-constrained Least Squares
Abstract

Cited by 9 (2 self)
Nonnegative matrix factorization (NMF) and nonnegative tensor factorization (NTF) have attracted much attention and have been successfully applied to numerous data analysis problems where the components of the data are necessarily nonnegative, such as chemical concentrations in experimental results or pixels in digital images. In particular, Andersson and Bro's PARAFAC algorithm with nonnegativity constraints (AB-PARAFAC-NC) provided the state-of-the-art NTF algorithm, which uses Bro and de Jong's nonnegativity-constrained least squares with a single right-hand side (NLS/SRHS). However, solving an NLS problem with multiple right-hand sides (NLS/MRHS) via multiple NLS/SRHS problems is not recommended due to hidden redundant computation. In this paper, we propose an NTF algorithm based on alternating large-scale nonnegativity-constrained least squares (NTF/ANLS) using NLS/MRHS. In addition, we introduce an algorithm for regularized NTF based on ANLS (RNTF/ANLS). Our experiments illustrate that our NTF algorithms outperform AB-PARAFAC-NC in terms of computing speed on several data sets we tested.
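The shared work in NLS/MRHS is easy to see: all columns of the unknown share the Gram matrix AᵀA, which needs to be formed only once. The sketch below exploits this with a simple projected-gradient solver; it is a stand-in for, not an implementation of, the authors' ANLS machinery, and all names are ours.

```python
import numpy as np

def nnls_mrhs(A, B, n_iter=2000):
    """Nonnegativity-constrained least squares with multiple right-hand
    sides: min ||A X - B||_F with X >= 0. Solving all columns jointly
    lets A^T A and A^T B be formed once -- the shared computation that
    makes MRHS cheaper than repeated SRHS solves. Solver here is plain
    projected gradient, not the paper's method."""
    AtA, AtB = A.T @ A, A.T @ B
    L = np.linalg.norm(AtA, 2)          # Lipschitz constant of the gradient
    X = np.zeros(AtB.shape)
    for _ in range(n_iter):
        X = np.maximum(0.0, X - (AtA @ X - AtB) / L)
    return X
```

Solving each column separately would recompute (or re-factor) the same AᵀA for every right-hand side, which is the hidden redundancy the abstract points out.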
Efficient Nonnegative Matrix Factorization with Random Projections
Abstract

Cited by 7 (3 self)
Recent years have witnessed a surge of interest in Nonnegative Matrix Factorization (NMF) in the data mining and machine learning fields. Despite its elegant theory and empirical success, one limitation of NMF-based algorithms is that they need to store the whole data matrix throughout the entire process, which incurs expensive storage and computation costs when the data set is large and high-dimensional. In this paper, we propose to apply random projection techniques to accelerate the NMF process. Both theoretical analysis and experimental validation are presented to demonstrate the effectiveness of the proposed strategy.
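The projection step itself is a single matrix multiply: a k × m Gaussian matrix compresses the data so that downstream updates touch k rows instead of m while approximately preserving geometry (Johnson-Lindenstrauss). The sketch below shows only this step, with our own names; wiring it into the NMF iterations is the paper's contribution.

```python
import numpy as np

def random_project(A, k, seed=0):
    """Gaussian random projection: R has k rows of i.i.d. N(0, 1/k)
    entries, so R @ A approximately preserves pairwise distances between
    the columns of A. A sketch of the compression step only."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((k, A.shape[0])) / np.sqrt(k)
    return R @ A
```

Note the projected matrix generally has negative entries, so the nonnegativity constraints must be handled with care when combining this with NMF updates, as the paper discusses.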
Visual Tracking via Online Nonnegative Matrix Factorization
Abstract

Cited by 6 (2 self)
In visual tracking, holistic and part-based representations are both popular choices for modeling target appearance. The former is known for great efficiency and convenience, the latter for robustness against local appearance or shape variations. Based on nonnegative matrix factorization (NMF), we propose a novel visual tracker that takes advantage of both. The idea is to model the target appearance by a nonnegative combination of nonnegative components learned from examples observed in previous frames. To adapt NMF to the tracking context, we include sparsity and smoothness constraints in addition to the nonnegativity one. Furthermore, an online iterative learning algorithm, together with a proof of convergence, is proposed for efficient model updating. Putting these ingredients together within a particle filter framework, the proposed tracker, Constrained Online Nonnegative Matrix Factorization (CONMF), achieves robustness to challenging appearance variations and non-trivial deformations while running in real time. We evaluate the proposed tracker on various benchmark sequences containing targets undergoing large variations in scale, pose, or illumination. The robustness and efficiency of CONMF are validated in comparison with several state-of-the-art trackers.
Fast Nonnegative Tensor Factorization with an Active-Set-Like Method
Abstract

Cited by 6 (0 self)
We introduce an efficient algorithm for computing a low-rank nonnegative CANDECOMP/PARAFAC (NNCP) decomposition. In text mining, signal processing, and computer vision, among other areas, imposing nonnegativity constraints on the low-rank factors of matrices and tensors has been shown to be an effective technique providing physically meaningful interpretation. A principled methodology for computing NNCP is alternating nonnegative least squares, in which the nonnegativity-constrained least squares (NNLS) problems are solved in each iteration. In this chapter, we propose to solve the NNLS problems using the block principal pivoting method, which overcomes some difficulties of the classical active set method for NNLS problems with a large number of variables. We introduce techniques to accelerate the block principal pivoting method for multiple right-hand sides, which is typical in NNCP computation. Computational experiments show the state-of-the-art performance of the proposed method.