Results 1–10 of 45
NONNEGATIVE MATRIX FACTORIZATION BASED ON ALTERNATING NONNEGATIVITY CONSTRAINED LEAST SQUARES AND ACTIVE SET METHOD
Abstract

Cited by 86 (7 self)
The nonnegative matrix factorization (NMF) determines a lower rank approximation of a matrix A ∈ R^{m×n} ≈ WH, where an integer k < min(m, n) is given and nonnegativity is imposed on all components of the factors W ∈ R^{m×k} and H ∈ R^{k×n}. The NMF has attracted much attention for over a decade and has been successfully applied to numerous data analysis problems. In applications where the components of the data are necessarily nonnegative, such as chemical concentrations in experimental results or pixels in digital images, the NMF provides a more relevant interpretation of the results since it gives non-subtractive combinations of nonnegative basis vectors. In this paper, we introduce an algorithm for the NMF based on alternating nonnegativity constrained least squares (NMF/ANLS) and an active set based fast algorithm for nonnegativity constrained least squares with multiple right-hand-side vectors, and discuss its convergence properties and a rigorous convergence criterion based on the Karush-Kuhn-Tucker (KKT) conditions. In addition, we also describe algorithms for sparse NMFs and regularized NMF. We show how we impose a sparsity constraint on one of the factors by L1-norm minimization and discuss its convergence properties. Our algorithms are compared to other commonly used NMF algorithms in the literature on several test data sets in terms of their convergence behavior.
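The NMF/ANLS scheme this abstract describes can be sketched in a few lines; the function below is a minimal illustration only (the names, defaults, and the use of SciPy's active-set NNLS solver are assumptions, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(A, k, n_iter=30, seed=0):
    """Minimal sketch of NMF by alternating nonnegativity-constrained
    least squares: each half-step fixes one factor and solves an NNLS
    problem for every column (or row) of the other factor."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = np.zeros((k, n))
    for _ in range(n_iter):
        for j in range(n):   # fix W: min ||A[:, j] - W h||, h >= 0
            H[:, j], _ = nnls(W, A[:, j])
        for i in range(m):   # fix H: min ||A[i, :] - H^T w||, w >= 0
            W[i, :], _ = nnls(H.T, A[i, :])
    return W, H
```

Each subproblem here is solved exactly by an active set method, which is what makes the KKT-based convergence criterion mentioned above easy to check at every outer iteration.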
SVD based initialization: A head start for nonnegative matrix factorization
 PATTERN RECOGNITION
, 2007
Toward Faster Nonnegative Matrix Factorization: A New Algorithm and Comparisons
Abstract

Cited by 40 (5 self)
Nonnegative Matrix Factorization (NMF) is a dimension reduction method that has been widely used for various tasks including text mining, pattern analysis, clustering, and cancer class discovery. The mathematical formulation for NMF appears as a nonconvex optimization problem, and various types of algorithms have been devised to solve the problem. The alternating nonnegative least squares (ANLS) framework is a block coordinate descent approach for solving NMF, which was recently shown to be theoretically sound and empirically efficient. In this paper, we present a novel algorithm for NMF based on the ANLS framework. Our new algorithm builds upon the block principal pivoting method for the nonnegativity constrained least squares problem that overcomes some limitations of active set methods. We introduce ideas to efficiently extend the block principal pivoting method within the context of NMF computation. Our algorithm inherits the convergence theory of the ANLS framework and can easily be extended to other constrained NMF formulations. Comparisons of algorithms using datasets from real-life applications as well as artificially generated ones show that the proposed new algorithm outperforms existing ones in computational speed.
Distributed nonnegative matrix factorization for webscale dyadic data analysis on mapreduce
 In WWW ’10: Proceedings of the 19th international conference on World wide web
, 2010
Abstract

Cited by 39 (1 self)
The Web abounds with dyadic data that keeps increasing by every single second. Previous work has repeatedly shown the usefulness of extracting the interaction structure inside dyadic data [21, 9, 8]. A commonly used tool in extracting the underlying structure is matrix factorization, whose fame was further boosted in the Netflix challenge [26]. When we were trying to replicate the same success on real-world Web dyadic data, we were seriously challenged by the scalability of available tools. We therefore in this paper report our efforts on scaling up the nonnegative matrix factorization (NMF) technique. We show that by carefully partitioning the data and arranging the computations to maximize data locality and parallelism, factorizing a matrix with tens of millions of rows, hundreds of millions of columns, and billions of nonzero cells can be accomplished within tens of hours. This result effectively assures practitioners of the scalability of NMF on Web-scale dyadic data.
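The partitioning idea can be shown in miniature: the two matrix products a least squares NMF update of H needs, W^T A and H H^T, decompose over column shards, so each mapper can work on its own slice of the data and a reduce step sums the small k-by-k partial products. A single-machine sketch (purely illustrative, not the paper's MapReduce code; all names are assumptions):

```python
import numpy as np

def nmf_update_terms_sharded(A, W, H, n_shards=4):
    """Compute W^T A and H H^T over column shards of A and H.
    Each shard's contribution is independent ("map"); the k-by-k
    partial products are combined by summation ("reduce")."""
    shards = np.array_split(np.arange(A.shape[1]), n_shards)
    WtA = np.zeros_like(H)
    HHt = np.zeros((H.shape[0], H.shape[0]))
    for idx in shards:
        WtA[:, idx] = W.T @ A[:, idx]   # shard-local, no cross-shard data
        HHt += H[:, idx] @ H[:, idx].T  # partial sums, reduced by addition
    return WtA, HHt
```

Because only the small k-by-k and k-by-shard blocks ever cross shard boundaries, the large matrix A never needs to move, which is the data-locality point the abstract makes.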
Fast nonnegative matrix factorization: An activesetlike method and comparisons
 SIAM Journal on Scientific Computing
, 2011
Abstract

Cited by 35 (6 self)
Nonnegative matrix factorization (NMF) is a dimension reduction method that has been widely used for numerous applications including text mining, computer vision, pattern discovery, and bioinformatics. A mathematical formulation for NMF appears as a nonconvex optimization problem, and various types of algorithms have been devised to solve the problem. The alternating nonnegative least squares (ANLS) framework is a block coordinate descent approach for solving NMF, which was recently shown to be theoretically sound and empirically efficient. In this paper, we present a novel algorithm for NMF based on the ANLS framework. Our new algorithm builds upon the block principal pivoting method for the nonnegativity-constrained least squares problem that overcomes a limitation of the active set method. We introduce ideas that efficiently extend the block principal pivoting method within the context of NMF computation. Our algorithm inherits the convergence property of the ANLS framework and can easily be extended to other constrained NMF formulations. Extensive computational comparisons using data sets that are from real-life applications as well as those artificially generated show that the proposed algorithm provides state-of-the-art performance in terms of computational speed.
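A single right-hand-side version of block principal pivoting can be sketched as below. This is a simplification: the paper's algorithm handles many right-hand sides at once and groups columns that share a set partition, and all names here are illustrative. Unlike an active set method, which moves one variable at a time, all KKT-violating variables are exchanged in one block, with a one-at-a-time backup rule to guarantee termination:

```python
import numpy as np

def nnls_bpp(C, b, max_iter=100):
    """Sketch of block principal pivoting for min ||Cx - b||, x >= 0.
    Variables sit either in the passive set F (x free, dual y = 0)
    or its complement (x = 0, y free)."""
    n = C.shape[1]
    CtC, Ctb = C.T @ C, C.T @ b
    F = np.zeros(n, dtype=bool)       # start fully active: x = 0
    x, y = np.zeros(n), -Ctb.copy()   # then y = CtC @ x - Ctb
    backup, best = 3, n + 1
    for _ in range(max_iter):
        V = (F & (x < 0)) | (~F & (y < 0))   # KKT violations
        if not V.any():
            break
        if V.sum() < best:
            best, backup = int(V.sum()), 3   # full exchange is working
        elif backup > 0:
            backup -= 1
        else:                                # backup rule: flip one index
            i = np.max(np.where(V)[0])
            V = np.zeros(n, dtype=bool)
            V[i] = True
        F = F ^ V
        x, y = np.zeros(n), np.zeros(n)
        if F.any():
            x[F] = np.linalg.solve(CtC[np.ix_(F, F)], Ctb[F])
        y[~F] = CtC[np.ix_(~F, F)] @ x[F] - Ctb[~F]
    return np.maximum(x, 0.0)
```

The speedup over active set methods comes precisely from the block exchanges: an active set solver would need one least squares solve per moved variable.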
Bayesian nonnegative matrix factorization
 in Independent Component Analysis and Signal Separation, International Conference on
, 2009
Abstract

Cited by 28 (1 self)
We present a Bayesian treatment of nonnegative matrix factorization (NMF), based on a normal likelihood and exponential priors, and derive an efficient Gibbs sampler to approximate the posterior density of the NMF factors. On a chemical brain imaging data set, we show that this improves interpretability by providing uncertainty estimates. We discuss how the Gibbs sampler can be used for model order selection by estimating the marginal likelihood, and compare with the Bayesian information criterion. For computing the maximum a posteriori estimate we present an iterated conditional modes algorithm that rivals existing state-of-the-art NMF algorithms on an image feature extraction problem.
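Under the model the abstract states (Gaussian likelihood, exponential priors), each element of a factor has a truncated normal full conditional, so one Gibbs sweep over H can be sketched as below. All names, the parameter defaults, and the single-element sweep are illustrative assumptions, not the paper's code:

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_sweep_H(A, W, H, sigma2=0.1, lam=1.0, rng=None):
    """One Gibbs sweep over H for a Bayesian NMF sketch:
    A ~ N(WH, sigma2) elementwise, H[r, j] ~ Exponential(lam).
    The conditional of H[r, j] is a normal times the exponential
    prior, i.e. a normal truncated to [0, inf)."""
    rng = rng or np.random.default_rng(0)
    k, n = H.shape
    for r in range(k):
        w = W[:, r]
        ww = w @ w
        for j in range(n):
            # residual of column j with component r removed
            e = A[:, j] - W @ H[:, j] + w * H[r, j]
            mu = (w @ e - lam * sigma2) / ww   # prior shifts the mean
            sd = np.sqrt(sigma2 / ww)
            H[r, j] = truncnorm.rvs((0.0 - mu) / sd, np.inf, loc=mu,
                                    scale=sd, random_state=rng)
    return H
```

A symmetric sweep over W, alternated with this one, yields posterior samples whose spread provides the uncertainty estimates mentioned above.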
Fast Coordinate Descent Methods with Variable Selection for Nonnegative Matrix Factorization
, 2011
Abstract

Cited by 23 (3 self)
Nonnegative Matrix Factorization (NMF) is an effective dimension reduction method for nonnegative dyadic data, and has proven to be useful in many areas, such as text mining, bioinformatics and image processing. NMF is usually formulated as a constrained nonconvex optimization problem, and many algorithms have been developed for solving it. Recently, a coordinate descent method, called FastHals [3], has been proposed to solve least squares NMF and is regarded as one of the state-of-the-art techniques for the problem. In this paper, we first show that FastHals has an inefficiency in that it uses a cyclic coordinate descent scheme and thus performs unneeded descent steps on unimportant variables. We then present a variable selection scheme that uses the gradient of the objective function to arrive at a new coordinate descent method. Our new method is considerably faster in practice, and we show that it has theoretical convergence guarantees. Moreover, when the solution is sparse, as is often the case in real applications, our new method benefits by selecting important variables to update more often, thus resulting in higher speed. As an example, on the text dataset RCV1, our method is 7 times faster than FastHals, and more than 15 times faster when the sparsity is increased by adding an L1 penalty. We also develop new coordinate descent methods for NMF when error is measured by KL-divergence, applying the Newton method to solve the one-variable subproblems. Experiments indicate that our algorithm for minimizing the KL-divergence is faster than the Lee & Seung multiplicative rule by a factor of 10 on the CBCL image dataset.
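For reference, the cyclic HALS-style coordinate descent baseline the paper improves on updates one row of H (or column of W) at a time in closed form. A minimal sketch, with no variable selection and with names and defaults of my own choosing:

```python
import numpy as np

def nmf_hals(A, k, n_iter=100, seed=0, eps=1e-10):
    """Cyclic coordinate descent for least squares NMF: each rank-one
    component is refit in closed form while the others stay fixed."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        WtA, WtW = W.T @ A, W.T @ W
        for r in range(k):   # row r of H: 1-D nonnegative least squares
            num = WtA[r] - WtW[r] @ H + WtW[r, r] * H[r]
            H[r] = np.maximum(num / (WtW[r, r] + eps), 0.0)
        AHt, HHt = A @ H.T, H @ H.T
        for r in range(k):   # column r of W, symmetrically
            num = AHt[:, r] - W @ HHt[:, r] + HHt[r, r] * W[:, r]
            W[:, r] = np.maximum(num / (HHt[r, r] + eps), 0.0)
    return W, H
```

The inefficiency the abstract points to is visible here: the inner loops sweep every coordinate block regardless of how much it can still improve, which is what a gradient-based variable selection scheme avoids.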
Nonnegativity Constraints in Numerical Analysis
"... A survey of the development of algorithms for enforcing nonnegativity constraints in scientific computation is given. Special emphasis is placed on such constraints in least squares computations in numerical linear algebra and in nonlinear optimization. Techniques involving nonnegative lowrank matr ..."
Abstract

Cited by 20 (2 self)
A survey of the development of algorithms for enforcing nonnegativity constraints in scientific computation is given. Special emphasis is placed on such constraints in least squares computations in numerical linear algebra and in nonlinear optimization. Techniques involving nonnegative low-rank matrix and tensor factorizations are also emphasized. Details are provided for some important classical and modern applications in science and engineering. For completeness, this report also includes an effort toward a literature survey of the various algorithms and applications of nonnegativity constraints in numerical analysis. Key Words: nonnegativity constraints, nonnegative least squares, matrix and tensor factorizations, image processing, optimization.
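As a concrete instance of the nonnegativity-constrained least squares problems the survey emphasizes, SciPy exposes the classical Lawson-Hanson active set method directly:

```python
import numpy as np
from scipy.optimize import nnls

# The classic NNLS problem: min ||Cx - b||_2 subject to x >= 0.
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, -1.0, 1.0])
x, rnorm = nnls(C, b)
# The unconstrained least squares solution here is [2, -1]; the
# constraint zeroes the second coordinate and the solver refits
# over the remaining one, giving x = [1.5, 0].
```

This behavior, clamping a variable at its bound and re-solving over the rest, is exactly the active set mechanics that the NMF algorithms elsewhere on this page build upon.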
Nonnegative factorization and the maximum edge biclique problem
, 2008
Abstract

Cited by 18 (7 self)
Nonnegative Matrix Factorization (NMF) is a data analysis technique which allows compression and interpretation of nonnegative data. NMF became widely studied after the publication of the seminal paper by Lee and Seung (Learning the Parts of Objects by Nonnegative Matrix Factorization, Nature, 1999, vol. 401, pp. 788–791), which introduced an algorithm based on Multiplicative Updates (MU). More recently, another class of methods called Hierarchical Alternating Least Squares (HALS) was introduced that seems to be much more efficient in practice. In this paper, we consider the problem of approximating a not necessarily nonnegative matrix with the product of two nonnegative matrices, which we refer to as Nonnegative Factorization (NF); this is the subproblem that HALS methods implicitly try to solve at each iteration. We prove that NF is NP-hard for any fixed factorization rank, using a reduction to the maximum edge biclique problem. We also generalize the multiplicative updates to NF, which allows us to shed some light on the differences between the MU and HALS algorithms for NMF and give an explanation for the better performance of HALS. Finally, we link stationary points of NF with feasible solutions of the biclique problem to obtain a new type of biclique finding algorithm (based on MU) whose iterations have an algorithmic complexity proportional to the number of edges in the graph, and show that it performs better than comparable existing methods.
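The multiplicative updates (MU) of Lee and Seung referred to above have a compact form for the least squares objective: each factor is rescaled elementwise by the ratio of the negative and positive parts of its gradient, which preserves nonnegativity. The sketch below is the standard textbook form of MU for NMF, not the paper's generalization to NF; the defaults and the small eps safeguard are my additions:

```python
import numpy as np

def nmf_mu(A, k, n_iter=50, seed=0, eps=1e-10):
    """Lee-Seung multiplicative updates for least squares NMF.
    Each update multiplies the factor elementwise by
    (negative gradient part) / (positive gradient part),
    so nonnegative entries stay nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ (H @ H.T) + eps)
    return W, H
```

Comparing this with a HALS-style update makes the paper's point vivid: MU shrinks every entry by a multiplicative factor and can never set one exactly to zero, whereas HALS refits each rank-one component exactly, which is one explanation for its better practical performance.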
Rank Minimization via Online Learning
Abstract

Cited by 16 (2 self)
Minimum rank problems arise frequently in machine learning applications and are notoriously difficult to solve due to the nonconvex nature of the rank objective. In this paper, we present the first online learning approach for the problem of rank minimization of matrices over polyhedral sets. In particular, we present two online learning algorithms for rank minimization: our first algorithm is a multiplicative update method based on a generalized experts framework, while our second algorithm is a novel application of the online convex programming framework (Zinkevich, 2003). In the latter, we flip the role of the decision maker by making the decision maker search over the constraint space instead of feasible points, as is usually the case in online convex programming. A salient feature of our online learning approach is that it allows us to give provable approximation guarantees for the rank minimization problem over polyhedral sets. We demonstrate the effectiveness of our methods on synthetic examples, and on the real-life application of low-rank kernel learning.
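The online convex programming framework of Zinkevich (2003) that the second algorithm builds on is itself simple to sketch: play a point, observe a convex loss, step along its gradient, and project back onto the feasible set. The names and the 1/sqrt(t) step schedule below are illustrative, not the paper's:

```python
import numpy as np

def online_gradient_descent(loss_grads, project, x0, eta=0.1):
    """Projected online gradient descent. `loss_grads` yields, per
    round, a function returning the gradient of that round's convex
    loss at the played point; `project` maps onto the feasible set."""
    x = np.asarray(x0, dtype=float).copy()
    played = [x.copy()]
    for t, grad in enumerate(loss_grads, start=1):
        x = project(x - (eta / np.sqrt(t)) * grad(x))  # step, then project
        played.append(x.copy())
    return played
```

With a decaying step size this scheme has sublinear regret against any fixed feasible point, which is the kind of guarantee the abstract's approximation results are built from.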