Results 11–20 of 43
Flexible HALS algorithms for sparse nonnegative matrix/tensor factorization
 Proc. of the 2008 IEEE Workshop on Machine Learning for Signal Processing (MLSP)
, 2008
In this paper we propose a family of new algorithms for nonnegative matrix/tensor factorization (NMF/NTF) and sparse nonnegative coding and representation that have many potential applications in computational neuroscience, multi-sensory and multi-dimensional data analysis, and text mining. We have developed a class of local algorithms which extend the Hierarchical Alternating Least Squares (HALS) algorithms we proposed in [1]. For this purpose, we perform simultaneous constrained minimization of a set of robust cost functions called alpha and beta divergences. Our algorithms are locally stable and work well for NMF-based blind source separation (BSS), not only in the overdetermined case but also in the underdetermined (overcomplete) case (i.e., for a system with fewer sensors than sources), provided the data are sufficiently sparse. The NMF learning rules are extended and generalized to Nth-order nonnegative tensor factorization (NTF). Moreover, the new algorithms can be adapted to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the validity and high performance of the developed algorithms, especially when used with the multilayer hierarchical approach [1].
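The column-wise HALS scheme the abstract builds on can be sketched for the plain matrix case under a squared-error objective (the function name and parameters are ours; the paper's alpha/beta-divergence variants and the tensor extension generalize this):

```python
import numpy as np

def hals_nmf(X, rank, n_iter=200, eps=1e-10, seed=0):
    """Minimal HALS for X ~ W @ H with W, H >= 0.

    Each column of W (and each row of H) has a closed-form
    nonnegative least-squares update while the other columns
    are held fixed; we sweep them in turn.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Update W column by column.
        XHt, HHt = X @ H.T, H @ H.T
        for j in range(rank):
            num = XHt[:, j] - W @ HHt[:, j] + W[:, j] * HHt[j, j]
            W[:, j] = np.maximum(num / max(HHt[j, j], eps), eps)
        # Update H row by row (the symmetric step on X^T).
        WtX, WtW = W.T @ X, W.T @ W
        for j in range(rank):
            num = WtX[j, :] - WtW[j, :] @ H + WtW[j, j] * H[j, :]
            H[j, :] = np.maximum(num / max(WtW[j, j], eps), eps)
    return W, H
```

Flooring each entry at a small `eps` rather than at zero is a common guard against zero-locked columns; the multilayer approach mentioned in the abstract stacks several such factorizations.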
Tensor decompositions: New concepts for brain data analysis
 SICE Journal of Control, Measurement, and System Integration
, 2011
On the geometric interpretation of the nonnegative rank
, 2010
The nonnegative rank of a nonnegative matrix is the minimum number of nonnegative rank-one factors needed to reconstruct it exactly. The problem of determining this rank and computing the corresponding nonnegative factors is difficult; however, it has many potential applications, e.g., in data mining, graph theory and computational geometry. In particular, it can be used to characterize the minimal size of any extended reformulation of a given combinatorial optimization program. In this paper, we introduce and study a related quantity, called the restricted nonnegative rank. We show that computing this quantity is equivalent to a problem in polyhedral combinatorics, and fully characterize its computational complexity. This in turn sheds new light on the nonnegative rank problem, and in particular allows us to provide new improved lower bounds based on its geometric interpretation. We apply these results to slack matrices and linear Euclidean distance matrices and obtain counterexamples to two conjectures of Beasley and Laffey: namely, we show that the nonnegative rank of linear Euclidean distance matrices is not necessarily equal to their dimension, and that the rank of a matrix is not always greater than the nonnegative rank of its square.
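For reference, the quantity under study can be written as follows (this is the standard definition, not notation specific to this paper): for a nonnegative matrix $A \in \mathbb{R}^{m \times n}_{\ge 0}$,

```latex
\operatorname{rank}_+(A) \;=\;
\min\Bigl\{\, r \;:\; A = \sum_{i=1}^{r} u_i v_i^{\mathsf T},
\;\; u_i \in \mathbb{R}^{m}_{\ge 0},\; v_i \in \mathbb{R}^{n}_{\ge 0} \Bigr\},
```

which satisfies the basic sandwich $\operatorname{rank}(A) \le \operatorname{rank}_+(A) \le \min(m, n)$; the paper's lower bounds tighten the left-hand side of this inequality via the geometry of nested polytopes.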
Fast Nonnegative Tensor Factorization with an Active-Set-Like Method
Abstract We introduce an efficient algorithm for computing a low-rank nonnegative CANDECOMP/PARAFAC (NNCP) decomposition. In text mining, signal processing, and computer vision, among other areas, imposing nonnegativity constraints on the low-rank factors of matrices and tensors has been shown to be an effective technique providing physically meaningful interpretations. A principled methodology for computing NNCP is alternating nonnegative least squares, in which nonnegativity-constrained least squares (NNLS) problems are solved in each iteration. In this chapter, we propose to solve the NNLS problems using the block principal pivoting method, which overcomes some difficulties of the classical active-set method for NNLS problems with a large number of variables. We introduce techniques to accelerate the block principal pivoting method for multiple right-hand sides, which is typical in NNCP computation. Computational experiments show the state-of-the-art performance of the proposed method.
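The alternating-NNLS framework described above can be sketched for the matrix (second-order) case. Here SciPy's classical active-set `nnls` stands in for the block principal pivoting solver, so this illustrates the alternating framework rather than the paper's accelerated method:

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(X, rank, n_iter=30, seed=0):
    """Alternating nonnegative least squares for X ~ W @ H.

    Each alternation solves its nonnegativity-constrained
    least-squares subproblem exactly, one right-hand side at a
    time; batching these solves is exactly what the chapter's
    multiple-right-hand-side acceleration targets.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = np.zeros((rank, n))
    for _ in range(n_iter):
        # Fix W, solve min_{H >= 0} ||X - W H||_F column by column.
        for j in range(n):
            H[:, j], _ = nnls(W, X[:, j])
        # Fix H, solve the symmetric problem for each row of W.
        for i in range(m):
            W[i, :], _ = nnls(H.T, X[i, :])
    return W, H
```

Because every subproblem is solved to optimality, the objective decreases monotonically across alternations; the cost per iteration is dominated by the per-column NNLS solves.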
The why and how of nonnegative matrix factorization
 Regularization, Optimization, Kernels, and Support Vector Machines, Chapman & Hall/CRC
, 2014
Document Classification Using Nonnegative Matrix Factorization and Underapproximation
 in Proc. of the IEEE International Symposium on Circuits and Systems (ISCAS), 2009
In this study, we use nonnegative matrix factorization (NMF) and nonnegative matrix underapproximation (NMU) approaches to generate feature vectors that can be used to cluster Aviation Safety Reporting System (ASRS) documents obtained from the Distributed National ASAP Archive (DNAA). By preserving nonnegativity, both NMF and NMU facilitate a sum-of-parts representation of the underlying term usage patterns in the ASRS document collection. Both the training and test sets of ASRS documents are parsed and then factored by both algorithms to produce reduced-rank representations of the entire document space. The resulting feature and coefficient matrix factors are used to cluster ASRS documents so that the (known) associated anomalies of training documents are directly mapped to the feature vectors. Dominant features of test documents are then used to generate anomaly relevance scores for those documents. We demonstrate that the approximate solution obtained by NMU using Lagrangian duality can lead to a better sum-of-parts representation and document classification accuracy.
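The dominant-feature labeling step can be sketched independently of how the factors were computed. Everything here is illustrative: the function name, the toy coefficient matrices, and the majority-vote rule are our simplification of the anomaly-relevance scoring the abstract describes:

```python
import numpy as np

def dominant_feature_labels(H_train, train_labels, H_test):
    """Map each factorization feature to the majority label among
    training documents whose coefficient vector peaks on it, then
    label each test document by its own dominant feature.

    H_train, H_test: nonnegative (features x documents) coefficient
    matrices; train_labels: integer label per training document.
    """
    r = H_train.shape[0]
    dom_train = np.argmax(H_train, axis=0)
    feat_to_label = {}
    for j in range(r):
        members = train_labels[dom_train == j]
        # -1 marks a feature no training document peaked on.
        feat_to_label[j] = int(np.bincount(members).argmax()) if members.size else -1
    dom_test = np.argmax(H_test, axis=0)
    return np.array([feat_to_label[j] for j in dom_test])
```

A soft variant would score each test document against every label using its full coefficient vector rather than only the argmax, which is closer in spirit to the relevance scores mentioned above.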
bioNMF: a web-based tool for nonnegative matrix factorization in biology
, 2008
Bounded Matrix Low Rank Approximation
Abstract—Matrix low-rank approximations such as nonnegative matrix factorization (NMF) have been successfully used to solve many data mining tasks. In this paper, we propose a new matrix low-rank approximation called Bounded Matrix Low Rank Approximation (BMA), which imposes a lower and an upper bound on every element of a low-rank matrix that best approximates a given matrix with missing elements. This new approximation models many real-world problems, such as recommender systems, and performs better than other methods, such as the singular value decomposition (SVD) or NMF. We present an efficient algorithm to solve BMA based on a coordinate descent method. BMA differs from NMF in that it imposes bounds on the approximation itself rather than on each of the low-rank factors. We show that our algorithm is scalable for large matrices with missing elements on multi-core systems with low memory. We present substantial experimental results illustrating that the proposed method outperforms the state-of-the-art algorithms for recommender systems such as
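BMA enforces the bounds inside the optimization; as a point of contrast, the naive alternative it improves on can be sketched as alternating ridge-regularized least squares over the observed entries followed by clipping the final predictions. All names, the regularization, and the rating bounds below are illustrative, not the paper's algorithm:

```python
import numpy as np

def masked_als_with_clipping(R, mask, rank, bounds=(1.0, 5.0),
                             n_iter=50, lam=0.1, seed=0):
    """Factor only the observed entries of R (mask == True), then
    clip the reconstruction to [lo, hi]. A baseline sketch: unlike
    BMA, the bounds play no role during the optimization itself."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    I = lam * np.eye(rank)  # ridge term keeps each solve well-posed
    for _ in range(n_iter):
        for j in range(n):                      # update column j of H
            obs = mask[:, j]
            A = W[obs]
            H[:, j] = np.linalg.solve(A.T @ A + I, A.T @ R[obs, j])
        for i in range(m):                      # update row i of W
            obs = mask[i, :]
            B = H[:, obs].T
            W[i, :] = np.linalg.solve(B.T @ B + I, B.T @ R[i, obs])
    return np.clip(W @ H, *bounds)
```

Clipping after the fact can leave the factors fighting the bounds; keeping every element of the approximation feasible during coordinate descent is exactly the difference BMA introduces.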
Multicomponent Analysis: Blind Extraction of Pure Components Mass Spectra using Sparse Component Analysis.
, 2009
The paper presents sparse component analysis (SCA)-based blind decomposition of mixtures of mass spectra into pure components, wherein the number of mixtures is less than the number of pure components. Standard solutions of the related blind source separation (BSS) problem published in the open literature require the number of mixtures to be greater than or equal to the unknown number of pure components. Specifically, we have demonstrated experimentally the capability of SCA to blindly extract the mass spectra of five pure components from only two mixtures. Two approaches to SCA are tested: the first based on ℓ1-norm minimization implemented through linear programming, and the second implemented through multilayer hierarchical alternating least squares nonnegative matrix factorization with sparseness constraints imposed on the pure components' spectra. In contrast to many existing blind decomposition methods, no a priori information about the number of pure components is required; it is estimated from the mixtures, together with the concentration matrix of the pure components, using a robust data clustering algorithm. The proposed methodology can be implemented as part of software packages used for the analysis of mass spectra and identification of chemical compounds.
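The first SCA route, ℓ1-norm minimization by linear programming, reduces to a plain LP when the sources are nonnegative (as mass spectra are): the ℓ1 norm of a nonnegative vector is just the sum of its entries. A sketch with SciPy, on toy dimensions rather than spectral data:

```python
import numpy as np
from scipy.optimize import linprog

def l1_sparse_sources(A, b):
    """Solve min ||x||_1 s.t. A x = b, x >= 0.

    A is the (mixtures x sources) mixing matrix with more columns
    than rows, b one observed mixture sample; sparsity of x is what
    makes the underdetermined system recoverable.
    """
    n = A.shape[1]
    res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n)
    if not res.success:
        raise RuntimeError(res.message)
    return res.x
```

In the actual method this LP is solved per spectral point once the mixing (concentration) matrix has been estimated by clustering; the LP returns the sparsest-in-ℓ1 nonnegative source profile consistent with the mixtures.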
Nonnegative multiple matrix factorization
 in IJCAI
, 2013
Nonnegative Matrix Factorization (NMF) is a traditional unsupervised machine learning technique for decomposing a matrix into a set of bases and coefficients under the nonnegative constraint. NMF with sparse constraints is also known for extracting reasonable components from noisy data. However, NMF tends to give undesired results in the case of highly sparse data, because the information included in the data is insufficient for the decomposition. Our key idea is that this problem can be eased if complementary data are available that we can integrate into the estimation of the bases and coefficients. In this paper, we propose a novel matrix factorization method called Nonnegative Multiple Matrix Factorization (NM2F), which utilizes complementary data as auxiliary matrices that share the row or column indices of the target matrix. The data sparseness problem is alleviated by decomposing the target and auxiliary matrices simultaneously, since the auxiliary matrices provide information about the bases and coefficients. We formulate NM2F as a generalization of NMF, and then present a parameter estimation procedure derived from the multiplicative update rule. We examined NM2F in both synthetic and real data experiments; the effect of the auxiliary matrices appeared as improved NM2F performance. We also confirmed that the bases NM2F obtained from the real data were intuitive and reasonable, thanks to the nonnegative constraint.
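The multiplicative update rule that the NM2F estimation procedure generalizes is the classical NMF one (Lee-Seung style, for the Frobenius objective). A minimal sketch of that baseline only, not the NM2F updates themselves:

```python
import numpy as np

def nmf_mu(X, rank, n_iter=1000, eps=1e-9, seed=0):
    """Plain multiplicative-update NMF for X ~ W @ H, W, H >= 0.

    Each factor is rescaled elementwise by a ratio of gradients,
    so nonnegativity is preserved automatically; NM2F derives
    analogous ratios that also sum contributions from the
    auxiliary matrices sharing W's rows or H's columns.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The small `eps` in the denominators guards against division by zero; since every update is a nonnegative rescaling, no projection step is needed.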