The why and how of nonnegative matrix factorization (2014)

by N Gillis
Results 1 - 7 of 7

Blind Separation of Quasi-Stationary Sources: Exploiting Convex Geometry in Covariance Domain

by Xiao Fu, Wing-kin Ma, Kejun Huang, Nicholas D. Sidiropoulos, 2015
Abstract - Cited by 3 (3 self)
This paper revisits blind source separation of instantaneously mixed quasi-stationary sources (BSS-QSS), motivated by the observation that in certain applications (e.g., speech) there exist time frames during which only one source is active, or locally dominant. Combined with nonnegativity of source powers, this endows the problem with a nice convex geometry that enables elegant and efficient BSS solutions. Local dominance is tantamount to the so-called pure pixel/separability assumption in hyperspectral unmixing/nonnegative matrix factorization, respectively. Building on this link, a very simple algorithm called successive projection algorithm (SPA) is considered for estimating the mixing system in closed form. To complement SPA in the specific BSS-QSS context, an algebraic preprocessing procedure is proposed to suppress short-term source cross-correlation interference. The proposed procedure is simple, effective, and supported by theoretical analysis. Solutions based on volume minimization (VolMin) are also considered. By theoretical analysis, it is shown that VolMin guarantees perfect mixing system identifiability under an assumption more relaxed than (exact) local dominance—which means wider applicability in practice. Exploiting the specific structure of BSS-QSS, a fast VolMin algorithm is proposed for the overdetermined case. Careful simulations using real speech sources showcase the simplicity, efficiency, and accuracy of the proposed algorithms.

Citation Context

...on-negative BSS (nBSS) for image separation, non-negative matrix factorization (NMF) and text mining, the local dominance assumption and its exploitation have also received significant attention [20]–[22]. In these concurrent developments, local dominance is identical to the pure pixel assumption in HU [19] and the separability [23]/sufficient spread [24] conditions in NMF. We should however distinguish h...
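The SPA selection step discussed above (pick the column of maximum norm, then project all columns onto the orthogonal complement of the chosen one) can be sketched in a few lines of numpy. This is a generic illustration under the separability assumption, not the authors' implementation, and all variable names are ours:

```python
import numpy as np

def spa(M, r):
    """Successive Projection Algorithm (sketch): returns r column indices of M
    whose columns act as the vertices, assuming separability holds."""
    R = M.astype(float).copy()
    indices = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # column of largest residual norm
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)  # project residual onto orthogonal complement of u
        indices.append(j)
    return indices
```

In the noiseless separable case with a full-column-rank mixing matrix, this greedy selection provably recovers the "pure" columns exactly, which is why SPA estimates the mixing system in closed form.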

Nonnegative matrix factorization under heavy noise.

by Chiranjib Bhattacharyya, Navin Goyal, Ravindran Kannan, Jagdeep Pani - In Proceedings of the 33rd International Conference on Machine Learning, 2016
Abstract - Cited by 1 (0 self)
Abstract: The Noisy Non-negative Matrix Factorization (NMF) problem is: given a data matrix A (d × n), find non-negative matrices B, C (d × k and k × n, respectively) so that A = BC + N, where N is a noise matrix. Existing polynomial-time algorithms with proven error guarantees require each column N·,j to have ℓ1 norm much smaller than ‖(BC)·,j‖1, which could be very restrictive. In important applications of NMF such as topic modeling, as well as in theoretical noise models (e.g., Gaussian with high σ), almost every column of N violates this condition. We introduce the heavy noise model, which only requires the average noise over large subsets of columns to be small. We initiate a study of noisy NMF under the heavy noise model. We show that our noise model subsumes noise models of theoretical and practical interest (e.g., Gaussian noise of maximum possible σ). We then devise an algorithm, TSVDNMF, which, under certain assumptions on B and C, solves the problem under heavy noise. Our error guarantees match those of previous algorithms. Our running time of O((n + d)²k) is substantially better than the O(n³d) of the previous best. Our assumption on B is weaker than the "Separability" assumption made by all previous results. We provide empirical justification for our assumptions on C. We also provide the first proof of identifiability (uniqueness of B) for noisy NMF which is not based on separability and does not use hard-to-check geometric conditions. Our algorithm outperforms earlier polynomial-time algorithms in both time and error, particularly in the presence of high noise.

Citation Context

...e of high noise. 1. Introduction. Let A be a d × n matrix (where each column, A·,j, is a d-dimensional data point) with non-negative real entries. Exact NMF is the problem of factoring A into the product BC of two non-negative matrices, with k columns in B; k is generally small. So NMF would find the small number of “basis vectors” (columns of B) with each data point a non-negative combination of them. This has led to the applicability of NMF (Gillis, 2014). There has been recent interest in developing polynomial-time algorithms with proven error bounds under specialized assumptions on the data (Arora et al., 2012; Gillis & Luce, 2014; Recht et al., 2012; Rong & Zou, 2015). All such algorithms require the separability assumption, first introduced in (Donoho & Stodden, 2003). An NMF BC is separable if, after a permutation of the rows of A and B, the top k rows of B form a non-singular diagonal matrix D0. Using separability, (Donoho & Stodden, 2003) showed that B is essentially identifiable (i.e., unique) given A. (Arora et al., 2012) observed that i...
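The separability condition described in this context admits a short numerical illustration: when the top k rows of B form the identity (a particular non-singular diagonal D0), those rows of A = BC expose C directly. A hypothetical toy instance, with dimensions and names of our choosing:

```python
import numpy as np

# Separable instance: after the row permutation in the definition above,
# the top k rows of B are the identity, so the top k "anchor" rows of
# A = B @ C reproduce C exactly (up to the diagonal scaling in D0).
rng = np.random.default_rng(1)
k, d, n = 3, 7, 10
B = np.vstack([np.eye(k), rng.random((d - k, k))])  # top k rows = I (separable)
C = rng.random((k, n))
A = B @ C
assert np.allclose(A[:k, :], C)  # anchor rows of A reveal C
```

This is what makes B essentially identifiable under separability: once C is read off the anchor rows, B can be recovered by nonnegative least squares.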

Intersecting Faces: Non-negative Matrix Factorization With New Guarantees

by Rong Ge, James Zou
Abstract - Cited by 1 (0 self)
Abstract: Non-negative matrix factorization (NMF) is a natural model of admixture and is widely used in science and engineering. A plethora of algorithms have been developed to tackle NMF, but due to the non-convex nature of the problem, there is little guarantee on how well these methods work. Recently, a surge of research has focused on a very restricted class of NMFs, called separable NMF, for which provably correct algorithms have been developed. In this paper, we propose the notion of subset-separable NMF, which substantially generalizes the property of separability. We show that subset-separability is a natural necessary condition for the factorization to be unique or to have minimum volume. We develop the Face-Intersect algorithm, which provably and efficiently solves subset-separable NMF under natural conditions, and we prove that our algorithm is robust to small noise. We explore the performance of Face-Intersect on simulations and discuss settings where it empirically outperforms state-of-the-art methods. Our work is a step towards finding provably correct algorithms that solve large classes of NMF problems.

Successive Nonnegative Projection Algorithm for Robust Nonnegative Blind Source Separation

by Nicolas Gillis
Abstract - Cited by 1 (1 self)
Abstract not found

Citation Context

...he index j whose corresponding column of the original matrix M̃ maximizes f is selected. In case of another tie, one of these columns is picked randomly. Gram-Schmidt with column pivoting; see, e.g., [21, 27, 14, 17] and the references therein. Although SPA has many advantages (in particular, it is very fast and rather effective in practice), a drawback is that it requires the matrix W to have rank r. In fact, if...

Sampling Versus Unambiguous Nondeterminism in Communication Complexity

by Thomas Watson, 2014
Abstract - Cited by 1 (0 self)
In this note, we investigate the relationship between the following two communication complexity measures for a 2-party function f: X × Y → {0, 1}: on one hand, sampling a uniformly random 1-entry of the matrix of f such that Alice learns the row index and Bob learns the column index, and on the other hand, unambiguous nondeterminism, which corresponds to partitioning the 1’s of the matrix into combinatorial rectangles. The former complexity measure equals the log of the nonnegative rank of the matrix, and the latter equals the log of the binary rank of the matrix (which is always at least the nonnegative rank). Thus we consider the relationship between these two ranks of 0-1 matrices. We prove that if the nonnegative rank is at most 3 then the two ranks are equal. We also show a separation by exhibiting a matrix with nonnegative rank 4 and binary rank 5, as well as a family of matrices for which the binary rank is 4/3 times the nonnegative rank.
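The inequality stated in the abstract (binary rank is at least the nonnegative rank) holds because a partition of the 1-entries into combinatorial rectangles is itself a nonnegative factorization with 0-1 factors. A toy example of our own construction, not taken from the paper:

```python
import numpy as np

# Two disjoint combinatorial rectangles covering the 1s of a 4x4 0-1 matrix.
# U marks which rows each rectangle touches, V marks which columns; since the
# rectangles are disjoint, A = U @ V is itself 0-1, and U, V give a rank-2
# nonnegative factorization -- so nonnegative rank <= binary rank here.
U = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])  # rectangle row indicators
V = np.array([[1, 1, 0, 0], [0, 0, 1, 1]])      # rectangle column indicators
A = U @ V
assert set(A.ravel()) <= {0, 1}                  # disjointness keeps A binary
assert np.linalg.matrix_rank(A) == 2
```

The interesting direction of the paper is the converse: how much larger the binary rank can be than the nonnegative rank.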

Heuristics for Exact Nonnegative Matrix Factorization

by Arnaud Vandaele, et al., 2014
Abstract not found

Citation Context

...ly all NMF algorithms are iterative: at each step, they aim to improve the current solution. In practice, these algorithms are usually initialized randomly, or with some ad hoc strategies; see, e.g., [8, 19] and the references therein. Comparatively, much less attention has been given in the literature to the development of heuristic algorithms aimed at finding better local minima of the NMF approximatio...
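As a concrete instance of the iterative, randomly initialized algorithms this context refers to, here is a numpy sketch of the classical Lee-Seung multiplicative updates (a standard baseline, not the heuristics proposed in this paper; all names are illustrative):

```python
import numpy as np

def nmf_multiplicative(A, k, iters=200, seed=0):
    """Lee-Seung multiplicative updates for min ||A - W H||_F^2 with random
    initialization; each update is nonincreasing in the objective."""
    rng = np.random.default_rng(seed)
    d, n = A.shape
    W = rng.random((d, k))
    H = rng.random((k, n))
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (A @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H
```

Because the iteration only converges to a local minimum that depends on the starting point, the choice of initialization (and restarts from several random seeds) matters in practice, which is the gap the heuristics above target.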

A rank estimation criterion using an NMF algorithm under an inner dimension condition

by Nam H. Lee, I-jeng Wang, Runze Tang, Michael Rosen, Carey E. Priebe, 2014
Abstract
We introduce a rank selection criterion for non-negative factorization algorithms, for the cases where the rank of the matrix coincides with the inner dimension of the matrix. The criterion is motivated by noting that, provided a unique factorization exists, the factorization is a solution to a fixed-point iteration formula that can be obtained by rewriting non-negative factorization together with the singular value decomposition. We characterize the asymptotic error rate for our fixed-point formula when the non-negative matrix is observed with noise generated according to the so-called random dot product model for graphs.

Citation Context

...call the integer r the inner dimension of the decomposition WH. In general, a non-negative factorizable matrix can be transformed to a form stated in (1) by way of the so-called “pull-back” map (cf. [1]). Given such X, generally speaking, finding the pair (W, H) is known to be an NP-hard problem (cf. [2] and [3]), and even validating the uniqueness of the factorization remains a challenging task (c....


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University