Results 1–10 of 77
Generalized nonnegative matrix approximations with Bregman divergences
In: Neural Information Processing Systems, 2005
"... Nonnegative matrix approximation (NNMA) is a recent technique for dimensionality reduction and data analysis that yields a parts based, sparse nonnegative representation for nonnegative input data. NNMA has found a wide variety of applications, including text analysis, document clustering, face/imag ..."
Abstract

Cited by 97 (5 self)
 Add to MetaCart
(Show Context)
Nonnegative matrix approximation (NNMA) is a recent technique for dimensionality reduction and data analysis that yields a parts-based, sparse nonnegative representation for nonnegative input data. NNMA has found a wide variety of applications, including text analysis, document clustering, face/image recognition, language modeling, speech processing and many others. Despite these numerous applications, the algorithmic development for computing the NNMA factors has been relatively deficient. This paper makes algorithmic progress by modeling and solving (using multiplicative updates) new generalized NNMA problems that minimize Bregman divergences between the input matrix and its low-rank approximation. The multiplicative update formulae in the pioneering work by Lee and Seung [11] arise as a special case of our algorithms. In addition, the paper shows how to use penalty functions for incorporating constraints other than nonnegativity into the problem. Further, some interesting extensions to the use of “link” functions for modeling nonlinear relationships are also discussed.
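The Lee and Seung multiplicative updates that the abstract mentions as a special case (the squared-Euclidean instance of the Bregman framework) can be sketched in a few lines of numpy. The function name and defaults here are illustrative, not taken from the paper:

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.

    Squared-Euclidean special case of the Bregman-divergence framework;
    a sketch, not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Elementwise multiplicative factors keep W and H nonnegative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The small `eps` in initializations and denominators guards against division by zero; it does not change the fixed points of the updates in any practical sense.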
Multichannel nonnegative matrix factorization in convolutive mixtures for audio source separation
In: IEEE Trans. Audio, Speech, and Language Processing, 2010
"... We consider inference in a general datadriven objectbased model of multichannel audio data, assumed generated as a possibly underdetermined convolutive mixture of source signals. Each source is given a model inspired from nonnegative matrix factorization (NMF) with the ItakuraSaito divergence, wh ..."
Abstract

Cited by 78 (17 self)
 Add to MetaCart
(Show Context)
We consider inference in a general data-driven object-based model of multichannel audio data, assumed generated as a possibly underdetermined convolutive mixture of source signals. Each source is given a model inspired by nonnegative matrix factorization (NMF) with the Itakura-Saito divergence, which underlies a statistical model of superimposed Gaussian components. We address estimation of the mixing and source parameters using two methods. The first consists of maximizing the exact joint likelihood of the multichannel data using an expectation-maximization algorithm. The second consists of maximizing the sum of individual likelihoods of all channels using a multiplicative update algorithm inspired by NMF methodology. Our decomposition algorithms were applied to stereo music and assessed in terms of blind source separation performance. Index Terms — Multichannel audio, nonnegative matrix factorization, nonnegative tensor factorization, underdetermined convolutive blind source separation.
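As a single-channel illustration of the Itakura-Saito source model (the beta-divergence with beta = 0, applied to a power spectrogram), the commonly used multiplicative update scheme looks like the sketch below. The paper's multichannel EM estimation is considerably more involved; all names here are hypothetical:

```python
import numpy as np

def is_nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Single-channel Itakura-Saito NMF via multiplicative updates.

    V is a nonnegative power spectrogram approximated by W @ H under
    d_IS(x|y) = x/y - log(x/y) - 1. Sketch only; not the paper's
    multichannel algorithm.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        Vh = W @ H + eps
        H *= (W.T @ (V / Vh**2)) / (W.T @ (1.0 / Vh) + eps)
        Vh = W @ H + eps
        W *= ((V / Vh**2) @ H.T) / ((1.0 / Vh) @ H.T + eps)
    return W, H
```

Under the statistical interpretation in the abstract, each rank-one term w_k h_k is the power spectrogram of one Gaussian component, and the IS divergence makes the fit scale-invariant, which suits audio spectra with large dynamic range.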
Fast Local Algorithms for Large Scale Nonnegative Matrix and Tensor Factorizations
2008
"... Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have become prominent techniques for blind sources separation (BSS), analysis of image databases, data mining and other information retrieval and clustering applications. In this paper we propose ..."
Abstract

Cited by 49 (13 self)
 Add to MetaCart
Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have become prominent techniques for blind source separation (BSS), analysis of image databases, data mining and other information retrieval and clustering applications. In this paper we propose a family of efficient algorithms for NMF/NTF, as well as sparse nonnegative coding and representation, that has many potential applications in computational neuroscience, multisensory processing, compressed sensing and multidimensional data analysis. We have developed a class of optimized local algorithms which are referred to as Hierarchical Alternating Least Squares (HALS) algorithms. For these purposes, we have performed sequential constrained minimization on a set of squared Euclidean distances. We then extend this approach to robust cost functions using the Alpha and Beta divergences and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based blind source separation (BSS), not only in the overdetermined case but also in the underdetermined (overcomplete) case (i.e., for a system which has fewer sensors than sources), if the data are sufficiently sparse. The NMF learning rules are extended and generalized to Nth-order nonnegative tensor factorization (NTF). Moreover, these algorithms can be tuned to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the accuracy and computational performance of the developed algorithms, especially with the use of the multilayer hierarchical NMF approach [3].
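For the squared-Euclidean cost, a HALS sweep refines each rank-one term in turn with a closed-form nonnegative update. A minimal sketch (illustrative names, not the paper's code) is:

```python
import numpy as np

def nmf_hals(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Hierarchical Alternating Least Squares (HALS) for Frobenius NMF.

    Each column of W and row of H is updated sequentially via a
    nonnegative least-squares step on the residual. Sketch for the
    squared-Euclidean cost only; the paper also derives alpha/beta
    divergence variants.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        VHt, HHt = V @ H.T, H @ H.T
        for j in range(rank):
            # Closed-form nonnegative update of the j-th basis column.
            W[:, j] = np.maximum(
                eps, W[:, j] + (VHt[:, j] - W @ HHt[:, j]) / (HHt[j, j] + eps))
        WtV, WtW = W.T @ V, W.T @ W
        for j in range(rank):
            # Closed-form nonnegative update of the j-th activation row.
            H[j, :] = np.maximum(
                eps, H[j, :] + (WtV[j, :] - WtW[j, :] @ H) / (WtW[j, j] + eps))
    return W, H
```

Because each sub-problem is solved exactly, HALS typically needs far fewer sweeps than multiplicative updates at a similar per-iteration cost, which is the "fast local" property the title refers to.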
Nonnegative matrix factorization with quasi-Newton optimization
In: Proceedings of the 8th International Conference on Artificial Intelligence and Soft Computing (ICAISC), 2006
"... Abstract. Nonnegative matrix factorization (NMF) is an emerging method with wide spectrum of potential applications in data analysis, feature extraction and blind source separation. Currently, most applications use relative simple multiplicative NMF learning algorithms which were proposed by Lee an ..."
Abstract

Cited by 38 (7 self)
 Add to MetaCart
(Show Context)
Nonnegative matrix factorization (NMF) is an emerging method with a wide spectrum of potential applications in data analysis, feature extraction and blind source separation. Currently, most applications use relatively simple multiplicative NMF learning algorithms, which were proposed by Lee and Seung and are based on minimization of the Kullback-Leibler divergence and the Frobenius norm. Unfortunately, these algorithms are relatively slow and often need a few thousand iterations to reach a local minimum. In order to increase the convergence rate and improve the performance of NMF, we propose to use a more general cost function: the so-called Amari alpha divergence. Taking into account the special structure of the Hessian of this cost function, we derive a relatively simple second-order quasi-Newton method for NMF. The validity and performance of the proposed algorithm have been extensively tested on blind source separation problems, both for signals and images. The performance of the developed NMF algorithm is illustrated for separation of statistically dependent signals and images from their linear mixtures.
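For context, the first-order multiplicative alpha-divergence updates that the paper's quasi-Newton scheme is designed to accelerate can be sketched as below. This is the slower baseline, not the proposed second-order method, and the names and default alpha are hypothetical:

```python
import numpy as np

def alpha_nmf(V, rank, alpha=1.5, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative NMF updates for the Amari alpha divergence.

    First-order baseline sketch; for alpha -> 1 the updates approach the
    Kullback-Leibler multiplicative rules. Not the paper's quasi-Newton
    algorithm.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        R = (V / (W @ H + eps)) ** alpha
        H *= ((W.T @ R) / (W.sum(axis=0)[:, None] + eps)) ** (1.0 / alpha)
        R = (V / (W @ H + eps)) ** alpha
        W *= ((R @ H.T) / (H.sum(axis=1)[None, :] + eps)) ** (1.0 / alpha)
    return W, H
```

The 1/alpha exponent damps the multiplicative step, which is exactly the kind of slow first-order progress that motivates exploiting the Hessian structure as the abstract describes.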
Nonnegative matrix factorization based compensation of music for automatic speech recognition
In: Proc. of Interspeech, Makuhari, 2010
"... This paper proposes to use nonnegative matrix factorization based speech enhancement in robust automatic recognition of mixtures of speech and music. We represent magnitude spectra of noisy speech signals as the nonnegative weighted linear combination of speech and noise spectral basis vectors, th ..."
Abstract

Cited by 35 (11 self)
 Add to MetaCart
(Show Context)
This paper proposes to use nonnegative matrix factorization based speech enhancement in robust automatic recognition of mixtures of speech and music. We represent the magnitude spectra of noisy speech signals as nonnegative weighted linear combinations of speech and noise spectral basis vectors, which are obtained from training corpora of speech and music. We use overcomplete dictionaries consisting of random exemplars of the training data. The method is tested on the Wall Street Journal large-vocabulary speech corpus, artificially corrupted with polyphonic music from the RWC music database. Various music styles and speech-to-music ratios are evaluated. The proposed methods are shown to produce a consistent, significant improvement in recognition performance in comparison with the baseline method. Audio demonstrations of the enhanced signals are available at
Nonnegative tensor factorization using alpha and beta divergences
In: Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'07), 2007
"... In this paper we propose new algorithms for 3D tensor decomposition/factorization with many potential applications, especially in multiway Blind Source Separation (BSS), multidimensional data analysis, and sparse signal/image representations. We derive and compare three classes of algorithms: Multi ..."
Abstract

Cited by 34 (13 self)
 Add to MetaCart
(Show Context)
In this paper we propose new algorithms for 3D tensor decomposition/factorization with many potential applications, especially in multiway Blind Source Separation (BSS), multidimensional data analysis, and sparse signal/image representations. We derive and compare three classes of algorithms: Multiplicative, Fixed-Point Alternating Least Squares (FPALS) and Alternating Interior-Point Gradient (AIPG) algorithms. Some of the proposed algorithms are characterized by improved robustness, efficiency and convergence rates, and can be applied to various distributions of data and additive noise.
Automatic relevance determination in nonnegative matrix factorization
In: SPARS, St-Malo, 2009
"... This paper addresses the problem of estimating the latent dimensionality in nonnegative matrix fatorization (NMF) via automatic relevance determination (ARD). Uncovering the latent dimensionality is necessary for striking the right balance between data fidelity and overfitting. We propose a Bayesian ..."
Abstract

Cited by 33 (4 self)
 Add to MetaCart
(Show Context)
This paper addresses the problem of estimating the latent dimensionality in nonnegative matrix factorization (NMF) via automatic relevance determination (ARD). Uncovering the latent dimensionality is necessary for striking the right balance between data fidelity and overfitting. We propose a Bayesian model for NMF and two algorithms, known as ℓ1- and ℓ2-ARD, each assuming different priors on the basis and the coefficients. The proposed algorithms leverage recent algorithmic advances in NMF with the β-divergence using majorization-minimization (MM) methods. We show, using auxiliary functions, that the cost function decreases monotonically to a local minimum. We demonstrate the efficacy and robustness of our algorithms through experiments on the swimmer dataset.
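The core idea — start with an over-complete factorization and let the model switch off unneeded components — can be illustrated with a much cruder stand-in than the paper's Bayesian ℓ1/ℓ2-ARD: an ℓ1 penalty on the activations followed by energy-based pruning. This sketch is only an analogy, with hypothetical names and thresholds, not the proposed algorithm:

```python
import numpy as np

def nmf_prune_rank(V, rank_max, lam=0.1, n_iter=300, tol=1e-3, eps=1e-9, seed=0):
    """Crude ARD-like rank selection: over-complete sparse NMF + pruning.

    Minimizes ||V - W H||_F^2 + lam * ||H||_1 with multiplicative updates
    (W columns kept unit-norm to fix the scale), then drops components
    whose activation energy collapses. Illustration only; NOT the paper's
    Bayesian l1/l2-ARD algorithm.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank_max)) + eps
    H = rng.random((rank_max, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # l1-penalized update
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        W /= np.linalg.norm(W, axis=0, keepdims=True) + eps
    energy = H.sum(axis=1)
    keep = energy > tol * energy.max()               # prune dead components
    return W[:, keep], H[keep]
```

The Bayesian treatment in the paper replaces the hand-set penalty and threshold with priors whose relevance parameters are estimated from the data, which is what makes the dimensionality choice automatic.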
Families of Alpha-, Beta-, and Gamma-Divergences: Flexible and Robust Measures of Similarities
2010
Nonnegative matrix approximation: algorithms and applications
2006
"... Low dimensional data representations are crucial to numerous applications in machine learning, statistics, and signal processing. Nonnegative matrix approximation (NNMA) is a method for dimensionality reduction that respects the nonnegativity of the input data while constructing a lowdimensional ap ..."
Abstract

Cited by 24 (4 self)
 Add to MetaCart
Low-dimensional data representations are crucial to numerous applications in machine learning, statistics, and signal processing. Nonnegative matrix approximation (NNMA) is a method for dimensionality reduction that respects the nonnegativity of the input data while constructing a low-dimensional approximation. NNMA has been used in a multitude of applications, though without commensurate theoretical development. In this report we describe generic methods for minimizing generalized divergences between the input and its low-rank approximant. Some of our general methods are even extensible to arbitrary convex penalties. Our methods yield efficient multiplicative iterative schemes for solving the proposed problems. We also consider interesting extensions such as the use of penalty functions, nonlinear relationships via “link” functions, weighted errors, and multifactor approximations. We present some experiments as an illustration of our algorithms. For completeness, the report also includes a brief literature survey of the various algorithms and applications of NNMA. Keywords: Nonnegative matrix factorization, weighted approximation, Bregman divergence, multiplicative
Nonnegative Tensor Factorization for Continuous EEG Classification
2007
"... In this paper we present a method for continuous EEG classification, where we employ nonnegative tensor factorization (NTF) to determine discriminative spectral features and use the Viterbi algorithm to continuously classify multiple mental tasks. This is an extension of our previous work on the use ..."
Abstract

Cited by 22 (11 self)
 Add to MetaCart
In this paper we present a method for continuous EEG classification, in which we employ nonnegative tensor factorization (NTF) to determine discriminative spectral features and use the Viterbi algorithm to continuously classify multiple mental tasks. This is an extension of our previous work on the use of nonnegative matrix factorization (NMF) for EEG classification. Numerical experiments with two data sets from the BCI competition confirm the usefulness of the method.