Results 1 - 10 of 38
Joint Bayesian Endmember Extraction and Linear Unmixing for Hyperspectral Imagery
"... Abstract—This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown e ..."
Abstract
-
Cited by 67 (29 self)
Abstract—This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown endmember spectra is conducted in a unified manner by generating the posterior distribution of abundances and endmember parameters under a hierarchical Bayesian model. This model assumes conjugate prior distributions for these parameters, accounts for nonnegativity and full-additivity constraints, and exploits the fact that the endmember proportions lie on a lower dimensional simplex. A Gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution. This sampler generates samples distributed according to the posterior distribution and estimates the unknown parameters using these generated samples. The accuracy of the joint Bayesian estimator is illustrated by simulations conducted on synthetic and real AVIRIS images. Index Terms—Bayesian inference, endmember extraction, hyperspectral imagery, linear spectral unmixing, MCMC methods.
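As a quick illustration of the linear mixing model this abstract builds on, the following sketch simulates pixels as convex combinations of endmember spectra plus Gaussian noise; the dimensions, noise level, and variable names are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of the linear mixing model: each pixel is a convex
# combination of R endmember spectra plus white Gaussian noise.
rng = np.random.default_rng(0)
L, R, n_pixels = 224, 3, 500                    # spectral bands, endmembers, pixels (assumed)

M = rng.uniform(0.0, 1.0, size=(L, R))          # endmember spectra (columns)
A = rng.dirichlet(np.ones(R), size=n_pixels).T  # abundances: nonnegative, sum to one
sigma = 0.01                                    # assumed noise level
Y = M @ A + sigma * rng.standard_normal((L, n_pixels))  # observed pixels

# Sanity check of the nonnegativity and full-additivity constraints.
assert np.all(A >= 0) and np.allclose(A.sum(axis=0), 1.0)
```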
SVD based initialization: A head start for nonnegative matrix factorization
- PATTERN RECOGNITION
, 2007
"... ..."
Bayesian non-negative matrix factorization
- in International Conference on Independent Component Analysis and Signal Separation
, 2009
"... Abstract. We present a Bayesian treatment of non-negative matrix fac-torization (NMF), based on a normal likelihood and exponential priors, and derive an efficient Gibbs sampler to approximate the posterior den-sity of the NMF factors. On a chemical brain imaging data set, we show that this improves ..."
Abstract
-
Cited by 28 (1 self)
Abstract. We present a Bayesian treatment of non-negative matrix factorization (NMF), based on a normal likelihood and exponential priors, and derive an efficient Gibbs sampler to approximate the posterior density of the NMF factors. On a chemical brain imaging data set, we show that this improves interpretability by providing uncertainty estimates. We discuss how the Gibbs sampler can be used for model order selection by estimating the marginal likelihood, and compare with the Bayesian information criterion. For computing the maximum a posteriori estimate we present an iterated conditional modes algorithm that rivals existing state-of-the-art NMF algorithms on an image feature extraction problem.
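To make the conditional structure concrete, here is a minimal sketch of one Gibbs sweep over the mixing matrix under a Gaussian likelihood and exponential priors, where each entry has a truncated-normal full conditional. The hyperparameters and function name are assumptions; the paper's actual sampler (and its model order selection machinery) is more elaborate.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_sweep_A(Y, A, X, sigma2=0.01, lam=1.0):
    """One Gibbs sweep over the entries of A for Y ~ N(AX, sigma2*I) with
    independent Exponential(lam) priors on the entries of A.  Only a sketch
    of the conditional structure: sigma2 and lam are assumed hyperparameters
    and the symmetric sweep over X is omitted."""
    M, R = A.shape
    for i in range(M):
        for r in range(R):
            x = X[r, :]
            # residual of row i with the r-th component removed
            e = Y[i, :] - A[i, :] @ X + A[i, r] * x
            prec = x @ x                          # likelihood curvature for this entry
            var = sigma2 / prec
            mean = (e @ x - sigma2 * lam) / prec  # exponential prior shifts the mean
            std = np.sqrt(var)
            # the full conditional is a normal truncated to [0, inf)
            A[i, r] = truncnorm.rvs((0.0 - mean) / std, np.inf, loc=mean, scale=std)
    return A
```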
Nonnegative Matrix Factorization with Constrained Second Order Optimization
, 2007
"... Nonnegative Matrix Factorization (NMF) solves the following problem: find nonnegative matrices A ∈ R M×R X ∈ R R×T + such that Y ∼ = AX, given only Y ∈ R M×T and the assigned index R. This method has found a wide spectrum of applications in signal and image processing, such as blind source separati ..."
Abstract
-
Cited by 25 (8 self)
Nonnegative Matrix Factorization (NMF) solves the following problem: find nonnegative matrices A ∈ R_+^{M×R} and X ∈ R_+^{R×T} such that Y ≈ AX, given only Y ∈ R^{M×T} and the assigned index R. This method has found a wide spectrum of applications in signal and image processing, such as blind source separation, spectra recovering, pattern recognition, segmentation or clustering. Such a factorization is usually performed with an alternating gradient descent technique that is applied to the squared Euclidean distance or Kullback-Leibler divergence. This approach has been used in the widely known Lee-Seung NMF algorithms that belong to a class of multiplicative iterative algorithms. It is well known that these algorithms, in spite of their low complexity, are slowly convergent, give only a positive solution (not nonnegative), and can easily fall into local minima of a non-convex cost function. In this paper, we propose to take advantage of the second-order terms of a cost function to overcome the disadvantages of gradient (multiplicative) algorithms. First, a projected quasi-Newton method is presented, where a regularized Hessian with the Levenberg-Marquardt approach is inverted with the Q-less QR decomposition. Since the matrices A and/or X are usually sparse, a more sophisticated hybrid approach based on the Gradient Projection Conjugate Gradient (GPCG) algorithm, which was invented by Moré and Toraldo, is adapted for NMF. The Gradient Projection (GP) method is exploited to find zero-value (active) components, and then the Newton steps are taken only to compute positive (inactive) components with the Conjugate Gradient (CG) method. As a cost function, we used the α-divergence, which unifies many well-known cost functions. We applied our new NMF method to a Blind Source Separation (BSS) problem with mixed signals and images. The results demonstrate the high robustness of our method.
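The second-order idea can be sketched for the simpler squared Euclidean cost: one projected Newton step on A with a Levenberg-Marquardt-regularized Hessian. This deliberately replaces the paper's α-divergence cost and Q-less QR solve with a plain Gaussian cost and a dense linear solve, so it is an assumption-laden illustration rather than the paper's algorithm.

```python
import numpy as np

def projected_newton_A(Y, A, X, lam=1e-3):
    """One projected (quasi-)Newton step on A for 0.5*||Y - AX||_F^2.
    Simplified sketch: Gaussian cost instead of the alpha-divergence,
    plain solve instead of a Q-less QR; lam is an assumed damping weight."""
    G = (A @ X - Y) @ X.T                     # gradient w.r.t. A
    H = X @ X.T + lam * np.eye(X.shape[0])    # Levenberg-Marquardt regularized Hessian
    step = np.linalg.solve(H, G.T).T          # Newton direction, applied row-wise
    return np.maximum(A - step, 0.0)          # project back onto the nonnegative orthant
```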
Enhancing hyperspectral image unmixing with spatial correlations
, 2011
"... This paper describes a new algorithm for hyperspectral image unmixing. Most unmixing algorithms proposed in the literature do not take into account the possible spatial correlations between the pixels. In this paper, a Bayesian model is introduced to exploit these correlations. The image to be unmi ..."
Abstract
-
Cited by 23 (13 self)
This paper describes a new algorithm for hyperspectral image unmixing. Most unmixing algorithms proposed in the literature do not take into account the possible spatial correlations between the pixels. In this paper, a Bayesian model is introduced to exploit these correlations. The image to be unmixed is assumed to be partitioned into regions (or classes) where the statistical properties of the abundance coefficients are homogeneous. A Markov random field is then proposed to model the spatial dependencies between the pixels within any class. Conditionally upon a given class, each pixel is modeled by using the classical linear mixing model with additive white Gaussian noise. For this model, the posterior distributions of the unknown parameters and hyperparameters allow the parameters of interest to be inferred. These parameters include the abundances for each pixel, the means and variances of the abundances for each class, as well as a classification map indicating the classes of all pixels in the image. To overcome the complexity of the posterior distribution, we consider a Markov chain Monte Carlo method that generates samples asymptotically distributed according to the posterior. The generated samples are then used for parameter and hyperparameter estimation. The accuracy of the proposed algorithms is illustrated on synthetic and real data.
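A minimal way to picture the spatial prior is the Potts-type energy of a class-label map over 4-neighbourhoods, sketched below; the granularity parameter beta and the exact form of the field are assumptions rather than the paper's specification.

```python
import numpy as np

def potts_energy(labels, beta=1.0):
    """Potts-style energy of a 2-D class-label map with a 4-neighbourhood.
    Lower energy means more spatially homogeneous classes; beta is an
    assumed granularity hyperparameter, used here only for illustration."""
    same_right = labels[:, :-1] == labels[:, 1:]   # horizontally adjacent pairs
    same_down = labels[:-1, :] == labels[1:, :]    # vertically adjacent pairs
    return -beta * (np.sum(same_right) + np.sum(same_down))
```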
MULTILAYER NONNEGATIVE MATRIX FACTORIZATION USING PROJECTED GRADIENT APPROACHES
, 2007
"... The most popular algorithms for Nonnegative Matrix Factorization (NMF) belong to a class of multiplicative Lee-Seung algorithms which have usually relative low complexity but are characterized by slow-convergence and the risk of getting stuck to in local minima. In this paper, we present and compare ..."
Abstract
-
Cited by 14 (5 self)
The most popular algorithms for Nonnegative Matrix Factorization (NMF) belong to a class of multiplicative Lee-Seung algorithms, which usually have relatively low complexity but are characterized by slow convergence and the risk of getting stuck in local minima. In this paper, we present and compare the performance of additive algorithms based on three different variations of a projected gradient approach. Additionally, we discuss a novel multilayer approach to NMF algorithms combined with a multi-start initialization procedure, which, in general, considerably improves the performance of all the NMF algorithms. We demonstrate that this approach (the multilayer system with projected gradient algorithms) can usually give much better performance than standard multiplicative algorithms, especially if the data are ill-conditioned, badly scaled, and/or the number of observations is only slightly greater than the number of nonnegative hidden components. Our new implementations of NMF are demonstrated with simulations performed on Blind Source Separation (BSS) data.
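The multilayer cascade can be sketched as repeatedly factorizing the previous layer's coefficient matrix and accumulating the product of the layer mixing matrices. The inner solver below is a plain Lipschitz-step projected-gradient NMF, and the layer count and names are assumptions, not the paper's exact scheme.

```python
import numpy as np

def pg_nmf(Y, R, n_iter=200, rng=None):
    """Plain projected-gradient NMF for Y ~ A X (nonnegative), used here
    only as the inner solver of the multilayer cascade sketched below."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((Y.shape[0], R))
    X = rng.random((R, Y.shape[1]))
    for _ in range(n_iter):
        # step sizes taken from the Lipschitz constants of each subproblem
        X -= (A.T @ (A @ X - Y)) / max(np.linalg.norm(A.T @ A, 2), 1e-12)
        X = np.maximum(X, 0.0)
        A -= ((A @ X - Y) @ X.T) / max(np.linalg.norm(X @ X.T, 2), 1e-12)
        A = np.maximum(A, 0.0)
    return A, X

def multilayer_nmf(Y, R, n_layers=3):
    """Multilayer NMF sketch: factor Y ~ A1 X1, then X1 ~ A2 X2, ..., so the
    final mixing matrix is the product A1 A2 ... AL.  Layer count and inner
    solver are assumptions made for illustration."""
    A_total, X = np.eye(Y.shape[0]), Y
    for _ in range(n_layers):
        A_l, X = pg_nmf(X, R)
        A_total = A_total @ A_l
    return A_total, X
```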
Reconstruction of reflectance spectra using robust nonnegative matrix factorization
- IEEE Transactions on Signal Processing
, 2006
"... Abstract—In this correspondence, we present a robust statistics-based nonnegative matrix factorization (RNMF) approach to recover the mea-surements in reflectance spectroscopy. The proposed algorithm is based on the minimization of a robust cost function and yields two equations up-dated alternative ..."
Abstract
-
Cited by 11 (2 self)
Abstract—In this correspondence, we present a robust statistics-based nonnegative matrix factorization (RNMF) approach to recover the measurements in reflectance spectroscopy. The proposed algorithm is based on the minimization of a robust cost function and yields two equations that are updated alternately. Unlike other linear representations, such as principal component analysis, the RNMF technique is resistant to outliers and generates nonnegative basis functions, which balance the logical attractiveness of measurement functions against their physical feasibility. Experimental results on a spectral library of reflectance spectra are presented to illustrate the much improved performance of the RNMF approach. Index Terms—Nonnegative matrix factorization, reflectance spectra, robust statistics.
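One generic robust-statistics variant, shown below as a sketch, down-weights large residuals with Huber-type weights inside weighted multiplicative updates; this is a stand-in for the correspondence's actual robust cost, and delta, the weighting rule, and the function name are assumptions.

```python
import numpy as np

def robust_nmf(Y, R, delta=0.1, n_iter=200, eps=1e-9, rng=None):
    """Robust NMF sketch: iteratively reweighted multiplicative updates in
    which residuals larger than delta receive Huber-style down-weighting.
    A generic robust-statistics illustration, not the exact RNMF cost."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((Y.shape[0], R))
    X = rng.random((R, Y.shape[1]))
    for _ in range(n_iter):
        resid = Y - A @ X
        W = np.minimum(1.0, delta / (np.abs(resid) + eps))   # Huber-type weights
        # weighted Lee-Seung multiplicative updates
        A *= ((W * Y) @ X.T) / ((W * (A @ X)) @ X.T + eps)
        X *= (A.T @ (W * Y)) / (A.T @ (W * (A @ X)) + eps)
    return A, X
```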
Nonnegative matrix factorization with temporal smoothness and/or spatial decorrelation constraints
- Laboratory for Advanced Brain Signal Processing, RIKEN, Tech. Rep
, 2005
"... Approximate nonnegative matrix factorization (NMF) is an emerging technique with a wide spectrum of potential applications in biomedical and neurophysiological data analysis. Currently, the most popular algorithms for NMF are those proposed by Lee and Seung. However, most of the existing NMF algorit ..."
Abstract
-
Cited by 10 (2 self)
Approximate nonnegative matrix factorization (NMF) is an emerging technique with a wide spectrum of potential applications in biomedical and neurophysiological data analysis. Currently, the most popular algorithms for NMF are those proposed by Lee and Seung. However, most of the existing NMF algorithms do not provide uniqueness of the solution, and the factorized components are often difficult to interpret. The key open issue is to find nonnegative components and corresponding basis vectors that have clear physical or physiological interpretations. Since temporally smooth structures are important for hidden components in many applications, here we propose two novel constraints and derive new multiplicative learning rules that allow us to estimate components that have physiological meaning. Specifically, we present a novel and promising application of the proposed NMF algorithm to early detection of Alzheimer's disease using EEG recordings.
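A temporal-smoothness constraint can be illustrated by adding a squared first-difference penalty on the component time courses to the Euclidean NMF cost, as in the projected-gradient half-step below; the penalty form and the weight alpha are assumptions made for illustration, not the report's multiplicative rules.

```python
import numpy as np

def smooth_X_update(Y, A, X, alpha=0.5, step=None):
    """One projected-gradient update of X for
    0.5*||Y - AX||_F^2 + 0.5*alpha*||X D||_F^2, where D takes first
    differences along time.  A sketch of attaching a temporal-smoothness
    penalty to an NMF update; alpha and the penalty form are assumed."""
    T = X.shape[1]
    D = np.eye(T, T - 1) - np.eye(T, T - 1, k=-1)    # first-difference operator
    grad = A.T @ (A @ X - Y) + alpha * X @ (D @ D.T)
    if step is None:
        # Lipschitz-based step size for the combined quadratic cost
        step = 1.0 / (np.linalg.norm(A.T @ A, 2) + alpha * np.linalg.norm(D @ D.T, 2))
    return np.maximum(X - step * grad, 0.0)
```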
Nonnegative Matrix Factorization with Quadratic Programming
, 2006
"... Nonnegative Matrix Factorization (NMF) solves the following problem: find such nonnegative matrices A ∈ R I×J + and X ∈ R J×K + that Y ∼ = AX, given only Y ∈ R I×K and the assigned index J (K>> I ≥ J). Basically, the factorization is achieved by alternating minimization of a given cost functi ..."
Abstract
-
Cited by 9 (2 self)
Nonnegative Matrix Factorization (NMF) solves the following problem: find nonnegative matrices A ∈ R_+^{I×J} and X ∈ R_+^{J×K} such that Y ≈ AX, given only Y ∈ R^{I×K} and the assigned index J (K >> I ≥ J). Basically, the factorization is achieved by alternating minimization of a given cost function subject to nonnegativity constraints. In this paper, we propose to use Quadratic Programming (QP) to solve the minimization problems. The Tikhonov-regularized squared Euclidean cost function is extended with a logarithmic barrier function (which enforces the nonnegativity constraints), and then, using a second-order Taylor expansion, a QP problem is formulated. This problem is solved with a trust-region subproblem algorithm. The numerical tests are performed on blind source separation problems.
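For illustration only, the alternating structure can be sketched with the X half-step solved column by column as a Tikhonov-regularized nonnegative least-squares problem via scipy's NNLS routine, in place of the paper's log-barrier QP and trust-region solver; lam and the function name are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def update_X_qp(Y, A, lam=1e-2):
    """X half-step of Y ~ AX, solved column by column as Tikhonov-regularized
    nonnegative least squares.  This swaps the log-barrier QP / trust-region
    machinery for scipy's NNLS purely for illustration; lam is assumed."""
    J = A.shape[1]
    # augment the system so ||Y_k - A x||^2 + lam*||x||^2 is minimized
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(J)])
    X = np.zeros((J, Y.shape[1]))
    for k in range(Y.shape[1]):
        y_aug = np.concatenate([Y[:, k], np.zeros(J)])
        X[:, k], _ = nnls(A_aug, y_aug)
    return X
```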