Results 1–9 of 9
Phase diagram and approximate message passing for blind calibration and dictionary learning. arXiv:1301.5898, 2013
Cited by 10 (2 self)
Abstract: We consider dictionary learning and blind calibration for signals and matrices created from a random ensemble. We study the mean-squared error in the limit of large signal dimension using the replica method and unveil the appearance of phase transitions delimiting the impossible, possible-but-hard, and possible inference regions. We also introduce an approximate message passing algorithm that asymptotically matches the theoretical performance, and show through numerical tests that, for the calibration problem, it performs very well at tractable system sizes.
Hyperspectral image unmixing via bilinear generalized approximate message passing. Proc. SPIE, 2013
Cited by 5 (3 self)
Abstract: In hyperspectral unmixing, the objective is to decompose an electromagnetic spectral dataset, measured over M spectral bands and T pixels, into N constituent material spectra (or “endmembers”) with corresponding spatial abundances. In this paper, we propose a novel approach to hyperspectral unmixing (i.e., joint estimation of endmembers and abundances) based on loopy belief propagation. In particular, we employ the bilinear generalized approximate message passing algorithm (BiG-AMP), a recently proposed belief-propagation-based approach to matrix factorization, in a “turbo” framework that enables the exploitation of spectral coherence in the endmembers, as well as spatial coherence in the abundances. In conjunction, we propose an expectation-maximization (EM) technique that can be used to automatically tune the prior statistics assumed by turbo BiG-AMP. Numerical experiments on synthetic and real-world data confirm the state-of-the-art performance of our approach.
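BiG-AMP itself involves message-passing updates too involved to reproduce here; as a minimal illustration of the underlying bilinear model Y ≈ S·A (endmembers times abundances), the sketch below fits it with classical multiplicative NMF updates instead. The function name and parameters are illustrative assumptions, not from the paper.

```python
import numpy as np

def nmf_unmix(Y, N, iters=500, seed=0, eps=1e-9):
    """Fit the bilinear unmixing model Y ~= S @ A with nonnegative factors
    using classical multiplicative updates (Lee-Seung), as a crude
    stand-in for BiG-AMP. S: M x N endmember spectra; A: N x T abundances.
    (A real unmixer would also enforce sum-to-one abundances per pixel.)"""
    rng = np.random.default_rng(seed)
    M, T = Y.shape
    S = rng.random((M, N)) + 0.1        # nonnegative random init
    A = rng.random((N, T)) + 0.1
    for _ in range(iters):
        A *= (S.T @ Y) / (S.T @ S @ A + eps)   # abundance update
        S *= (Y @ A.T) / (S @ A @ A.T + eps)   # endmember update
    return S, A
```

The multiplicative updates keep both factors nonnegative by construction and monotonically decrease the Frobenius fit error, which is enough to demonstrate the joint estimation of endmembers and abundances the abstract describes.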
Recovery of Low Rank Matrices Under Affine Constraints via a Smoothed Rank Function, 2014
Cited by 2 (0 self)
Abstract: In this paper, the problem of matrix rank minimization under affine constraints is addressed. The state-of-the-art algorithms can recover matrices with a rank much less than what is sufficient for the uniqueness of the solution of this optimization problem. We propose an algorithm based on a smooth approximation of the rank function, which practically improves recovery limits on the rank of the solution. This approximation leads to a non-convex program; thus, to avoid getting trapped in local solutions, we use the following scheme. Initially, a rough approximation of the rank function subject to the affine constraints is optimized. As the algorithm proceeds, until reaching the desired accuracy, finer approximations of the rank are successively optimized while the solver is initialized with the solution of the previous approximation. On the theoretical side, benefiting from the spherical section property, we show that the sequence of solutions of the approximating programs converges to the minimum-rank solution. On the experimental side, we show that the proposed algorithm, termed SRF (Smoothed Rank Function), can recover matrices that are unique solutions of the rank minimization problem yet are not recoverable by nuclear norm minimization. Furthermore, in completing partially observed matrices, the accuracy of SRF is considerably and consistently better than that of several well-known algorithms when the number of revealed entries is close to the minimum number of parameters that uniquely represent a low-rank matrix.
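The graduated scheme described above can be sketched for the matrix-completion special case. This is a rough reading of the SRF idea, not the authors' code: rank(X) is approximated by n − Σᵢ exp(−σᵢ²/2δ²) over singular values σᵢ, a shrinkage step descends on this surrogate, observed entries are re-imposed as the affine constraint, and δ is decreased with warm starts. Parameter values are illustrative assumptions.

```python
import numpy as np

def srf_complete(Y, mask, n_deltas=12, decay=0.7, inner=40, mu=1.0):
    """Smoothed-rank-function sketch for matrix completion.
    Each singular value is shrunk by s * (1 - exp(-s^2 / (2 d^2))):
    values well below d are driven to zero, values well above d are
    kept. d starts large and shrinks (graduated non-convexity),
    warm-starting each stage from the previous solution."""
    X = np.where(mask, Y, 0.0)                       # feasible starting point
    d = np.linalg.svd(X, compute_uv=False)[0]        # initial delta: top singular value
    for _ in range(n_deltas):
        for _ in range(inner):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s = s * (1.0 - mu * np.exp(-s ** 2 / (2 * d ** 2)))  # surrogate-rank shrink
            X = (U * s) @ Vt
            X = np.where(mask, Y, X)                 # project onto affine constraints
        d *= decay                                   # finer rank approximation
    return X
```

The outer loop is the coarse-to-fine refinement the abstract describes: each δ stage is initialized with the previous stage's solution.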
A PseudoBayesian Algorithm for Robust PCA
Abstract: Commonly used in many applications, robust PCA represents an algorithmic attempt to reduce the sensitivity of classical PCA to outliers. The basic idea is to learn a decomposition of some data matrix of interest into low-rank and sparse components, the latter representing unwanted outliers. Although the resulting problem is typically NP-hard, convex relaxations provide a computationally expedient alternative with theoretical support. However, in practical regimes these performance guarantees break down, and a variety of non-convex alternatives, including Bayesian-inspired models, have been proposed to boost estimation quality. Unfortunately, without additional a priori knowledge, none of these methods can significantly expand the critical operational range such that exact principal subspace recovery is possible. Into this mix we propose a novel pseudo-Bayesian algorithm that explicitly compensates for design weaknesses in many existing non-convex approaches, leading to state-of-the-art performance with a sound analytical foundation.
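For reference, the convex-relaxation baseline the abstract contrasts against (principal component pursuit: minimize ‖L‖₊ + λ‖S‖₁ subject to L + S = M) can be solved with a standard inexact augmented-Lagrangian loop. The sketch below follows that well-known recipe, not the paper's pseudo-Bayesian method; constants are common defaults, not from the paper.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    """Soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_pcp(M, iters=300, tol=1e-7):
    """Inexact augmented-Lagrangian solver for principal component pursuit:
    min ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))          # standard PCP weight
    mu = 1.25 / np.linalg.norm(M, 2)        # penalty parameter init
    rho, mu_bar = 1.5, mu * 1e7             # penalty growth and cap
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                    # dual variable
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)         # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)      # sparse update
        R = M - L - S                             # constraint residual
        Y = Y + mu * R
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

On easy instances (incoherent low-rank part, few outliers) this baseline already recovers the decomposition exactly; the paper's point is about expanding the regime where such recovery is possible.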
Approximate message passing algorithms for . . . , 2014
Abstract: Recent developments in compressive sensing (CS), combined with increasing demands for effective high-dimensional inference techniques across a variety of disciplines, have motivated extensive research into algorithms exploiting various notions of parsimony, including sparsity and low-rank constraints. In this dissertation, we extend the generalized approximate message passing (GAMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of CS, to handle several classes of bilinear inference problems. First, we consider a general form of noisy CS where there is uncertainty in the measurement matrix as well as in the measurements. Matrix uncertainty is motivated by practical cases in which there are imperfections or unknown calibration parameters in the signal acquisition hardware. While previous work has focused on analyzing and extending classical CS algorithms like the LASSO and Dantzig selector for this problem setting, we propose a new algorithm called Matrix Uncertain GAMP (MU-GAMP) whose goal is minimization of the mean-squared error of the signal estimates in the presence of these uncertainties, with …
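As background for the GAMP extensions discussed above, a minimal AMP loop for ordinary sparse CS (known measurement matrix, soft-thresholding denoiser) looks roughly as follows. The threshold rule and constants are common heuristic choices, not taken from the dissertation.

```python
import numpy as np

def soft(v, t):
    """Soft threshold: elementwise shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(y, A, iters=60, alpha=1.4):
    """AMP for y = A x + w with sparse x and soft-threshold denoising.
    The (nnz/m) * z term is the Onsager correction, which keeps the
    effective noise in the pseudo-data r approximately Gaussian across
    iterations."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                          # pseudo-data: x plus ~Gaussian noise
        tau = alpha * np.sqrt(np.mean(z ** 2))   # threshold tracks residual level
        x = soft(r, tau)
        z = y - A @ x + z * (np.count_nonzero(x) / m)  # Onsager-corrected residual
    return x
```

The matrix-uncertainty setting the dissertation addresses replaces the exact A here with A + perturbation; plain AMP has no mechanism for that, which is the gap MU-GAMP targets.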
From Denoising to Compressed Sensing, 2014
Abstract: A denoising algorithm seeks to remove perturbations or errors from a signal. The last three decades have seen extensive research devoted to this arena, and as a result, today’s denoisers are highly optimized algorithms that effectively remove large amounts of additive white Gaussian noise. A compressive sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: how can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop a denoising-based approximate message passing (D-AMP) algorithm that is capable of high-performance reconstruction. We demonstrate that, for an appropriate choice of denoiser, D-AMP offers state-of-the-art CS recovery performance for natural images. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A critical insight in our approach is the use of an appropriate Onsager correction term in the D-AMP iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
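The D-AMP recipe sketched here plugs a black-box denoiser into AMP and estimates the divergence needed for the Onsager term with a Monte Carlo probe, since a generic denoiser has no closed-form derivative. A soft-thresholding function stands in for the image denoisers the paper actually uses; parameter choices are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft threshold, used here as a stand-in denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def damp(y, A, denoise, iters=40, eps=1e-3, seed=0):
    """Denoising-based AMP sketch: any denoiser denoise(r, sigma) can be
    plugged in. Its divergence, required by the Onsager correction, is
    estimated with a single random probe b:
        div D(r) ~ b . (D(r + eps*b) - D(r)) / eps."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)       # effective noise level estimate
        r = x + A.T @ z                              # pseudo-data
        x = denoise(r, sigma)
        b = rng.standard_normal(n)                   # Monte Carlo divergence probe
        div = b @ (denoise(r + eps * b, sigma) - x) / eps
        z = y - A @ x + z * (div / m)                # Onsager-corrected residual
    return x
```

With `denoise = lambda r, s: soft(r, 1.4 * s)` this reduces to plain AMP; the paper's point is that far stronger denoisers can be substituted unchanged.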
Empirical-Bayes Approaches to Recovery of Structured Sparse Signals via . . . , 2015
Abstract: In recent years, there have been massive increases in both the dimensionality and sample sizes of data, due to ever-increasing consumer demand coupled with relatively inexpensive sensing technologies. These high-dimensional datasets bring challenges, such as complexity, along with numerous opportunities. Though many signals of interest live in a high-dimensional ambient space, they often have a much smaller inherent dimensionality which, if leveraged, leads to improved recovery. For example, the notion of sparsity is a prerequisite in the compressive sensing (CS) field, which allows for accurate signal reconstruction from sub-Nyquist sampled measurements given certain conditions. When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s nonzero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution is a priori known, then one could use computationally efficient approximate message passing (AMP) techniques that yield approximate minimum-MSE (MMSE) estimates or critical points to the maxi…
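To make the "known prior ⇒ MMSE denoiser" point concrete, here is the scalar posterior-mean estimator for a Bernoulli-Gaussian prior, the kind of per-coefficient denoiser an AMP iteration applies when the coefficient distribution is known. The specific prior parameters are illustrative, not from the dissertation.

```python
import numpy as np

def mmse_bg(r, sigma2, lam=0.1, v=1.0):
    """Scalar MMSE denoiser for a Bernoulli-Gaussian prior
    x ~ (1 - lam) * delta_0 + lam * N(0, v), observed as r = x + N(0, sigma2).
    Returns the posterior mean E[x | r]: the posterior probability that the
    coefficient is active, times the Wiener-style shrinkage of r."""
    def gauss(u, var):
        return np.exp(-u ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    num = lam * gauss(r, v + sigma2)          # evidence under the active component
    den = num + (1 - lam) * gauss(r, sigma2)  # plus evidence under the zero spike
    pi = num / den                            # posterior activity probability
    return pi * r * v / (v + sigma2)          # posterior mean E[x | r]
```

Unlike a fixed soft threshold, this estimator adapts its shrinkage to the prior sparsity and variance, which is why matching the assumed prior to the true coefficient distribution improves recovery MSE.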