Results 1–10 of 40
Compressive Phase Retrieval via Generalized Approximate Message Passing
Cited by 30 (8 self)
Abstract—In this paper, we propose a novel approach to compressive phase retrieval based on loopy belief propagation and, in particular, on the generalized approximate message passing (GAMP) algorithm. Numerical results show that the proposed PR-GAMP algorithm has excellent phase-transition behavior, noise robustness, and runtime. In particular, for successful recovery of synthetic Bernoulli-circular-Gaussian signals, PR-GAMP requires ≈ 4 times the number of measurements as a phase-oracle version of GAMP and, at moderate to large SNR, the NMSE of PR-GAMP is only ≈ 3 dB worse than that of phase-oracle GAMP. A comparison to the recently proposed convex-relaxation approach known as “CPRL” reveals PR-GAMP's superior phase transition and orders-of-magnitude faster runtimes, especially as the problem dimensions increase. When applied to the recovery of a 65k-pixel grayscale image from 32k randomly masked magnitude measurements, numerical results show a median PR-GAMP runtime of only 13.4 seconds.
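For reference, the magnitude-only measurement model targeted by the Bernoulli-circular-Gaussian experiments in this abstract can be sketched as follows (an illustrative setup only, not the PR-GAMP algorithm itself; all dimensions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 1024, 8        # signal length, measurements, sparsity (arbitrary)

# Bernoulli-circular-Gaussian signal: k nonzero complex-Gaussian coefficients
x = np.zeros(n, dtype=complex)
support = rng.choice(n, size=k, replace=False)
x[support] = (rng.standard_normal(k) + 1j * rng.standard_normal(k)) / np.sqrt(2)

# i.i.d. complex-Gaussian sensing matrix; only magnitudes are observed
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
y = np.abs(A @ x)             # the phase of each measurement is lost
```

Because only |Ax| is observed, the recovery problem is nonconvex, which is what makes the phase-transition and runtime comparisons in the abstract nontrivial.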
Asymptotic analysis of complex LASSO via complex approximate message passing
IEEE Trans. Inf. Theory, 2011
Cited by 27 (9 self)
Recovering a sparse signal from an undersampled set of random linear measurements is the main problem of interest in compressed sensing. In this paper, we consider the case where both the signal and the measurements are complex-valued. We study the popular reconstruction method of ℓ1-regularized least squares, or LASSO. While several studies have shown that the LASSO algorithm offers desirable solutions under certain conditions, the precise asymptotic performance of this algorithm in the complex setting is not yet known. In this paper, we extend the approximate message passing (AMP) algorithm to complex-valued signals and measurements to obtain the complex approximate message passing algorithm (CAMP). We then generalize the state evolution framework, recently introduced for the analysis of AMP, to the complex setting. Using the state evolution, we derive accurate formulas for the phase transition and noise sensitivity of both LASSO and CAMP. Our results are proved theoretically for the case of i.i.d. Gaussian sensing matrices, but we confirm through simulations that they hold for a larger class of random matrices.
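The complex ℓ1 proximal step at the heart of a CAMP-style iteration shrinks each coefficient's magnitude while preserving its phase. A minimal sketch (my own illustrative helper, not code from the paper):

```python
import numpy as np

def complex_soft_threshold(r, theta):
    """Proximal operator of theta * ||.||_1 for complex inputs:
    shrink the magnitude by theta, keep the phase unchanged."""
    mag = np.abs(r)
    # np.where guards against division by zero for exactly-zero entries
    scale = np.maximum(mag - theta, 0.0) / np.where(mag > 0, mag, 1.0)
    return scale * r
```

For example, `complex_soft_threshold(np.array([3 + 4j]), 2.0)` shrinks the magnitude from 5 to 3 while keeping the phase, yielding `1.8 + 2.4j`.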
Approximate message passing with consistent parameter estimation and applications to sparse learning
arXiv:1207.3859 [cs.IT], 2012
Cited by 21 (4 self)
We consider the estimation of an i.i.d. vector x ∈ R^n from measurements y ∈ R^m obtained by a general cascade model consisting of a known linear transform followed by a probabilistic, componentwise (possibly nonlinear) measurement channel. We present a method, called adaptive generalized approximate message passing (Adaptive GAMP), that enables joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector x. Our method can be applied to a large class of learning problems, including the learning of sparse priors in compressed sensing or the identification of linear-nonlinear cascade models in dynamical systems and neural spiking processes. We prove that, for large i.i.d. Gaussian transform matrices, the asymptotic componentwise behavior of the adaptive GAMP algorithm is predicted by a simple set of scalar state evolution equations. This analysis shows that the adaptive GAMP method can yield asymptotically consistent parameter estimates, which implies that the algorithm achieves a reconstruction quality equivalent to that of an oracle algorithm that knows the correct parameter values. The adaptive GAMP methodology thus provides a systematic, general, and computationally efficient method, applicable to a large range of complex linear-nonlinear models, with provable guarantees.
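The measurement cascade described above (a known linear transform followed by a componentwise, possibly nonlinear, channel) can be illustrated with a toy generative sketch; the tanh channel, noise level, and dimensions below are my own arbitrary choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, sparsity = 400, 200, 0.1

x = rng.standard_normal(n) * (rng.random(n) < sparsity)  # i.i.d. sparse vector
A = rng.standard_normal((m, n)) / np.sqrt(m)             # known linear transform
z = A @ x                                                # hidden linear output
y = np.tanh(z) + 0.01 * rng.standard_normal(m)           # componentwise nonlinear channel
```

Adaptive GAMP's task, in this picture, would be to recover x from (y, A) while simultaneously learning the parameters of the sparse prior and of the nonlinear channel.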
Compressed sensing for energy-efficient wireless telemonitoring of non-invasive fetal ECG via block sparse Bayesian learning
IEEE Transactions on Biomedical Engineering, 2013
Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing
2013
Cited by 10 (2 self)
In this work, the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While a handful of Bayesian dynamic CS algorithms have previously been proposed in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation-maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety of operating conditions. We further describe the results of applying DCS-AMP to two real dynamic CS datasets, as well as a frequency estimation task, to bolster our claim that DCS-AMP is capable of offering state-of-the-art performance and speed on real-world high-dimensional problems.
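One common way to encode the kind of amplitude and support correlation this abstract refers to is a Markov-chain support process paired with Gauss-Markov (AR-1) amplitudes. The sketch below is a generic model of that flavor; the transition probabilities and dimensions are illustrative values of my own, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 100, 20                  # signal length, number of timesteps
p_on, p_off, alpha = 0.05, 0.05, 0.1

s = rng.random(n) < 0.1         # initial support (boolean)
theta = rng.standard_normal(n)  # initial amplitudes
X = np.zeros((n, T))
for t in range(T):
    # Support evolves as a Markov chain: coefficients switch on/off slowly
    flip_on = (~s) & (rng.random(n) < p_on)
    flip_off = s & (rng.random(n) < p_off)
    s = (s | flip_on) & ~flip_off
    # Amplitudes follow a Gauss-Markov process, so they vary smoothly in time
    theta = (1 - alpha) * theta + alpha * rng.standard_normal(n)
    X[:, t] = s * theta         # observed sparse signal at time t
```

Small p_on/p_off values make the support persistent across frames, which is precisely the correlation structure a dynamic CS algorithm can exploit over independent per-frame recovery.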
An empirical-Bayes approach to recovering linearly constrained nonnegative sparse signals
2013
Cited by 9 (5 self)
Abstract—We consider the recovery of an (approximately) sparse signal from noisy linear measurements, in the case that the signal is a priori known to be nonnegative and to obey certain linear equality constraints. For this, we propose a novel empirical-Bayes approach that combines the generalized approximate message passing (GAMP) algorithm with the expectation-maximization (EM) algorithm. To enforce both sparsity and nonnegativity, we employ an i.i.d. Bernoulli non-negative Gaussian mixture (NNGM) prior and perform approximate minimum mean-squared error (MMSE) recovery of the signal using sum-product GAMP. To learn the NNGM parameters, we use the EM algorithm with a suitable initialization. Meanwhile, the linear equality constraints are enforced by augmenting GAMP's linear observation model with noiseless pseudo-measurements. Numerical experiments demonstrate the state-of-the-art mean-squared error and runtime of our approach.
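The pseudo-measurement trick mentioned above is simple to state: a noiseless linear equality constraint, say sum(x) = 1, is appended to the observation model as an extra measurement row. A minimal sketch (the matrix, data, and particular constraint are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 50, 100
A = rng.standard_normal((m, n))   # original linear observation model
y = rng.standard_normal(m)        # original noisy measurements

# Append the constraint 1^T x = 1 as a noiseless pseudo-measurement:
# one extra row of ones in A, one extra entry equal to 1 in y.
A_aug = np.vstack([A, np.ones((1, n))])
y_aug = np.append(y, 1.0)
```

Running the recovery algorithm on (A_aug, y_aug), with the pseudo-measurement modeled as noiseless, then drives the estimate toward satisfying the equality constraint.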
Bilinear Generalized Approximate Message Passing
2013
Cited by 8 (2 self)
Abstract—We extend the generalized approximate message passing (GAMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. In the first part of the paper, we derive our bilinear GAMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit-theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In the second part of the paper, we discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters.
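One instance of the generalized-bilinear model named in the abstract is matrix completion: Z = A X is low-rank, and only a random subset of its entries is observed. A toy problem instance (shapes, rank, and sampling rate are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 60, 80, 3                # matrix shape and (low) rank

A = rng.standard_normal((m, r))    # unknown left factor
X = rng.standard_normal((r, n))    # unknown right factor
Z = A @ X                          # rank-r matrix to be recovered

mask = rng.random((m, n)) < 0.3    # observe roughly 30% of the entries
Y = np.where(mask, Z, np.nan)      # unobserved entries marked as NaN
```

A bilinear-factorization algorithm is then asked to estimate both factors (and hence the missing entries) from Y and the observation mask alone.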
Hyperspectral image unmixing via bilinear generalized approximate message passing
Proc. SPIE, 2013
Cited by 5 (3 self)
In hyperspectral unmixing, the objective is to decompose an electromagnetic spectral dataset, measured over M spectral bands and T pixels, into N constituent material spectra (or “endmembers”) with corresponding spatial abundances. In this paper, we propose a novel approach to hyperspectral unmixing (i.e., joint estimation of endmembers and abundances) based on loopy belief propagation. In particular, we employ the bilinear generalized approximate message passing algorithm (BiG-AMP), a recently proposed belief-propagation-based approach to matrix factorization, in a “turbo” framework that enables the exploitation of spectral coherence in the endmembers, as well as spatial coherence in the abundances. In conjunction, we propose an expectation-maximization (EM) technique that can be used to automatically tune the prior statistics assumed by turbo BiG-AMP. Numerical experiments on synthetic and real-world data confirm the state-of-the-art performance of our approach.
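In matrix form, the unmixing model in this abstract factors the M x T data into M x N endmember spectra times N x T abundances, with each pixel's abundances nonnegative and summing to one. A synthetic sketch (dimensions, noise level, and the way spectra are drawn are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
M, T, N = 50, 200, 4                     # spectral bands, pixels, endmembers

E = np.abs(rng.standard_normal((M, N)))  # endmember spectra (nonnegative)
S = rng.random((N, T))
S /= S.sum(axis=0)                       # abundances: nonnegative, sum to 1 per pixel
Y = E @ S + 0.01 * rng.standard_normal((M, T))   # noisy hyperspectral data cube
```

Joint estimation of E and S from Y is the bilinear inference problem that turbo BiG-AMP addresses, with the simplex constraint on each column of S built into the abundance prior.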
Sparse estimation with the swept approximated message-passing algorithm
arXiv preprint arXiv:1406.4311
2014
Cited by 5 (0 self)
Approximate message passing (AMP) has been shown to be a superior method for inference problems, such as the recovery of signals from sets of noisy, lower-dimensionality measurements, both in terms of reconstruction accuracy and computational efficiency. However, AMP suffers from serious convergence issues in contexts that do not exactly match its assumptions. We propose a new approach to stabilizing AMP in these contexts by applying AMP updates to individual coefficients rather than in parallel. Our results show that this change to the AMP iteration can provide theoretically expected, but hitherto unobtainable, performance for problems on which the standard AMP iteration diverges. Additionally, we find that the computational cost of this swept coefficient update scheme is not unduly burdensome, allowing it to be applied efficiently to signals of large dimensionality.
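For context, the standard AMP iteration being stabilized here updates all coefficients in parallel on each pass; the swept variant instead visits them one at a time. Below is a minimal parallel-update AMP with a soft-threshold denoiser (my own sketch under simplifying assumptions: real-valued data and a fixed threshold, rather than the tuned thresholds used in practice):

```python
import numpy as np

def soft_threshold(r, theta):
    return np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)

def amp(y, A, theta=0.1, n_iter=30):
    """Parallel-update AMP. The swept variant of the paper would replace
    the vector update of x with a coefficient-by-coefficient sweep."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ z, theta)
        # Onsager correction: previous residual reweighted by the
        # fraction of coefficients the denoiser left active
        z = y - A @ x + z * (np.count_nonzero(x) / m)
    return x
```

The Onsager term is what distinguishes AMP from plain iterative soft thresholding, and it is also the part whose cancellation properties break down when the sensing matrix deviates from the i.i.d. assumption, motivating the swept update order.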
Message passing approaches to compressive inference under structured signal priors
2013
Cited by 3 (3 self)
Across numerous disciplines, the ability to generate high-dimensional datasets is driving an enormous demand for increasingly efficient ways of both capturing and processing this data. A promising recent trend for addressing these needs has developed from the recognition that, despite living in high-dimensional ambient spaces, many datasets have vastly smaller intrinsic dimensionality. When capturing (sampling) such datasets, exploiting this realization permits one to dramatically reduce the number of samples that must be acquired without losing the salient features of the data. When processing such datasets, the reduced intrinsic dimensionality can be leveraged to allow reliable inferences to be made in scenarios where it is infeasible to collect the amount of data that would be required for inference using classical techniques. To date, most approaches for taking advantage of the low intrinsic dimensionality inherent in many datasets have focused on identifying succinct (i.e., sparse) representations of the data, seeking to represent the data using only a handful of “significant” elements from an appropriately chosen dictionary. While powerful in ...