Results 1–10 of 41
Generalized Approximate Message Passing for Estimation with Random Linear Mixing
, 2012
Abstract

Cited by 123 (18 self)
We consider the estimation of an i.i.d. random vector observed through a linear transform followed by a componentwise, probabilistic (possibly nonlinear) measurement channel. A novel algorithm, called generalized approximate message passing (GAMP), is presented that provides computationally efficient approximate implementations of max-sum and sum-product loopy belief propagation for such problems. The algorithm extends earlier approximate message passing methods to incorporate arbitrary distributions on both the input and output of the transform and can be applied to a wide range of problems in nonlinear compressed sensing and learning. Extending an analysis by Bayati and Montanari, we argue that the asymptotic componentwise behavior of the GAMP method under large, i.i.d. Gaussian transforms is described by a simple set of state evolution (SE) equations. From the SE equations, one can exactly predict the asymptotic value of virtually any componentwise performance metric, including mean-squared error or detection accuracy. Moreover, the analysis is valid for arbitrary input and output distributions, even when the corresponding optimization problems are non-convex. The results match predictions by Guo and Wang for relaxed belief propagation on large sparse matrices and, in certain instances, also agree with the optimal performance predicted by the replica method. GAMP thus provides a computationally efficient method, applicable to a large class of non-Gaussian estimation problems, with precise asymptotic performance guarantees.
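As a concrete illustration of the message passing iterations analyzed in this line of work, here is a minimal sketch of the basic AMP recursion with a soft-thresholding denoiser (the simplest special case of GAMP). The variable names, the threshold policy τ = α·RMS(z), and the toy problem are our own illustrative choices, not from the paper:

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise soft-thresholding denoiser eta(x; t)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, n_iter=30, alpha=1.5):
    """Basic AMP for y = A x0 + w with a soft-thresholding denoiser.

    The Onsager correction added to the residual is what distinguishes
    AMP from plain iterative soft thresholding, and is what makes the
    state evolution analysis exact in the large-system limit.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        tau = alpha * np.sqrt(np.mean(z ** 2))        # threshold from residual energy
        x = soft_threshold(x + A.T @ z, tau)          # denoise the pseudo-data
        z = y - A @ x + z * np.count_nonzero(x) / m   # residual + Onsager term
    return x

# Toy problem: k-sparse x0, i.i.d. Gaussian A (noiseless for simplicity).
rng = np.random.default_rng(0)
n, m, k = 500, 250, 25
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = amp(A, A @ x0)
```

At these problem sizes the recursion typically converges within a few tens of iterations; the threshold policy here is a common heuristic rather than the paper's tuning.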
Information-Theoretically Optimal Compressed Sensing via Spatial Coupling and Approximate Message Passing
, 2011
Abstract

Cited by 51 (5 self)
We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala et al. [KMS+11], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Rényi information dimension of the signal, d(p_X). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to p_X, reconstruction is with high probability successful from d(p_X)·n + o(n) measurements taken according to a band-diagonal matrix. For sparse signals, i.e. sequences of dimension n with k(n) nonzero entries, this implies reconstruction from k(n) + o(n) measurements. For ‘discrete’ signals, i.e. signals whose coordinates take values in a fixed finite set, this implies reconstruction from o(n) measurements. …
Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
, 2012
Abstract

Cited by 41 (5 self)
Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft-thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization – the firm shrinkage nonlinearity and the minimax nonlinearity – and also non-scalar denoisers – block thresholding, monotone regression, and total variation minimization. Let the variables ε = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y = Ax0. Here A is an n × N measurement matrix whose entries are i.i.d. standard Gaussian. The formula states that the phase transition curve δ = δ(ε) separating successful from unsuccessful reconstruction of x0 …
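The scalar denoisers the abstract refers to can be made concrete. A minimal sketch of soft thresholding alongside the firm shrinkage nonlinearity (parameter names t, t1, t2 are our own):

```python
import numpy as np

def soft(x, t):
    """Soft thresholding: the proximal operator of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def firm(x, t1, t2):
    """Firm shrinkage (Gao & Bruce): zero below t1, identity above t2,
    linear interpolation in between. Unlike soft thresholding it is
    unbiased for large inputs, and it is not the proximal operator of
    any convex penalty. Requires 0 < t1 < t2."""
    a = np.abs(x)
    keep = np.sign(x) * t2 * (a - t1) / (t2 - t1)  # transition region
    return np.where(a <= t1, 0.0, np.where(a >= t2, x, keep))
```

Note that firm shrinkage passes large coefficients through unchanged, while soft thresholding shifts every surviving coefficient toward zero by t; this bias difference is one reason non-convex denoisers can shift the phase transition.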
Graphical Models Concepts in Compressed Sensing
Abstract

Cited by 37 (2 self)
This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1-penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on ‘Compressed Sensing’ edited by Yonina Eldar and Gitta Kutyniok.
Asymptotic analysis of complex LASSO via complex approximate message passing
 IEEE Trans. Inf. Theory
, 2011
Abstract

Cited by 27 (9 self)
Recovering a sparse signal from an undersampled set of random linear measurements is the main problem of interest in compressed sensing. In this paper, we consider the case where both the signal and the measurements are complex-valued. We study the popular reconstruction method of ℓ1-regularized least-squares, or LASSO. While several studies have shown that the LASSO algorithm offers desirable solutions under certain conditions, the precise asymptotic performance of this algorithm in the complex setting is not yet known. In this paper, we extend the approximate message passing (AMP) algorithm to complex-valued signals and measurements to obtain the complex approximate message passing algorithm (CAMP). We then generalize the state evolution framework, recently introduced for the analysis of AMP, to the complex setting. Using the state evolution, we derive accurate formulas for the phase transition and noise sensitivity of both LASSO and CAMP. Our results are proved theoretically for the case of i.i.d. Gaussian sensing matrices, but we confirm through simulations that they hold for a larger class of random matrices.
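The natural complex-valued analogue of the soft-thresholding denoiser shrinks the magnitude of each coefficient while preserving its phase. A minimal sketch (our own naming, not the paper's code):

```python
import numpy as np

def complex_soft_threshold(x, t):
    """Complex soft thresholding: shrink the magnitude by t, keep the phase.
    This is the proximal operator of t * sum_i |x_i| for complex x, and is
    a natural denoiser choice inside a complex AMP iteration."""
    mag = np.abs(x)
    # Guard against division by zero at exactly-zero entries.
    scale = np.maximum(1.0 - t / np.maximum(mag, np.finfo(float).tiny), 0.0)
    return x * scale
```

For real inputs this reduces to ordinary soft thresholding, since the "phase" is just the sign.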
State Evolution for General Approximate Message Passing Algorithms, with Applications to Spatial Coupling
, 2012
Abstract

Cited by 23 (7 self)
We consider a class of approximate message passing (AMP) algorithms and characterize their high-dimensional behavior in terms of a suitable state evolution recursion. Our proof applies to Gaussian matrices with independent but not necessarily identically distributed entries. It covers – in particular – the analysis of generalized AMP, introduced by Rangan, and of AMP reconstruction in compressed sensing with spatially coupled sensing matrices. The proof technique builds on that of [BM11], while simplifying and generalizing several steps.
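The state evolution recursion these papers analyze tracks a single scalar, the effective noise variance, across iterations. A Monte Carlo sketch of that recursion, assuming (our choices, for illustration) a Bernoulli-Gaussian signal prior and a soft-thresholding denoiser:

```python
import numpy as np

def state_evolution(delta, eps, alpha, sigma2=0.0, n_iter=20, n_mc=200_000, seed=0):
    """Monte Carlo sketch of the scalar state evolution recursion
        tau_{t+1}^2 = sigma^2 + (1/delta) * E[(eta(X + tau_t Z; alpha tau_t) - X)^2],
    with X drawn from a Bernoulli-Gaussian prior (sparsity eps), Z ~ N(0, 1),
    and eta the soft-thresholding denoiser. Returns the final tau^2."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_mc) * (rng.random(n_mc) < eps)  # eps-sparse Gaussian signal
    z = rng.normal(size=n_mc)
    tau2 = sigma2 + np.mean(x ** 2) / delta               # effective noise at t = 0
    for _ in range(n_iter):
        tau = np.sqrt(tau2)
        r = x + tau * z                                   # pseudo-data: signal + noise
        eta = np.sign(r) * np.maximum(np.abs(r) - alpha * tau, 0.0)
        tau2 = sigma2 + np.mean((eta - x) ** 2) / delta   # next effective noise
    return tau2

# Noiseless and well inside the success region: tau^2 collapses toward 0.
tau2_ok = state_evolution(delta=0.5, eps=0.05, alpha=1.5)
# Severely undersampled: the recursion stalls at a nonzero fixed point.
tau2_bad = state_evolution(delta=0.1, eps=0.3, alpha=1.5)
```

Whether tau^2 converges to (near) zero or to a positive fixed point is exactly how state evolution predicts the success/failure phase transition.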
Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing
, 2013
Abstract

Cited by 10 (2 self)
In this work the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While a handful of Bayesian dynamic CS algorithms have been proposed in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation-maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety of operating conditions. We further describe the results of applying DCS-AMP to two real dynamic CS datasets, as well as to a frequency estimation task, to bolster our claim that DCS-AMP offers state-of-the-art performance and speed on real-world high-dimensional problems.
Compressive imaging via approximate message passing with image denoising, arXiv:1405.4429
, 2014
Abstract

Cited by 10 (5 self)
We consider compressive imaging problems, where images are reconstructed from a reduced number of linear measurements. Our objective is to improve over existing compressive imaging algorithms in terms of both reconstruction error and runtime. To pursue our objective, we propose compressive imaging algorithms that employ the approximate message passing (AMP) algorithm. AMP is an iterative signal reconstruction algorithm that performs scalar denoising at each iteration; in order for AMP to reconstruct the original input signal well, a good scalar denoiser must be used. We apply two wavelet-based image denoisers within AMP. The first denoiser is the “amplitude-scale-invariant Bayes estimator” (ABE), and the second is an adaptive Wiener filter; we call our AMP-based algorithms for compressive imaging AMP-ABE and AMP-Wiener. Numerical results show that both AMP-ABE and AMP-Wiener significantly improve over the state of the art in terms of runtime. In terms of reconstruction quality, AMP-Wiener offers lower mean squared error (MSE) than existing compressive imaging algorithms, whereas AMP-ABE has higher MSE, because ABE does not denoise as well as the adaptive Wiener filter.
Regularized Modified BPDN for Noisy Sparse Reconstruction with Partial Erroneous Support and Signal Value Knowledge
Abstract

Cited by 9 (5 self)
We study the problem of sparse reconstruction from noisy undersampled measurements when the following two things are available. (1) We are given partial, and partly erroneous, knowledge of the signal’s support, denoted by T. (2) We are also given an erroneous estimate of the signal values on T, denoted by µ̂_T. In practice, both of these may come from prior knowledge. Alternatively, in recursive reconstruction applications, such as real-time dynamic MRI, one can use the support estimate and the signal value estimate from the previous time instant as T and µ̂_T. In this work, we introduce regularized modified-BPDN (reg-mod-BPDN) to solve this problem and obtain computable bounds on its reconstruction error. Reg-mod-BPDN tries to find the signal that is sparsest outside the set T, while being “close enough” to µ̂_T on T and while satisfying the data constraint. Corresponding results for modified-BPDN and BPDN follow as direct corollaries. A second key contribution is an approach to obtain computable error bounds that hold without any sufficient conditions. This makes it easy to compare the bounds for the various approaches. Empirical reconstruction error comparisons with many existing approaches are also provided. Index Terms: compressive sensing, sparse reconstruction, modified-CS, partially known support
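A plausible formalization of the reg-mod-BPDN objective sketched above, in our own notation (the paper's exact weighting and constraint form may differ):

```latex
\min_{x} \;\; \|x_{T^c}\|_1 \;+\; \lambda \,\|x_T - \hat{\mu}_T\|_2^2
\quad \text{subject to} \quad \|y - A x\|_2 \le \epsilon
```

Setting λ = 0 drops the pull toward µ̂_T and leaves a modified-BPDN-style problem, and additionally taking T = ∅ leaves standard BPDN, consistent with the corollaries mentioned in the abstract.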