Results 1–10 of 123
Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing
, 2009
Abstract

Cited by 77 (9 self)
The replica method is a nonrigorous but widely accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional vector “decouples” as n scalar MAP estimators. The result is a counterpart to Guo and Verdú’s replica analysis of minimum mean-squared error estimation. The replica MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero-norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero-norm-regularized estimation it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for exactly computing various performance metrics, including mean-squared error and sparsity-pattern recovery probability.
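The two scalar estimators named in this abstract are the standard thresholding rules; a minimal sketch, assuming the usual definitions (the threshold `tau` here is a free parameter, not the paper's derived value):

```python
import numpy as np

def soft_threshold(x, tau):
    # Equivalent scalar estimator for lasso: shrink magnitudes by tau,
    # mapping everything in [-tau, tau] to zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def hard_threshold(x, tau):
    # Equivalent scalar estimator for zero-norm-regularized estimation:
    # keep entries whose magnitude exceeds tau, zero out the rest.
    return np.where(np.abs(x) > tau, x, 0.0)
```

For example, `soft_threshold(3.0, 1.0)` gives `2.0`, while `hard_threshold(3.0, 1.0)` leaves the value at `3.0`.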
InformationTheoretically Optimal Compressed Sensing via Spatial Coupling and Approximate Message Passing
, 2011
Abstract

Cited by 51 (5 self)
We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala et al. [KMS+11], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Rényi information dimension of the signal, d(pX). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to pX, reconstruction is with high probability successful from d(pX) n + o(n) measurements taken according to a band-diagonal matrix. For sparse signals, i.e. sequences of dimension n with k(n) nonzero entries, this implies reconstruction from k(n) + o(n) measurements. For ‘discrete’ signals, i.e. signals whose coordinates take values in a fixed finite set, this implies reconstruction from o(n) measurements.
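The band-diagonal ensemble can be pictured as a block matrix whose Gaussian blocks vanish away from the diagonal. A minimal sketch; the block sizes, uniform block variance, and coupling bandwidth below are illustrative assumptions rather than the paper's exact seeded construction:

```python
import numpy as np

def spatially_coupled_matrix(n_blocks, rows_per_block, cols_per_block, bandwidth, rng=None):
    """Gaussian band-diagonal sensing matrix: block (i, j) is nonzero only
    when |i - j| <= bandwidth, mimicking spatial coupling.
    (Illustrative; the paper's ensemble also varies block variances.)"""
    rng = np.random.default_rng() if rng is None else rng
    n, N = n_blocks * rows_per_block, n_blocks * cols_per_block
    A = np.zeros((n, N))
    for i in range(n_blocks):
        for j in range(n_blocks):
            if abs(i - j) <= bandwidth:
                # Gaussian block with variance 1/rows_per_block per entry
                A[i*rows_per_block:(i+1)*rows_per_block,
                  j*cols_per_block:(j+1)*cols_per_block] = rng.normal(
                      scale=1.0 / np.sqrt(rows_per_block),
                      size=(rows_per_block, cols_per_block))
    return A
```

The resulting matrix is dense near the diagonal and exactly zero elsewhere, which is what lets the reconstruction "seed" propagate along the blocks.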
Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
, 2012
Abstract

Cited by 41 (5 self)
Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft-thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization – the firm shrinkage nonlinearity and the minimax nonlinearity – and also non-scalar denoisers – block thresholding, monotone regression, and total variation minimization. Let the variables ε = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y = Ax0. Here A is an n × N measurement matrix whose entries are i.i.d. standard Gaussian. The formula states that the phase transition curve δ = δ(ε) separating successful from unsuccessful reconstruction of x0 …
Expectation-Maximization Gaussian-Mixture Approximate Message Passing
Abstract

Cited by 40 (12 self)
Abstract—When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s nonzero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were a priori known, one could use efficient approximate message passing (AMP) techniques for nearly minimum-MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like Lasso—which is nearly minimax optimal—at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal—according to the learned distribution—using AMP. In particular, we model the nonzero distribution as a Gaussian mixture and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments confirm the state-of-the-art performance of our approach on a range of signal classes.
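For orientation, plain EM for a one-dimensional two-component Gaussian mixture looks as follows. This is only a simplified stand-in for the learning step described above: the paper's E-step runs AMP on compressive measurements, whereas here exact posteriors on direct observations are assumed:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture (weight, means, variances)
    by expectation-maximization. Simplified stand-in: E-step uses exact
    posteriors on direct samples, not AMP on compressive measurements."""
    w = 0.5                                        # weight of component 1
    mu = np.array([x.min(), x.max()])              # crude initialization
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each sample
        p0 = (1 - w) * np.exp(-(x - mu[0])**2 / (2 * var[0])) / np.sqrt(var[0])
        p1 = w * np.exp(-(x - mu[1])**2 / (2 * var[1])) / np.sqrt(var[1])
        r = p1 / (p0 + p1)
        # M-step: reestimate weight, means, variances from responsibilities
        w = r.mean()
        mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                       np.sum(r * x) / np.sum(r)])
        var = np.array([np.sum((1 - r) * (x - mu[0])**2) / np.sum(1 - r),
                        np.sum(r * (x - mu[1])**2) / np.sum(r)]) + 1e-12
    return w, mu, var
```

On well-separated data the estimates converge to the true component parameters in a few dozen iterations.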
Graphical Models Concepts in Compressed Sensing
Abstract

Cited by 37 (2 self)
This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1-penalized least squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on ‘Compressed Sensing’ edited by Yonina Eldar and Gitta Kutyniok.
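The fast message passing iteration discussed in this line of work has the well-known AMP form: a scalar soft-thresholding step plus an Onsager correction to the residual. A minimal sketch, assuming a Gaussian matrix with roughly unit-norm columns and a simple residual-based threshold policy (the threshold rule is an assumption, not the survey's tuned choice):

```python
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def amp_lasso(y, A, tau, n_iter=50):
    """AMP for l1-penalized least squares. The Onsager term
    z * (#nonzeros / n) is what distinguishes AMP from plain
    iterative soft thresholding."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(n)   # residual-based noise estimate
        pseudo = x + A.T @ z                      # effective scalar channel
        x_new = soft(pseudo, tau * sigma)
        onsager = z * (np.count_nonzero(x_new) / n)   # (1/delta) * mean eta'
        z = y - A @ x_new + onsager
        x = x_new
    return x
```

On a well-conditioned random instance inside the phase-transition region, the iterates converge to an accurate reconstruction in a few dozen steps.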
Expectation-maximization Bernoulli-Gaussian approximate message passing
 in Proc. Asilomar Conf. Signals Syst. Comput
, 2011
Abstract

Cited by 31 (2 self)
Abstract—The approximate message passing (AMP) algorithm originally proposed by Donoho, Maleki, and Montanari yields a computationally attractive solution to the usual ℓ1-regularized least-squares problem faced in compressed sensing, whose solution is known to be robust to the signal distribution. When the signal is drawn i.i.d. from a marginal distribution that is not least-favorable, better performance can be attained using a Bayesian variation of AMP. The latter, however, assumes that the distribution is perfectly known. In this paper, we navigate the space between these two extremes by modeling the signal as i.i.d. Bernoulli-Gaussian (BG) with unknown prior sparsity, mean, and variance, and the noise as zero-mean Gaussian with unknown variance, and we simultaneously reconstruct the signal while learning the prior signal and noise parameters. To accomplish this task, we embed the BG-AMP algorithm within an expectation-maximization (EM) framework. Numerical experiments confirm the excellent performance of our proposed EM-BG-AMP on a range of signal types.
Compressive Phase Retrieval via Generalized Approximate Message Passing
Abstract

Cited by 30 (8 self)
Abstract—In this paper, we propose a novel approach to compressive phase retrieval based on loopy belief propagation and, in particular, on the generalized approximate message passing (GAMP) algorithm. Numerical results show that the proposed PR-GAMP algorithm has excellent phase-transition behavior, noise robustness, and runtime. In particular, for successful recovery of synthetic Bernoulli-circular-Gaussian signals, PR-GAMP requires ≈ 4 times the number of measurements as a phase-oracle version of GAMP and, at moderate to large SNR, the NMSE of PR-GAMP is only ≈ 3 dB worse than that of phase-oracle GAMP. A comparison to the recently proposed convex-relaxation approach known as “CPRL” reveals PR-GAMP’s superior phase transition and orders-of-magnitude faster runtimes, especially as the problem dimensions increase. When applied to the recovery of a 65k-pixel grayscale image from 32k randomly masked magnitude measurements, numerical results show a median PR-GAMP runtime of only 13.4 seconds.
Universality in Polytope Phase Transitions and Message Passing Algorithms
, 2012
Abstract

Cited by 24 (4 self)
We consider a class of nonlinear mappings F_{A,N} in R^N indexed by symmetric random matrices A ∈ R^{N×N} with independent entries. Within spin glass theory, special cases of these mappings correspond to iterating the TAP equations and were studied by Erwin Bolthausen. Within information theory, they are known as ‘approximate message passing’ algorithms. We study the high-dimensional (large N) behavior of the iterates of F for polynomial functions F, and prove that it is universal, i.e. it depends only on the first two moments of the entries of A, under a sub-Gaussian tail condition. As an application, we prove the universality of a certain phase transition arising in polytope geometry and compressed sensing. This solves – for a broad class of random projections – a conjecture by David Donoho and Jared Tanner. 1 Introduction and main results. Let A ∈ R^{N×N} be a random Wigner matrix, i.e. a random matrix with i.i.d. entries Aij satisfying E{Aij} = 0 and E{A²ij} = 1/N. Considerable effort has been devoted to studying the distribution of the eigenvalues of such a matrix [AGZ09, BS05, TV12]. The universality phenomenon is a striking recurring theme in these studies. Roughly speaking, many asymptotic properties of the joint eigenvalues …
Phase Retrieval via Wirtinger Flow: Theory and Algorithms
, 2014
Abstract

Cited by 24 (4 self)
We study the problem of recovering the phase from magnitude measurements; specifically, we wish to reconstruct a complex-valued signal x ∈ C^n about which we have phaseless samples of the form y_r = |⟨a_r, x⟩|², r = 1, ..., m (knowledge of the phase of these samples would yield a linear system). This paper develops a nonconvex formulation of the phase retrieval problem as well as a concrete solution algorithm. In a nutshell, this algorithm starts with a careful initialization obtained by means of a spectral method, and then refines this initial estimate by iteratively applying novel update rules, which have low computational complexity, much like in a gradient descent scheme. The main contribution is that this algorithm is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements. Indeed, the sequence of successive iterates provably converges to the solution at a geometric rate, so that the proposed scheme is efficient both in terms of computational and data resources. In theory, a variation on this scheme leads to a near-linear time algorithm for a physically realizable model based on coded diffraction patterns. We illustrate the effectiveness of our methods with various experiments on image data. Underlying our analysis are insights for the analysis of nonconvex optimization schemes that may have implications for computational problems beyond phase retrieval.
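The two-stage recipe in the abstract (spectral initialization, then low-complexity gradient-style updates) can be sketched for a real-valued signal; the fixed step-size policy and scaling below are simplifying assumptions, not the paper's ramped step size or truncation rules:

```python
import numpy as np

def wirtinger_flow(y, A, n_iter=500, mu=0.1):
    """Wirtinger-flow sketch for real measurements y_r = (a_r^T x)^2:
    spectral initialization, then gradient steps on the quartic loss
    f(z) = (1/4m) * sum_r ((a_r^T z)^2 - y_r)^2."""
    m, n = A.shape
    # Spectral initialization: top eigenvector of (1/m) sum_r y_r a_r a_r^T,
    # scaled to the norm estimated from mean(y).
    Y = (A.T * y) @ A / m
    _, V = np.linalg.eigh(Y)                  # eigenvalues ascending
    z = V[:, -1] * np.sqrt(y.mean())
    for _ in range(n_iter):
        Az = A @ z
        grad = A.T @ ((Az**2 - y) * Az) / m   # gradient of the quartic loss
        z = z - (mu / np.linalg.norm(z)**2) * grad
    return z
```

Since the measurements are phaseless, x is only identifiable up to a global sign, so recovery quality is measured against both x and -x.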
State Evolution for General Approximate Message Passing Algorithms, with Applications to Spatial Coupling
, 2012
Abstract

Cited by 23 (7 self)
We consider a class of approximate message passing (AMP) algorithms and characterize their high-dimensional behavior in terms of a suitable state evolution recursion. Our proof applies to Gaussian matrices with independent but not necessarily identically distributed entries. It covers – in particular – the analysis of generalized AMP, introduced by Rangan, and of AMP reconstruction in compressed sensing with spatially coupled sensing matrices. The proof technique builds on the one of [BM11], while simplifying and generalizing several steps.
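A state evolution recursion of the kind analyzed here can be evaluated numerically. The sketch below tracks the soft-threshold/Bernoulli-Gaussian special case by Monte Carlo; the prior, threshold rule, and sample sizes are illustrative assumptions, not the general recursion of the paper:

```python
import numpy as np

def state_evolution(eps, delta, sigma2, theta, n_iter=20, n_mc=100_000, seed=0):
    """Scalar state evolution for soft-threshold AMP:
        tau_{t+1}^2 = sigma^2 + (1/delta) * E[(eta(X + tau_t Z; theta*tau_t) - X)^2]
    with X ~ Bernoulli(eps) x N(0,1), Z ~ N(0,1), expectation by Monte Carlo.
    Returns the effective-noise prediction tau^2 after n_iter iterations."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=n_mc) * (rng.random(n_mc) < eps)   # Bernoulli-Gaussian signal
    Z = rng.normal(size=n_mc)
    tau2 = sigma2 + X.var() / delta                        # initial effective noise
    for _ in range(n_iter):
        pseudo = X + np.sqrt(tau2) * Z                     # effective scalar channel
        thr = theta * np.sqrt(tau2)
        est = np.sign(pseudo) * np.maximum(np.abs(pseudo) - thr, 0.0)
        tau2 = sigma2 + np.mean((est - X)**2) / delta      # next effective noise
    return tau2
```

Inside the successful phase (e.g. sparsity 0.05 at undersampling 0.5, noiseless) the recursion collapses to zero, predicting exact recovery; with measurement noise it settles at a strictly positive fixed point.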