Results 1–10 of 56
Message passing algorithms for compressed sensing: I. motivation and construction
Proc. ITW, 2010
Cited by 163 (19 self)
In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements [1]. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the second of two conference papers describing the derivation of these algorithms, their connection with related literature, extensions of the original framework, and new empirical evidence. This paper describes the state evolution formalism for analyzing these algorithms and some of the conclusions that can be drawn from it. We carried out extensive numerical simulations to confirm these predictions, and we present a few representative results here.
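The AMP iteration described in this abstract is compact enough to sketch in a few lines. Below is a minimal illustration (not the authors' reference implementation): AMP with a soft-thresholding denoiser and the Onsager correction term on a noiseless sparse recovery problem. The problem sizes, the threshold rule τ = ‖z‖/√n, and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def soft(x, t):
    # soft-thresholding denoiser eta(x; t)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
N, n, k = 500, 250, 25                          # signal length, measurements, sparsity
A = rng.standard_normal((n, N)) / np.sqrt(n)    # i.i.d. Gaussian matrix, unit-norm columns in expectation
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0                                      # noiseless measurements

x, z = np.zeros(N), y.copy()
delta = n / N
for t in range(50):
    tau = np.linalg.norm(z) / np.sqrt(n)        # effective noise level estimated from the residual
    x_new = soft(x + A.T @ z, tau)
    # the Onsager correction distinguishes AMP from plain iterative thresholding:
    # (1/delta) * previous residual * average derivative of the denoiser
    onsager = (z / delta) * np.mean(np.abs(x_new) > 0)
    z = y - A @ x_new + onsager
    x = x_new

# at (delta, rho) = (0.5, 0.1) recovery should be essentially exact
assert np.linalg.norm(x - x0) / np.linalg.norm(x0) < 1e-2
```

Dropping the `onsager` term yields ordinary iterative soft thresholding, which converges far more slowly; the correction is what makes the residual behave like Gaussian noise and the state evolution analysis apply.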
Bayesian compressive sensing via belief propagation
IEEE Trans. Signal Processing, 2010
Cited by 125 (19 self)
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform approximate Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast encoding and decoding are provided by sparse encoding matrices, which also improve BP convergence by reducing the presence of loops in the graph. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log²(N)) computation. Finally, sparse encoding matrices and the CS-BP decoding algorithm can be modified to support a variety of signal models and measurement noise.
Generalized Approximate Message Passing for Estimation with Random Linear Mixing
2012
Cited by 123 (18 self)
We consider the estimation of an i.i.d. random vector observed through a linear transform followed by a componentwise, probabilistic (possibly nonlinear) measurement channel. A novel algorithm, called generalized approximate message passing (GAMP), is presented that provides computationally efficient approximate implementations of max-sum and sum-product loopy belief propagation for such problems. The algorithm extends earlier approximate message passing methods to incorporate arbitrary distributions on both the input and output of the transform and can be applied to a wide range of problems in nonlinear compressed sensing and learning. Extending an analysis by Bayati and Montanari, we argue that the asymptotic componentwise behavior of the GAMP method under large, i.i.d. Gaussian transforms is described by a simple set of state evolution (SE) equations. From the SE equations, one can exactly predict the asymptotic value of virtually any componentwise performance metric, including mean-squared error or detection accuracy. Moreover, the analysis is valid for arbitrary input and output distributions, even when the corresponding optimization problems are nonconvex. The results match predictions by Guo and Wang for relaxed belief propagation on large sparse matrices and, in certain instances, also agree with the optimal performance predicted by the replica method. The GAMP methodology thus provides a computationally efficient approach applicable to a large class of non-Gaussian estimation problems, with precise asymptotic performance guarantees.
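To illustrate the state-evolution idea this abstract relies on (a toy scalar recursion, not GAMP itself), the following sketch iterates τ²_{t+1} = σ² + E[(η(X + τ_t Z; ατ_t) − X)²]/δ by Monte Carlo, under an assumed Bernoulli–Gaussian prior and a soft-thresholding denoiser. All parameter values are hypothetical.

```python
import numpy as np

def soft(u, t):
    # soft-thresholding denoiser eta(u; t)
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

rng = np.random.default_rng(1)

# assumed Bernoulli-Gaussian prior: X = 0 w.p. 1-eps, else standard normal
eps, delta, sigma2, alpha = 0.1, 0.5, 1e-4, 1.5
M = 200_000                                  # Monte Carlo samples per SE step
X = rng.standard_normal(M) * (rng.random(M) < eps)
Z = rng.standard_normal(M)

tau2 = sigma2 + np.mean(X**2) / delta        # SE initialization (estimate x^0 = 0)
for t in range(30):
    # componentwise MSE of the denoiser at effective noise level tau
    mse = np.mean((soft(X + np.sqrt(tau2) * Z, alpha * np.sqrt(tau2)) - X) ** 2)
    tau2 = sigma2 + mse / delta              # state-evolution update

# at these parameters the recursion contracts to a small fixed point near sigma2
assert sigma2 < tau2 < 1e-2
```

The fixed point of this scalar recursion is exactly the kind of asymptotic performance prediction the SE equations provide; any componentwise metric can be read off the same Monte Carlo ensemble at convergence.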
Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing
2009
Cited by 77 (9 self)
The replica method is a non-rigorous but widely accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional vector “decouples” as n scalar MAP estimators. The result is a counterpart to Guo and Verdú’s replica analysis of minimum mean-squared error estimation. The replica MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero-norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero-norm-regularized estimation it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for exactly computing various performance metrics, including mean-squared error and sparsity pattern recovery probability.
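The two scalar estimators named at the end of the abstract have simple closed forms. A minimal sketch (threshold values are illustrative, and the notation is not the paper's):

```python
import numpy as np

def soft_threshold(u, lam):
    # scalar lasso (l1-regularized) MAP estimator: argmin_x (x - u)^2 / 2 + lam * |x|
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def hard_threshold(u, t):
    # scalar zero-norm-regularized estimator: keep u if |u| > t, else set to 0
    return np.where(np.abs(u) > t, u, 0.0)

# sanity check: soft_threshold matches a brute-force grid minimization
u, lam = 2.3, 1.0
xs = np.linspace(-5.0, 5.0, 10001)
best = xs[np.argmin(0.5 * (xs - u) ** 2 + lam * np.abs(xs))]
assert abs(best - soft_threshold(u, lam)) < 1e-3

assert np.allclose(soft_threshold(np.array([3.0, -3.0, 0.5]), 1.0), [2.0, -2.0, 0.0])
assert np.allclose(hard_threshold(np.array([3.0, 0.5]), 1.0), [3.0, 0.0])
```

Note the characteristic difference: the soft threshold shrinks every surviving coefficient by λ (a bias), while the hard threshold keeps survivors unchanged but is discontinuous at the threshold.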
Turbo reconstruction of structured sparse signals
Proc. 44th Annual Conf. Information Sciences and Systems, 2010
Cited by 59 (26 self)
This paper considers the reconstruction of structured-sparse signals from noisy linear observations. In particular, the support of the signal coefficients is parameterized by a hidden binary pattern, and a structured probabilistic prior (e.g., Markov random chain/field/tree) is assumed on the pattern. Exact inference is discussed and an approximate inference scheme, based on loopy belief propagation (BP), is proposed. The proposed scheme iterates between exploitation of the observation structure and exploitation of the pattern structure, and is closely related to noncoherent turbo equalization, as used in digital communication receivers. An algorithm that exploits the observation structure is then detailed based on approximate message passing ideas. The application of EXIT charts is discussed, and empirical phase transition plots are calculated for Markov-chain structured sparsity.
Estimation with Random Linear Mixing, Belief Propagation and Compressed Sensing
2010
Cited by 43 (10 self)
We apply Guo and Wang’s relaxed belief propagation (BP) method to the estimation of a random vector from linear measurements followed by a componentwise probabilistic measurement channel. Relaxed BP uses a Gaussian approximation in standard BP to obtain significant computational savings for dense measurement matrices. The main contribution of this paper is to extend the relaxed BP method and analysis to general (non-AWGN) output channels. Specifically, we present detailed equations for implementing relaxed BP for general channels and show that relaxed BP has the same asymptotic behavior as standard BP in the large sparse limit, as predicted by Guo and Wang’s state evolution (SE) equations. Applications are presented to compressed sensing and estimation with bounded noise.
Graphical Models Concepts in Compressed Sensing
Cited by 37 (2 self)
This paper surveys recent work in applying ideas from graphical models and message passing algorithms to solve large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1-penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk. This paper will appear as a chapter in a book on ‘Compressed Sensing’ edited by Yonina Eldar and Gitta Kutyniok.
The noisesensitivity phase transition in compressed sensing
2010
Cited by 32 (3 self)
Consider the noisy underdetermined system of linear equations y = Ax₀ + z₀, with n × N measurement matrix A, n < N, and Gaussian white noise z₀ ∼ N(0, σ²I). Both y and A are known, both x₀ and z₀ are unknown, and we seek an approximation to x₀. When x₀ has few nonzeros, useful approximations are often obtained by ℓ1-penalized ℓ2 minimization, in which the reconstruction x̂₁,λ solves min ‖y − Ax‖₂²/2 + λ‖x‖₁. Evaluate performance by mean-squared error (MSE = E‖x̂₁,λ − x₀‖₂²/N). Consider matrices A with i.i.d. Gaussian entries and a large-system limit in which n, N → ∞ with n/N → δ and k/n → ρ. Call the ratio MSE/σ² the noise sensitivity. We develop formal expressions for the MSE of x̂₁,λ and evaluate its worst-case formal noise sensitivity over all types of k-sparse signals. The phase space 0 ≤ δ, ρ ≤ 1 is partitioned by the curve ρ = ρMSE(δ) into two regions: formal noise sensitivity is bounded throughout the region ρ < ρMSE(δ) and unbounded throughout the region ρ > ρMSE(δ). The phase boundary ρ = ρMSE(δ) is identical to the previously known phase transition curve for equivalence of ℓ1 and ℓ0 minimization in the k-sparse noiseless case. Hence a single phase …
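To make the quantities in this abstract concrete, here is a small numerical sketch (not the paper's formal-MSE calculus): it solves the ℓ1-penalized problem above by ISTA, a standard proximal-gradient method, at a point (δ, ρ) = (0.5, 0.1) below the phase boundary, and checks that the empirical noise sensitivity MSE/σ² stays moderate. The problem sizes, signal scale, and choice of λ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, k, sigma = 300, 600, 30, 0.1              # delta = n/N = 0.5, rho = k/n = 0.1
A = rng.standard_normal((n, N)) / np.sqrt(n)    # i.i.d. Gaussian measurement matrix
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = 3.0 * rng.standard_normal(k)
y = A @ x0 + sigma * rng.standard_normal(n)     # noisy measurements

lam = 2 * sigma                                 # hypothetical regularization level
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the smooth part

x = np.zeros(N)
for _ in range(2000):                           # ISTA: gradient step, then soft threshold
    g = x - (A.T @ (A @ x - y)) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

# empirical noise sensitivity MSE / sigma^2; bounded in this region of phase space
noise_sensitivity = np.mean((x - x0) ** 2) / sigma ** 2
assert 0 < noise_sensitivity < 10
```

Repeating this experiment at a point above the curve ρ = ρMSE(δ) (e.g., more nonzeros per measurement) shows the ratio MSE/σ² growing without bound as σ → 0, which is the phase transition the abstract describes.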