Results 1–10 of 107
Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing
, 2009
Cited by 77 (9 self)
The replica method is a non-rigorous but widely accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional vector “decouples” as n scalar MAP estimators. The result is a counterpart to Guo and Verdú’s replica analysis of minimum mean-squared error estimation. The replica MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero-norm-regularized estimation. In the case of lasso estimation the scalar estimator reduces to a soft-thresholding operator, and for zero-norm-regularized estimation it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for exactly computing various performance metrics, including mean-squared error and sparsity pattern recovery probability.
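The two scalar operators this abstract mentions are standard and can be sketched directly; the threshold value `t` below is a placeholder for the regularization-dependent level derived in the paper, not the paper's actual formula.

```python
import numpy as np

def soft_threshold(x, t):
    # Shrink each entry toward zero by t; entries with |x| <= t map to 0.
    # This is the scalar map that lasso estimation reduces to.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, t):
    # Keep entries with |x| > t unchanged and zero out the rest.
    # This is the scalar map for zero-norm-regularized estimation.
    return np.where(np.abs(x) > t, x, 0.0)

x = np.array([-2.0, -0.5, 0.0, 0.3, 1.5])
print(soft_threshold(x, 1.0))
print(hard_threshold(x, 1.0))
```

Note the qualitative difference: soft thresholding biases surviving entries toward zero by `t`, while hard thresholding leaves them untouched.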
Information theoretic bounds for compressed sensing
 IEEE Trans. Inf. Theory
, 2010
Cited by 44 (6 self)
In this paper we derive information-theoretic performance bounds on sensing and reconstruction of sparse phenomena from noisy projections. We consider two settings: output noise models, where the noise enters after the projection, and input noise models, where the noise enters before the projection. We consider two types of distortion for reconstruction: support errors and mean-squared errors. Our goal is to relate the number of measurements, m, and the SNR to the signal sparsity, k, the distortion level, d, and the signal dimension, n. We consider support errors in a worst-case setting. We employ different variations of Fano’s inequality to derive necessary conditions on the number of measurements and SNR required for exact reconstruction. To derive sufficient conditions we develop new insights on max-likelihood analysis based on a novel superposition property. In particular, this property implies that small support errors are the dominant error events. Consequently, our ML analysis does not suffer the conservatism of the union bound and leads to a tighter analysis of max-likelihood. These results provide order-wise tight bounds. For output noise models we show that asymptotically an SNR of Θ(log(n)) together with Θ(k log(n/k)) measurements is necessary and sufficient for exact support recovery. Furthermore, if a small fraction of support errors …
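The Θ(k log(n/k)) scaling from this abstract is easy to tabulate; the helper below is only a sketch of the order-wise quantity — the hidden constants depend on SNR and distortion and are not captured here.

```python
import math

def measurement_scale(k, n):
    # Order-wise measurement requirement k * log(n / k) for exact support
    # recovery (output noise model). Constant factors are deliberately omitted.
    return k * math.log(n / k)

# The requirement grows only logarithmically in n for fixed sparsity k:
for n in (1_000, 10_000, 100_000):
    print(n, round(measurement_scale(10, n), 1))
```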
Why Gabor frames? Two fundamental measures of coherence and their role in model selection
 J. Commun. Netw
, 2010
Sampling Bounds for Sparse Support Recovery in the Presence of Noise
Cited by 33 (4 self)
It is well known that the support of a sparse signal can be recovered from a small number of random projections. However, in the presence of noise all known sufficient conditions require that the per-sample signal-to-noise ratio (SNR) grows without bound with the dimension of the signal. If the noise is due to quantization of the samples, this means that an unbounded rate per sample is needed. In this paper, it is shown that an unbounded SNR is also a necessary condition for perfect recovery, but any fraction (less than one) of the support can be recovered with bounded SNR. This means that a finite rate per sample is sufficient for partial support recovery. Necessary and sufficient conditions are given for both stochastic and non-stochastic signal models. This problem arises in settings such as compressive sensing, model selection, and signal denoising.
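The partial-recovery criterion in this abstract — the fraction of the true support contained in an estimate — can be made concrete with a small helper; this is a generic sketch of that metric, not the paper's formal definition.

```python
def support_fraction_recovered(true_support, est_support):
    # Fraction of the true support that appears in the estimated support:
    # 1.0 means exact recovery of the support, values below 1.0 correspond
    # to the partial-recovery regime achievable with bounded SNR.
    true_set, est_set = set(true_support), set(est_support)
    return len(true_set & est_set) / len(true_set)

print(support_fraction_recovered([1, 4, 7, 9], [1, 4, 8, 9]))  # 0.75
```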
On–off random access channels: A compressed sensing framework
, 2009
Cited by 31 (6 self)
This paper considers a simple on–off random multiple access channel, where n users communicate simultaneously to a single receiver over m degrees of freedom. Each user transmits with probability λ, where typically λn < m ≪ n, and the receiver must detect which users transmitted. We show that when the codebook has i.i.d. Gaussian entries, detecting which users transmitted is mathematically equivalent to a certain sparsity detection problem considered in compressed sensing. Using recent sparsity results, we derive upper and lower bounds on the capacities of these channels. We show that common sparsity detection algorithms, such as lasso and orthogonal matching pursuit (OMP), can be used as tractable multiuser detection schemes and have significantly better performance than single-user detection. These methods do achieve some near–far resistance but, at high signal-to-noise ratios (SNRs), may achieve capacities far below optimal maximum likelihood detection. We then present a new algorithm, called sequential OMP, that illustrates that iterative detection combined with power ordering or power shaping can significantly improve the high-SNR performance. Sequential OMP is analogous to successive interference cancellation in the classic multiple access channel. Our results thereby provide insight into the roles of power control and multiuser detection in random-access signalling.
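The OMP detector this abstract refers to is standard and can be sketched in a few lines; the noiseless setup below, with an i.i.d. Gaussian codebook and the support of the estimate identifying the active users, is an illustration of the idea rather than the paper's sequential variant.

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily select k columns of A to explain y.
    m, n = A.shape
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit on the selected columns, then refresh the residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x, sorted(support)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # i.i.d. Gaussian codebook
x_true = np.zeros(100)
active = [3, 27, 81]                               # users that transmitted
x_true[active] = [1.0, -1.2, 0.8]
y = A @ x_true                                     # noiseless for illustration
x_hat, found = omp(A, y, k=3)
print(found)
```

Because the least-squares step makes the residual orthogonal to all selected columns, each iteration picks a fresh column — the greedy analogue of interference cancellation mentioned in the abstract.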
Support recovery with sparsely sampled free random matrices
 in Proc. IEEE Int. Symp. Inf. Theory, Saint
, 2011
Cited by 27 (1 self)
Consider a Bernoulli-Gaussian complex n-vector whose components are Vi = XiBi, with Xi ∼ CN(0, Px) and binary Bi mutually independent and i.i.d. across i. This random q-sparse vector is multiplied by a square random matrix U, and a randomly chosen subset, of average size np, p ∈ [0, 1], of the resulting vector components is then observed in additive Gaussian noise. We extend the scope of conventional noisy compressive sampling models, where U is typically a matrix with i.i.d. components, to allow U satisfying a certain freeness condition. This class of matrices encompasses Haar matrices and other unitarily invariant matrices. We use the replica method and the decoupling principle of Guo and Verdú, as well as a number of information-theoretic bounds, to study the input-output mutual information and the support recovery error rate in the limit n → ∞. We also extend the scope of the large deviation approach of Rangan, Fletcher and Goyal and characterize the performance of a class of estimators encompassing thresholded linear MMSE and ℓ1 relaxation.
Optimal phase transitions in compressed sensing
 IEEE Trans. Inf. Theory
, 2012
Cited by 26 (3 self)
Compressed sensing deals with efficient recovery of analog signals from linear encodings. This paper presents a statistical study of compressed sensing by modeling the input signal as an i.i.d. process with known distribution. Three classes of encoders are considered, namely optimal nonlinear, optimal linear, and random linear encoders. Focusing on optimal decoders, we investigate the fundamental tradeoff between measurement rate and reconstruction fidelity, gauged by error probability and noise sensitivity in the absence and presence of measurement noise, respectively. The optimal phase-transition threshold is determined as a functional of the input distribution and compared to suboptimal thresholds achieved by popular reconstruction algorithms. In particular, we show that Gaussian sensing matrices incur no penalty on the phase-transition threshold with respect to optimal nonlinear encoding. Our results also provide a rigorous justification of previous results based on replica heuristics in the weak-noise regime.
Compressive MUSIC: revisiting the link between compressive sensing and array signal processing
 IEEE Trans. on Information Theory
, 2012
Cited by 23 (4 self)
The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share a common sparse support. Even though MMV problems have traditionally been addressed within the context of sensor array signal processing, the recent trend is to apply compressive sensing (CS) due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing fails. However, CS guarantees accurate recovery only in a probabilistic manner, which often shows inferior performance in the regime where the traditional array signal processing approaches succeed. The apparent dichotomy between probabilistic CS and deterministic sensor array signal processing has not been fully understood. The main contribution of the present article is a unified approach that revisits the link between CS and array signal processing first unveiled in the mid-1990s by Feng and Bresler. The new algorithm, which we call compressive MUSIC, identifies part of the support using CS, after which the remaining support is estimated using a novel generalized MUSIC criterion. Using a large-system MMV model, we show that compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than existing CS methods and that it can approach the optimal bound with a finite number of snapshots even in cases where the signals are linearly dependent. Index Terms—Compressive sensing, multiple measurement vector problem, joint sparsity, MUSIC, SOMP, thresholding.
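The subspace test underlying MUSIC-type support recovery in the MMV setting can be sketched in the noiseless, linearly independent case: columns of the sensing matrix belonging to the support lie in the signal subspace spanned by the snapshots. This is only the classical criterion, not the paper's generalized one, and the dimensions below are arbitrary toy values.

```python
import numpy as np

def music_support(A, Y, k):
    # Signal subspace: top-k left singular vectors of the snapshot matrix Y = A X.
    U, _, _ = np.linalg.svd(Y)
    Us = U[:, :k]
    # Columns of A in the support lie in that subspace, so their residual after
    # projection is (near-)zero; the k smallest residuals give the support.
    resid = np.linalg.norm(A - Us @ (Us.T @ A), axis=0)
    return sorted(np.argsort(resid)[:k].tolist())

rng = np.random.default_rng(1)
m, n, k, snapshots = 8, 20, 3, 5
A = rng.standard_normal((m, n))
X = np.zeros((n, snapshots))
support = [2, 9, 15]
X[support] = rng.standard_normal((k, snapshots))  # full-rank row-sparse signals
Y = A @ X                                         # noiseless MMV observations
print(music_support(A, Y, k))
```

When the rows of X on the support are linearly dependent, the signal subspace has rank below k and this plain criterion fails — exactly the gap the generalized MUSIC criterion in the paper addresses.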
Sample complexity for 1bit compressed sensing and sparse classification
 in Proc. IEEE Int. Symp. Inf. Theory (ISIT)
, 2010