Results 1-10 of 27
Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
2012
Cited by 41 (5 self)
Abstract
Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula that characterizes the allowed undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for compressed sensing, which are here generalized to employ denoising operators besides the traditional scalar soft thresholding denoiser. This paper gives several examples, including scalar denoisers not derived from convex penalization (the firm shrinkage nonlinearity and the minimax nonlinearity) and also nonscalar denoisers (block thresholding, monotone regression, and total variation minimization). Let the variables ε = k/N and δ = n/N denote the generalized sparsity and undersampling fractions for sampling the k-generalized-sparse N-vector x0 according to y = Ax0. Here A is an n × N measurement matrix whose entries are iid standard Gaussian. The formula states that the phase transition curve δ = δ(ε) separating successful from unsuccessful reconstruction of x0 …
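The scalar denoisers named in this abstract have simple closed forms. A minimal sketch of two of them, soft thresholding and the firm shrinkage nonlinearity of Gao and Bruce (function and parameter names here are our own, not the paper's):

```python
def soft_threshold(x, tau):
    """Soft thresholding: shrink x toward zero by tau, clipping to zero."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

def firm_shrinkage(x, tau1, tau2):
    """Firm shrinkage (Gao & Bruce): zero out |x| <= tau1, pass |x| >= tau2
    through unchanged, and interpolate linearly in between (tau1 < tau2)."""
    ax = abs(x)
    if ax <= tau1:
        return 0.0
    if ax >= tau2:
        return x
    sign = 1.0 if x > 0 else -1.0
    return sign * tau2 * (ax - tau1) / (tau2 - tau1)
```

Firm shrinkage interpolates between hard and soft thresholding: unlike soft thresholding, large coefficients are left unbiased.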
Optimal phase transitions in compressed sensing
IEEE Trans. Inf. Theory, 2012
Cited by 26 (3 self)
Abstract
Compressed sensing deals with efficient recovery of analog signals from linear encodings. This paper presents a statistical study of compressed sensing by modeling the input signal as an i.i.d. process with known distribution. Three classes of encoders are considered, namely optimal nonlinear, optimal linear, and random linear encoders. Focusing on optimal decoders, we investigate the fundamental tradeoff between measurement rate and reconstruction fidelity, gauged by error probability and noise sensitivity in the absence and presence of measurement noise, respectively. The optimal phase-transition threshold is determined as a functional of the input distribution and compared to suboptimal thresholds achieved by popular reconstruction algorithms. In particular, we show that Gaussian sensing matrices incur no penalty on the phase-transition threshold with respect to optimal nonlinear encoding. Our results also provide a rigorous justification of previous results based on replica heuristics in the weak-noise regime.
Universality in Polytope Phase Transitions and Message Passing Algorithms
2012
Cited by 24 (4 self)
Abstract
We consider a class of nonlinear mappings F_{A,N} in R^N indexed by symmetric random matrices A ∈ R^{N×N} with independent entries. Within spin glass theory, special cases of these mappings correspond to iterating the TAP equations and were studied by Erwin Bolthausen. Within information theory, they are known as 'approximate message passing' algorithms. We study the high-dimensional (large N) behavior of the iterates of F for polynomial functions F, and prove that it is universal, i.e. it depends only on the first two moments of the entries of A, under a sub-Gaussian tail condition. As an application, we prove the universality of a certain phase transition arising in polytope geometry and compressed sensing. This solves, for a broad class of random projections, a conjecture by David Donoho and Jared Tanner.
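The message-passing iterations this abstract refers to can be sketched concretely in the compressed sensing setting: AMP with scalar soft thresholding, including the Onsager correction term that distinguishes it from plain iterative thresholding. The threshold policy below (a fixed multiple of the residual's standard deviation) is a common heuristic, not the tuning analyzed in the paper, and the problem sizes are arbitrary:

```python
import numpy as np

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(y, A, iters=50):
    """Approximate Message Passing with scalar soft thresholding.
    The Onsager term (b * z) added to the residual is what makes the
    iterates behave like denoising in i.i.d. Gaussian noise."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        tau = 2.0 * np.std(z)            # heuristic threshold level
        x = soft(x + A.T @ z, tau)
        b = np.count_nonzero(x) / n      # Onsager correction coefficient
        z = y - A @ x + b * z
    return x

rng = np.random.default_rng(0)
n, N, k = 100, 200, 10                   # well inside the success phase
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
x_hat = amp(A @ x0, A)
rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
```

With δ = n/N = 0.5 and ε = k/N = 0.05, this operating point sits below the soft-thresholding phase transition curve, so the noiseless recovery should succeed.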
The squared-error of generalized LASSO: A precise analysis
In 51st Annual Allerton Conference on Communication, Control, and Computing
Cited by 14 (6 self)
Abstract
We consider the problem of estimating an unknown signal x0 from noisy linear observations y = Ax0 + z ∈ R^m. In many practical instances of this problem, x0 has a certain structure that can be captured by a structure-inducing function f(·). For example, the ℓ1 norm can be used to encourage a sparse solution. To estimate x0 with the aid of a convex f(·), we consider three variations of the widely used LASSO estimator and provide sharp characterizations of their performance. Our study falls under a generic framework, where the entries of the measurement matrix A and the noise vector z have zero-mean normal distributions with variances 1 and σ², respectively. For the LASSO estimator x∗, we ask: "What is the precise estimation error as a function of the noise level σ, the number of observations m, and the structure of the signal?" In particular, we attempt to calculate the Normalized Square Error (NSE), defined as ‖x∗ − x0‖₂²/σ². We show that the structure of the signal x0 and the choice of the function f(·) enter the error formulae through the summary parameters D_f(x0, R+) and D_f(x0, λ), which are defined as the "Gaussian squared-distances" to the subdifferential cone and to the λ-scaled subdifferential of f at x0, respectively. The first estimator assumes a priori knowledge of f(x0) and is given by argmin_x {‖y − Ax‖₂ subject to f(x) ≤ f(x0)}. We prove that its worst-case NSE is achieved when σ → 0 and concentrates around D_f(x0, R+)/(m − D_f(x0, R+)). Secondly, we consider argmin_x {‖y − Ax‖₂ + λ f(x)}, for …
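The quantities studied here can be probed numerically with a basic solver. The sketch below uses proximal gradient descent (ISTA) on the standard squared-loss LASSO min_x ½‖y − Ax‖² + λ‖x‖₁ with f the ℓ1 norm; this is an illustrative stand-in for the estimators above (which use the unsquared residual norm), and the regularization level is a textbook choice, not the paper's tuning:

```python
import numpy as np

def ista_lasso(y, A, lam, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||y - A x||^2 + lam*||x||_1.
    Step size 1/L with L = ||A||_2^2, an upper bound on the Lipschitz
    constant of the smooth part's gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # prox step
    return x

rng = np.random.default_rng(1)
m, N, k, sigma = 80, 200, 5, 0.1
A = rng.standard_normal((m, N))                 # unit-variance entries
x0 = np.zeros(N)
x0[:k] = 1.0                                    # k-sparse ground truth
y = A @ x0 + sigma * rng.standard_normal(m)
x_hat = ista_lasso(y, A, lam=2.0 * sigma * np.sqrt(2 * np.log(N)))
nse = np.sum((x_hat - x0) ** 2) / sigma ** 2    # normalized squared error
```

The computed `nse` is the empirical counterpart of the quantity the paper characterizes through D_f(x0, λ).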
Subsampling at information theoretically optimal rates
IEEE Intl. Symp. on Inform. Theory, 2012
Cited by 10 (2 self)
Abstract
We study the problem of sampling a random signal with sparse support in the frequency domain. Shannon famously considered a scheme that instantaneously samples the signal at equispaced times. He proved that the signal can be reconstructed as long as the sampling rate exceeds twice the bandwidth (the Nyquist rate). Candès, Romberg, and Tao introduced a scheme that acquires instantaneous samples of the signal at random times. They proved that the signal can be uniquely and efficiently reconstructed, provided the sampling rate exceeds the frequency support of the signal, times logarithmic factors. In this paper we consider a probabilistic model for the signal, and a sampling scheme inspired by the idea of spatial coupling in coding theory. Namely, we propose to acquire non-instantaneous samples at random times. Mathematically, this is implemented by acquiring a small random subset of Gabor coefficients. We show empirically that this scheme achieves correct reconstruction as soon as the sampling rate exceeds the frequency support of the signal, thus reaching the information-theoretic limit.
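The Shannon/Nyquist baseline recalled in this abstract is easy to demonstrate numerically: sampling a tone below twice its frequency folds it to an alias. The frequencies and rates below are arbitrary choices for illustration; this does not implement the paper's Gabor-based scheme:

```python
import numpy as np

f0 = 5.0  # tone frequency in Hz

# Sampling above the Nyquist rate 2*f0: the spectral peak lands at 5 Hz.
fs_good = 50
t = np.arange(fs_good) / fs_good   # one second of equispaced samples
peak_good = int(np.argmax(np.abs(np.fft.rfft(np.cos(2 * np.pi * f0 * t)))))

# Sampling below the Nyquist rate: the 5 Hz tone aliases to |f0 - fs| = 3 Hz.
fs_bad = 8
t = np.arange(fs_bad) / fs_bad
peak_bad = int(np.argmax(np.abs(np.fft.rfft(np.cos(2 * np.pi * f0 * t)))))
```

Because each record spans exactly one second, rfft bin indices coincide with integer frequencies in Hz, so the aliasing shows up directly as a shifted peak index.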
Minimum Complexity Pursuit for Universal Compressed Sensing
2012
Cited by 8 (4 self)
Abstract
The nascent field of compressed sensing is founded on the fact that high-dimensional signals with "simple structure" can be recovered accurately from just a small number of randomized samples. Several specific kinds of structure have been explored in the literature, from sparsity and group sparsity to low-rankedness. However, two fundamental questions have been left unanswered, namely: What are the general abstract meanings of "structure" and "simplicity"? And do there exist universal algorithms for recovering such simple structured objects from fewer samples than their ambient dimension? In this paper, we address these two questions. Using algorithmic information theory tools such as Kolmogorov complexity, we provide a unified definition of structure and simplicity. Leveraging this new definition, we develop and analyze an abstract algorithm for signal recovery motivated by Occam's Razor. Minimum complexity pursuit (MCP) requires just O(κ log n) randomized samples to recover a signal of complexity κ and ambient dimension n. We also discuss the performance of MCP in the presence of measurement noise and with approximately simple signals.
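Kolmogorov complexity itself is uncomputable, but the intuition that "simple structure" means "short description" can be illustrated with an off-the-shelf compressor as a crude, computable upper-bound proxy. Using zlib here is our choice for illustration, not something the paper proposes:

```python
import os
import zlib

def description_length(data: bytes) -> int:
    """Length of a zlib-compressed encoding of `data`: a computable
    (and loose) upper-bound proxy for its Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

structured = b"01" * 500           # highly structured 1000-byte string
incompressible = os.urandom(1000)  # random bytes: no exploitable structure
```

The structured string compresses to a handful of bytes while the random one does not, mirroring the complexity gap that lets MCP recover "simple" signals from few samples.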
Compressive CFAR radar detection
In Proc. IEEE Radar Conference (RADAR), 2012
Cited by 7 (4 self)
Abstract
In this paper we develop the first Compressive Sensing (CS) adaptive radar detector. We propose three novel architectures and demonstrate how a classical Constant False Alarm Rate (CFAR) detector can be combined with ℓ1-norm minimization. Using asymptotic arguments and the Complex Approximate Message Passing (CAMP) algorithm, we characterize the statistics of the ℓ1-norm reconstruction error and derive closed-form expressions for both the detection and false alarm probabilities. We support our theoretical findings with a range of experiments that show that our theoretical conclusions hold even in non-asymptotic settings. We also report on the results from a radar measurement campaign, where we designed ad hoc transmitted waveforms to obtain a set of CS frequency measurements. We compare the performance of our new detection schemes using Receiver Operating Characteristic (ROC) curves.
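The classical CFAR ingredient referenced here is standard. A minimal cell-averaging CFAR sketch, with window sizes and an exponential (square-law-detected) noise model chosen for illustration rather than taken from the paper's architectures:

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-3):
    """Cell-averaging CFAR: for each cell under test, estimate the noise
    level from n_train training cells on each side (skipping n_guard guard
    cells), and declare a detection when the cell exceeds alpha times that
    estimate. alpha is set for the target false-alarm rate under
    exponentially distributed noise power."""
    n_ref = 2 * n_train
    alpha = n_ref * (pfa ** (-1.0 / n_ref) - 1.0)
    hits = []
    for i in range(n_train + n_guard, len(power) - n_train - n_guard):
        left = power[i - n_guard - n_train : i - n_guard]
        right = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise = (left.sum() + right.sum()) / n_ref
        if power[i] > alpha * noise:
            hits.append(i)
    return hits

rng = np.random.default_rng(2)
power = rng.exponential(1.0, 300)  # noise-only range profile
power[150] += 100.0                # strong target in range cell 150
detections = ca_cfar(power)
```

The adaptive threshold tracks the local noise floor, which is exactly the property the paper carries over to the statistics of the ℓ1 reconstruction error.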
Eldar, "Conditions for Target Recovery in Spatial Compressive Sensing for MIMO Radar"
2013
Cited by 5 (0 self)
Abstract
We study compressive sensing in the spatial domain for target localization in terms of direction of arrival (DOA), using multiple-input multiple-output (MIMO) radar. A sparse localization framework is proposed for a MIMO array in which transmit/receive elements are placed at random. This makes it possible to dramatically reduce the number of elements, while still attaining performance comparable to that of a filled (Nyquist) array. Leveraging properties of a (structured) random measurement matrix, we develop a novel bound on the coherence of the measurement matrix, and we obtain conditions under which the measurement matrix satisfies the so-called isotropy property. The coherence and isotropy concepts are used to establish uniform and non-uniform recovery guarantees, respectively, for target localization using spatial compressive sensing. In particular, non-uniform recovery is guaranteed if the number of degrees of freedom (the product of the numbers of transmit and receive elements) scales with the number of targets times a squared logarithmic factor in a parameter that is proportional to the array aperture and determines the angle resolution. The significance of this logarithmic dependence on the aperture parameter is that the proposed framework enables high resolution with a small number of MIMO radar elements. This is in contrast with a filled virtual MIMO array, where the product of the numbers of transmit and receive elements scales linearly with the aperture. Index Terms: compressive sensing, MIMO radar, random arrays, direction of arrival estimation.
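The coherence quantity on which these guarantees rest is simple to compute for any measurement matrix. A generic Gaussian matrix stands in below for the structured random array manifold the paper actually analyzes:

```python
import numpy as np

def coherence(A):
    """Mutual coherence: the largest absolute inner product between
    distinct, unit-normalized columns of A. Small coherence underpins
    uniform sparse-recovery guarantees."""
    Anorm = A / np.linalg.norm(A, axis=0)
    G = np.abs(Anorm.T @ Anorm)   # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)      # ignore trivial self inner products
    return float(G.max())

rng = np.random.default_rng(3)
mu = coherence(rng.standard_normal((64, 256)))
```

For a 64 × 256 Gaussian matrix the coherence is well below 1, which is the regime where coherence-based recovery conditions bite.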
Asymptotically Exact Denoising in Relation to Compressed Sensing
Cited by 2 (0 self)
Abstract
We consider the denoising problem where we wish to estimate a structured signal x0 from corrupted observations y = x0 + z. Typical structures include sparsity, block sparsity, and low-rankness. We use a structure-inducing convex function f and solve min_x (1/2)‖y − x‖₂² + λ f(x) to estimate x0. For example, f(·) is the ℓ1 norm for sparse vectors, the ℓ1–ℓ2 norm for block-sparse signals, and the nuclear norm for low-rank matrices. When the noise vector z is i.i.d. Gaussian, we show that the normalized estimation error (MSE) of the optimally tuned problem coincides with the compressed sensing phase transitions, i.e., with the number Δ_f(x0) such that one needs m > Δ_f(x0) compressed observations Ax0 ∈ R^m to recover x0 by solving min_{Ax=Ax0} f(x). Δ_f(x0) can be given as an explicit formula based on the subdifferential of f(·) at x0. We then connect our results to the generalized LASSO problem, in which we have m noisy compressed observations y = Ax0 + z ∈ R^m and solve min_{f(x)≤f(x0)} ‖y − Ax‖₂². We show that certain properties of …
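For f the ℓ1 norm, the denoising problem above has the closed-form soft-thresholding solution, which can be checked against the objective directly. This is a toy numeric sanity check of that closed form, not the paper's analysis:

```python
import numpy as np

def denoise_l1(y, lam):
    """Solve min_x 0.5*||y - x||_2^2 + lam*||x||_1. The problem separates
    coordinate-wise and the minimizer is soft thresholding of y at lam."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def objective(x, y, lam):
    return 0.5 * np.sum((y - x) ** 2) + lam * np.sum(np.abs(x))

rng = np.random.default_rng(4)
y = rng.standard_normal(50)
lam = 0.7
x_star = denoise_l1(y, lam)

# The closed form should beat every random perturbation of itself.
best_perturbed = min(
    objective(x_star + 0.1 * rng.standard_normal(50), y, lam)
    for _ in range(200)
)
```

Because the objective is strictly convex in its quadratic part, x_star is the unique global minimizer, so every perturbed point evaluates strictly higher.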
Variational Bayesian Algorithm for Quantized Compressed Sensing
2013
Cited by 1 (0 self)
Abstract
Compressed sensing (CS) concerns the recovery of high-dimensional signals from their low-dimensional linear measurements under a sparsity prior, and digital quantization of the measurement data is inevitable in practical implementations of CS algorithms. In the existing literature, the quantization error is typically modeled as additive noise, and the multi-bit and 1-bit quantized CS problems are dealt with separately using different treatments and procedures. In this paper, a novel variational Bayesian inference based CS algorithm is presented, which unifies the multi-bit and 1-bit CS processing and is applicable to various cases of noiseless/noisy environments and unsaturated/saturated quantizers. By decoupling the quantization error from the measurement noise, the quantization error is modeled as a random variable and estimated jointly with the signal being recovered. This novel characterization of the quantization error results in superior performance of the algorithm, which is demonstrated by extensive simulations in comparison with state-of-the-art methods for both multi-bit and 1-bit CS problems.
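The additive-noise view of quantization that this paper moves beyond can be stated in two lines: an unsaturated uniform quantizer with step Δ perturbs each measurement by at most Δ/2. The step size below is an arbitrary illustrative choice:

```python
import numpy as np

def uniform_quantize(v, delta):
    """Mid-tread uniform quantizer with step delta. For inputs inside the
    unsaturated range, the per-entry quantization error is at most delta/2,
    which is what the additive-noise model exploits."""
    return delta * np.round(v / delta)

rng = np.random.default_rng(5)
v = rng.standard_normal(1000)      # stand-in for measurements A @ x0
delta = 0.25
err = uniform_quantize(v, delta) - v
```

Treating `err` as bounded additive noise is the conventional multi-bit approach; the paper instead estimates it jointly with the signal.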