Results 1–10 of 123
Simultaneously Structured Models with Application to Sparse and Low-rank Matrices
, 2014
Abstract

Cited by 41 (5 self)
The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known, and allow for recovery using an order-wise optimal number of measurements (e.g., the ℓ1 norm for sparsity, the nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with these norms, then we can do no better, order-wise, than an algorithm that exploits only one of the present structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a non-convex formulation of the problem can recover the model from very few measurements, on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the ℓ1 and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and non-convex recovery problems in this case. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give performance bounds for problems such as sparse phase retrieval and low-rank tensor completion.
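As a rough numerical illustration of the combined objective discussed in this abstract (a NumPy sketch, not the authors' code; the weights lam1 and lam2 are illustrative placeholders):

```python
import numpy as np

# Build a matrix that is simultaneously sparse and low-rank:
# the outer product of two sparse vectors has rank 1 and few nonzeros.
rng = np.random.default_rng(0)
u = np.zeros(20); u[:3] = rng.standard_normal(3)
v = np.zeros(20); v[:3] = rng.standard_normal(3)
X = np.outer(u, v)  # rank 1, at most 9 nonzero entries

# The two structure-promoting norms mentioned in the abstract.
l1_norm = np.abs(X).sum()                                 # promotes sparsity
nuclear_norm = np.linalg.svd(X, compute_uv=False).sum()   # promotes low rank

# A weighted combination of the two relaxations; the abstract's point is
# that minimizing such a combination cannot beat, order-wise, using
# either norm alone.
lam1, lam2 = 1.0, 1.0  # illustrative weights
combined = lam1 * l1_norm + lam2 * nuclear_norm

print(np.linalg.matrix_rank(X), np.count_nonzero(X))
```

The example only evaluates the combined objective at one structured point; the paper's result concerns minimizing it subject to linear measurement constraints.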
Expectation-Maximization Gaussian-Mixture Approximate Message Passing
Abstract

Cited by 40 (12 self)
Abstract—When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s nonzero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were known a priori, one could use efficient approximate message passing (AMP) techniques for nearly minimum-MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like Lasso—which is nearly minimax optimal—at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal—according to the learned distribution—using AMP. In particular, we model the nonzero distribution as a Gaussian mixture and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments confirm the state-of-the-art performance of our approach on a range of signal classes.
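The scalar denoising step at the heart of such an AMP iteration can be sketched as the MMSE estimator under a Gaussian-mixture prior (an illustration of the idea only, not the paper's EM-GM-AMP implementation; the function name and interface are hypothetical):

```python
import numpy as np

def gm_mmse_denoise(r, tau, weights, means, variances):
    """MMSE estimate of x from r = x + N(0, tau), where x is drawn from the
    Gaussian mixture sum_k weights[k] * N(means[k], variances[k])."""
    r = np.asarray(r, dtype=float)[..., None]
    w = np.asarray(weights); m = np.asarray(means); v = np.asarray(variances)
    s = v + tau  # marginal variance of r under each mixture component
    # Responsibilities: posterior probability of each mixture component.
    log_like = -0.5 * (r - m) ** 2 / s - 0.5 * np.log(2 * np.pi * s)
    post = w * np.exp(log_like - log_like.max(axis=-1, keepdims=True))
    post /= post.sum(axis=-1, keepdims=True)
    # Per-component posterior mean, then mix.
    comp_mean = (v * r + tau * m) / s
    return (post * comp_mean).sum(axis=-1)

# With a single zero-mean component this reduces to the Wiener shrinkage
# rule v/(v + tau) * r.
est = gm_mmse_denoise([2.0], tau=1.0, weights=[1.0], means=[0.0], variances=[3.0])
print(est)  # 3/(3+1) * 2 = 1.5
```

In the paper's setting, the mixture parameters themselves are unknown and are re-estimated by an EM outer loop; the sketch above fixes them by hand.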
Y. Eldar, Noise folding in compressed sensing
 IEEE Signal Process. Lett
, 2011
Abstract

Cited by 24 (4 self)
Abstract—The literature on compressed sensing has focused almost entirely on settings where the signal is noiseless and the measurements are contaminated by noise. In practice, however, the signal itself is often subject to random noise prior to measurement. We briefly study this setting and show that, for the vast majority of measurement schemes employed in compressed sensing, the two models are equivalent, with the important difference that the signal-to-noise ratio (SNR) is divided by a factor proportional to n/m, where n is the dimension of the signal and m is the number of observations. Since n/m is often large, this leads to noise folding, which can have a severe impact on the SNR. Index Terms—Analog noise versus digital noise, compressed sensing, matching pursuit, noise folding, sparse signals.
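The noise-folding effect described in this abstract can be checked empirically (a Monte Carlo sketch illustrating the claim, not the paper's derivation; the normalization of A is an assumption):

```python
import numpy as np

# Signal noise w enters before measurement: y = A(x + w). For A with
# i.i.d. N(0, 1/m) entries, A @ w behaves like measurement noise whose
# per-sample variance is amplified by roughly n/m relative to w.
rng = np.random.default_rng(1)
n, m, trials = 400, 40, 500
sigma2 = 1.0  # variance of the pre-measurement signal noise

folded_var = 0.0
for _ in range(trials):
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    w = rng.standard_normal(n) * np.sqrt(sigma2)
    folded_var += np.mean((A @ w) ** 2)
folded_var /= trials

print(folded_var, n / m * sigma2)  # empirical variance is close to n/m = 10
```

The factor of 10 here is exactly the n/m = 400/40 amplification the abstract warns about.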
Fixed points of generalized approximate message passing with arbitrary matrices
 in Proc. ISIT
, 2013
Phase Retrieval with Application to Optical Imaging
, 2015
Abstract

Cited by 18 (6 self)
The problem of phase retrieval, i.e., the recovery of a function given the magnitude of its
Breaking the coherence barrier: asymptotic incoherence and asymptotic sparsity in compressed sensing
, 2013
Abstract

Cited by 13 (4 self)
In this paper we bridge the substantial gap between existing compressed sensing theory and its current use in real-world applications. We do so by introducing a new mathematical framework for overcoming the so-called coherence
One-bit compressed sensing with non-Gaussian measurements
, 2013
Abstract

Cited by 13 (3 self)
Abstract. In one-bit compressed sensing, previous results state that sparse signals may be robustly recovered when the measurements are taken using Gaussian random vectors. In contrast to standard compressed sensing, these results are not extendable to natural non-Gaussian distributions without further assumptions, as can be demonstrated by simple counterexamples involving extremely sparse signals. We show that approximately sparse signals that are not extremely sparse can be accurately reconstructed from single-bit measurements sampled according to a sub-Gaussian distribution, and the reconstruction comes as the solution to a convex program.
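A toy simulation of one-bit measurements conveys the intuition (a sketch under Gaussian measurements for illustration; the abstract's contribution is the sub-Gaussian case and a convex program, neither of which this simple correlation estimator covers):

```python
import numpy as np

# For Gaussian rows a_i, the average (1/m) * sum_i sign(<a_i, x>) a_i
# points in the direction of x, so normalizing it estimates x / ||x||.
rng = np.random.default_rng(2)
n, m, k = 100, 5000, 5

x = np.zeros(n); x[:k] = rng.standard_normal(k)  # k-sparse signal
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))
y = np.sign(A @ x)          # one-bit (sign-only) measurements

xhat = A.T @ y / m          # simple correlation estimator
xhat /= np.linalg.norm(xhat)

print(float(xhat @ x))      # close to 1: the estimated direction matches x
```

Note that sign measurements destroy all amplitude information, so only the direction of x is recoverable; this is why one-bit results are stated up to normalization.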
Sub-Nyquist radar via Doppler focusing
 IEEE Transactions on Signal Processing
Abstract

Cited by 10 (5 self)
Abstract—We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets sparsely populated in the radar’s unambiguous time-frequency region. Several past works apply compressed sensing (CS) algorithms to this type of problem but either do not address sample rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here, we describe a sub-Nyquist sampling and recovery approach called Doppler focusing, which addresses all of these problems: it performs low-rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary whose size does not increase with the number of pulses. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement that scales linearly with the number of pulses, obtaining good detection performance even at SNR as low as −25 dB. The recovery is based on the Xampling framework, which allows reduction of the number of samples needed to accurately represent the signal, directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low-rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype. Index Terms—Compressed sensing, rate of innovation, radar, sparse recovery, sub-Nyquist sampling, delay-Doppler estimation. 
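The linear-in-pulses SNR gain claimed in this abstract comes from coherent summation, which can be sketched numerically (an illustration of the scaling only, not the paper's full sub-Nyquist pipeline; the pulse count P, Doppler nu, and PRI tau are illustrative values):

```python
import numpy as np

# Echoes from P pulses acquire a per-pulse phase exp(j*2*pi*nu*p*tau).
# Derotating by the hypothesized Doppler nu and summing adds the signal
# coherently (amplitude ~ P) while noise adds incoherently (~ sqrt(P)),
# so post-focusing SNR grows linearly with P.
rng = np.random.default_rng(3)
P, nu, tau = 64, 0.01, 1.0   # pulses, Doppler frequency, pulse repetition interval
sigma = 1.0                   # per-pulse noise standard deviation

p = np.arange(P)
signal = np.exp(1j * 2 * np.pi * nu * p * tau)  # unit-amplitude target echoes
noise = (rng.standard_normal(P) + 1j * rng.standard_normal(P)) * sigma / np.sqrt(2)

derotate = np.exp(-1j * 2 * np.pi * nu * p * tau)
focused = np.sum((signal + noise) * derotate)

# The signal part sums to exactly P; the noise part has std sigma * sqrt(P).
print(abs(focused), P)
```

Doubling P doubles the coherent signal amplitude but only multiplies the noise amplitude by √2, which is the linear SNR scaling the abstract refers to.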