Expectation-Maximization Gaussian-Mixture Approximate Message Passing
Cited by 41 (15 self)
Abstract: When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's nonzero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were known a priori, one could use efficient approximate message passing (AMP) techniques for nearly minimum-MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like Lasso, which is nearly minimax optimal, at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal, according to the learned distribution, using AMP. In particular, we model the nonzero distribution as a Gaussian mixture, and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments confirm the state-of-the-art performance of our approach on a range of signal classes.
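The abstract above alternates an expectation step with Gaussian-mixture parameter updates. The sketch below is not the paper's EM-AMP algorithm (there the E-step is carried out by AMP on compressive measurements); it is a generic textbook EM iteration for a two-component 1-D Gaussian mixture on toy data, just to illustrate the E-step/M-step split the abstract refers to. All data values and initializations are made up for illustration.

```python
import math

def gauss_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_step(data, w, mus, varis):
    """One EM iteration for a 2-component 1-D Gaussian mixture.
    (Generic EM; the paper replaces this E-step with AMP.)"""
    # E-step: posterior responsibility of each component for each sample
    resp = []
    for x in data:
        px = [w[k] * gauss_pdf(x, mus[k], varis[k]) for k in range(2)]
        z = sum(px)
        resp.append([p / z for p in px])
    # M-step: re-estimate weights, means, variances from responsibilities
    n = [sum(r[k] for r in resp) for k in range(2)]
    w = [n[k] / len(data) for k in range(2)]
    mus = [sum(r[k] * x for r, x in zip(resp, data)) / n[k] for k in range(2)]
    varis = [max(sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, data)) / n[k],
                 1e-6)  # floor the variance to avoid degenerate components
             for k in range(2)]
    return w, mus, varis

# toy 1-D data: two well-separated clusters near 0 and near 10
data = [-0.2, 0.1, 0.0, 9.8, 10.1, 10.3]
w, mus, varis = [0.5, 0.5], [1.0, 8.0], [4.0, 4.0]
for _ in range(20):
    w, mus, varis = em_step(data, w, mus, varis)
# the learned means converge toward the empirical cluster centers
```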
Simultaneously Structured Models with Application to Sparse and Low-rank Matrices
, 2014
Cited by 41 (5 self)
Abstract: The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known, and allow for recovery using an order-wise optimal number of measurements (e.g., the ℓ1 norm for sparsity, the nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with these norms, then we can do no better, order-wise, than an algorithm that exploits only one of the present structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a nonconvex formulation of the problem can recover the model from very few measurements, on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the ℓ1 and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and nonconvex recovery problems in this case. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give performance bounds for problems such as sparse phase retrieval and low-rank tensor completion.
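The abstract's benchmark for the nonconvex formulation is "the degrees of freedom of the matrix." For context, the standard parameter count for a rank-r matrix (from its thin SVD factors; a well-known formula, not taken from this paper) can be computed directly:

```python
def rank_r_dof(n1, n2, r):
    """Degrees of freedom of an n1 x n2 matrix of rank r: r*(n1 + n2 - r),
    counting the parameters of its thin SVD factorization."""
    return r * (n1 + n2 - r)

# a rank-2, 100 x 100 matrix is described by far fewer than 100*100 numbers
print(rank_r_dof(100, 100, 2))  # 396, versus 10000 ambient entries
```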
Y. Eldar, Noise folding in compressed sensing
 IEEE Signal Process. Lett
, 2011
Cited by 23 (4 self)
Abstract: The literature on compressed sensing has focused almost entirely on settings where the signal is noiseless and the measurements are contaminated by noise. In practice, however, the signal itself is often subject to random noise prior to measurement. We briefly study this setting and show that, for the vast majority of measurement schemes employed in compressed sensing, the two models are equivalent, with the important difference that the signal-to-noise ratio (SNR) is divided by a factor proportional to n/m, where n is the dimension of the signal and m is the number of observations. Since n/m is often large, this leads to noise folding, which can have a severe impact on the SNR. Index Terms: analog noise versus digital noise, compressed sensing, matching pursuit, noise folding, sparse signals.
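The abstract describes an SNR penalty growing with the ratio of signal dimension to number of measurements (the extracted text lost the symbols; n/m is the commonly cited form of this factor, assumed here rather than quoted). A quick back-of-the-envelope calculation of the penalty in decibels:

```python
import math

def noise_folding_penalty_db(n, m):
    """SNR loss in dB if noise folding divides the SNR by n/m
    (assumed form of the folding factor; not quoted from the paper)."""
    return 10.0 * math.log10(n / m)

# e.g., n = 1024 signal samples compressed to m = 128 measurements:
# an 8x folding factor costs about 9 dB of SNR
penalty = noise_folding_penalty_db(1024, 128)
```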
Fixed points of generalized approximate message passing with arbitrary matrices
 in Proc. ISIT
, 2013
Breaking the coherence barrier: A new theory for compressed sensing. arXiv:1302.0561
, 2014
Cited by 17 (9 self)
Abstract: This paper provides an important extension of compressed sensing which bridges a substantial gap between …
Phase Retrieval with Application to Optical Imaging
, 2015
Cited by 16 (5 self)
Abstract: The problem of phase retrieval, i.e., the recovery of a function given the magnitude of its …
Breaking the coherence barrier: asymptotic incoherence and asymptotic sparsity in compressed sensing
, 2013
Cited by 15 (3 self)
Abstract: In this paper we bridge the substantial gap between existing compressed sensing theory and its current use in real-world applications. We do so by introducing a new mathematical framework for overcoming the so-called coherence …
One-bit compressed sensing with non-Gaussian measurements
, 2013
Cited by 14 (3 self)
Abstract: In one-bit compressed sensing, previous results state that sparse signals may be robustly recovered when the measurements are taken using Gaussian random vectors. In contrast to standard compressed sensing, these results are not extendable to natural non-Gaussian distributions without further assumptions, as can be demonstrated by simple counterexamples involving extremely sparse signals. We show that approximately sparse signals that are not extremely sparse can be accurately reconstructed from single-bit measurements sampled according to a sub-Gaussian distribution, and the reconstruction comes as the solution to a convex program.
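One-bit measurements keep only the sign of each linear observation, which is why recovery guarantees differ from standard compressed sensing: all magnitude information is discarded. A minimal sketch (the matrix and signal below are made-up toy values, not from the paper):

```python
def one_bit_measure(A, x):
    """One-bit compressive measurements: keep only the sign of each <a_i, x>."""
    return [1 if sum(a_ij * x_j for a_ij, x_j in zip(row, x)) >= 0 else -1
            for row in A]

# toy measurement matrix and sparse-ish signal (hypothetical values)
A = [[0.6, -0.8, 0.1],
     [0.3, 0.5, -0.7],
     [-0.9, 0.2, 0.4]]
x = [1.0, 0.0, 2.0]

# scaling the signal leaves every one-bit measurement unchanged,
# so only the direction of x (not its norm) can ever be recovered
assert one_bit_measure(A, x) == one_bit_measure(A, [5.0 * v for v in x])
```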
Scalable frames
 Linear Algebra and its Applications, 438(5):2225–2238
, 2013
Cited by 11 (3 self)
Abstract: Tight frames can be characterized as those frames which possess optimal numerical stability properties. In this paper, we consider the question of modifying a general frame to generate a tight frame by rescaling its frame vectors, a process which can also be regarded as perfect preconditioning of a frame by a diagonal operator. A frame is called scalable if such a diagonal operator exists. We derive various characterizations of scalable frames, thereby including the infinite-dimensional situation. Finally, we provide a geometric interpretation of scalability in terms of conical surfaces.
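The tightness property the abstract starts from is easy to check numerically: a frame is tight exactly when its frame operator S = Σ v v^T is a positive multiple of the identity. A small self-contained check (the example frame is the standard "Mercedes-Benz" frame, a classical example, not taken from this paper):

```python
import math

def frame_operator(vectors):
    """Frame operator S = sum of outer products v v^T, for vectors in R^d."""
    d = len(vectors[0])
    S = [[0.0] * d for _ in range(d)]
    for v in vectors:
        for i in range(d):
            for j in range(d):
                S[i][j] += v[i] * v[j]
    return S

def is_tight(vectors, tol=1e-9):
    """A frame is tight iff its frame operator equals c * I for some c > 0."""
    S = frame_operator(vectors)
    c = S[0][0]
    return all(abs(S[i][j] - (c if i == j else 0.0)) < tol
               for i in range(len(S)) for j in range(len(S)))

# "Mercedes-Benz" frame: three unit vectors in R^2 at 120-degree spacing;
# its frame operator is (3/2) * I, so it is tight
mb = [[math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3)]
      for k in range(3)]
assert is_tight(mb)
# doubling one basis direction breaks tightness
assert not is_tight([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

Scalability then asks whether per-vector weights can be chosen to make a non-tight frame pass this check.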