Results 1–10 of 44
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
, 2007
Cited by 427 (36 self)
A full-rank matrix A ∈ ℝ^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical
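The "concrete, effective computational methods" alluded to here include ℓ1 minimization (basis pursuit), which recasts the combinatorial sparsest-solution search as a linear program. A minimal sketch, assuming SciPy's `linprog` as the LP solver; the dimensions and data are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Underdetermined system: n equations in m unknowns, n < m.
n, m, k = 20, 50, 3
A = rng.standard_normal((n, m))

# Ground-truth k-sparse vector with entries bounded away from zero.
x_true = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))
b = A @ x_true

# Basis pursuit: min ||x||_1 subject to Ax = b, written as a linear
# program via the split x = u - v with u, v >= 0.
c = np.ones(2 * m)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * m), method="highs")
x_hat = res.x[:m] - res.x[m:]
print(np.max(np.abs(x_hat - x_true)) < 1e-4)
```

With n = 20 Gaussian measurements of a 3-sparse vector in dimension 50, the easily verifiable conditions mentioned above are comfortably met, so the LP is expected to return the sparsest solution exactly.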
Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting
, 2007
Cited by 131 (2 self)
The problem of sparsity pattern or support set recovery refers to estimating the set of nonzero coefficients of an unknown vector β ∈ ℝ^p based on a set of n noisy observations. It arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. The sample complexity of a given method for subset recovery refers to the scaling of the required sample size n as a function of the signal dimension p, the sparsity index k (the number of nonzeros in β), the minimum value β_min of β over its support, and other parameters of the measurement matrix. This paper studies the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model based on random measurement matrices drawn from general Gaussian ensembles, we derive both a set of sufficient conditions for exact support recovery using an exhaustive search decoder, as well as a set of necessary conditions that any decoder, regardless of its computational complexity, must satisfy for exact support recovery. This analysis of fundamental limits complements our previous work on sharp thresholds for support set recovery over the same set of random measurement ensembles using the polynomial-time Lasso method (ℓ1-constrained quadratic programming). Index Terms: compressed sensing, ℓ1-relaxation, Fano's method, high-dimensional statistical inference, information-theoretic
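The exhaustive search decoder analyzed in this line of work is conceptually simple: fit least squares on every candidate support and keep the best fit. A toy sketch, with all dimensions and values made up for illustration:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Tiny instance: k-sparse p-dimensional vector, n noisy Gaussian observations.
p, k, n = 10, 2, 8
X = rng.standard_normal((n, p))
beta = np.zeros(p)
true_support = (2, 7)                   # arbitrary choice for the demo
beta[list(true_support)] = [1.5, -2.0]  # entries well above the noise floor
y = X @ beta + 0.05 * rng.standard_normal(n)

# Exhaustive search decoder: least squares on every candidate support,
# keep the support with the smallest residual.
best_support, best_resid = None, np.inf
for S in combinations(range(p), k):
    cols = X[:, list(S)]
    coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
    resid = np.linalg.norm(y - cols @ coef)
    if resid < best_resid:
        best_support, best_resid = S, resid
print(best_support)
```

The C(p, k) search is what makes this decoder information-theoretically interesting but computationally infeasible at scale, which is exactly why its sample complexity is compared against polynomial-time methods like the Lasso.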
Necessary and sufficient conditions on sparsity pattern recovery
, 2009
Cited by 106 (12 self)
The paper considers the problem of detecting the sparsity pattern of a k-sparse vector in ℝ^n from m random noisy measurements. A new necessary condition on the number of measurements for asymptotically reliable detection with maximum likelihood (ML) estimation and Gaussian measurement matrices is derived. This necessary condition for ML detection is compared against a sufficient condition for simple maximum correlation (MC) or thresholding algorithms. The analysis shows that the gap between thresholding and ML can be described by a simple expression in terms of the total signal-to-noise ratio (SNR), with the gap growing with increasing SNR. Thresholding is also compared against the more sophisticated lasso and orthogonal matching pursuit (OMP) methods. At high SNRs, it is shown that the gap of lasso and OMP over thresholding is described by the range of powers of the nonzero component values of the unknown signals. Specifically, the key benefit of lasso and OMP over thresholding is their ability to detect signals with relatively small components.
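The maximum correlation (thresholding) detector compared above is the simplest of these algorithms: correlate the observation with each measurement column and keep the k indices with the largest magnitude. An illustrative sketch with made-up instance parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# k-sparse vector in R^n observed through m noisy random measurements.
n, m, k = 100, 100, 2
A = rng.standard_normal((m, n)) / np.sqrt(m)   # columns have unit norm in expectation
x = np.zeros(n)
support = [10, 60]
x[support] = [5.0, -5.0]                        # large, equal-power components
y = A @ x + 0.1 * rng.standard_normal(m)

# Maximum correlation (thresholding): keep the k columns most correlated with y.
corr = np.abs(A.T @ y)
est_support = np.sort(np.argsort(corr)[-k:])
print(est_support)
```

Note the components were chosen large and of equal power; as the abstract says, it is precisely signals with small components, or a wide range of component powers, on which thresholding falls behind lasso and OMP.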
Life Beyond Bases: The Advent of Frames (Part I)
, 2007
Cited by 72 (8 self)
Redundancy is a common tool in our daily lives. Before we leave the house, we double- and triple-check that we turned off gas and lights, took our keys, and have money (at least those worrywarts among us do). When an important date is coming up, we drive our loved ones crazy by confirming “just once more” that they are on top of it. Of course, the reason we do this is to avoid a disaster by missing or forgetting something, not to drive our loved ones crazy. The same idea of removing doubt is present in signal representations. Given a signal, we represent it in another system, typically a basis, where its characteristics are more readily apparent in the transform coefficients. However, these representations are typically nonredundant, and thus corruption or loss of transform coefficients can be serious. In comes redundancy: we build a safety net into our representation so that we can avoid those disasters. The redundant counterpart of a basis is called a frame [no one seems to know why they are called frames, perhaps because of the bounds in (25)?]. It is generally acknowledged (at least in the signal processing and harmonic analysis communities) that frames were born in 1952 in the paper by Duffin and Schaeffer [32]. Despite being over half a century old, frames gained popularity only in the last decade, due mostly to the work of three wavelet pioneers: Daubechies, Grossman, and Meyer [29]. Frame-like ideas, that is, building redundancy into a signal expansion, can be found in pyramid
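A concrete instance of this safety net is the three-vector Mercedes-Benz frame in R², a standard toy example of a tight frame; the numbers below are illustrative:

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors in R^2 at 120-degree angles.
# Three vectors for a 2-D space, so the expansion is redundant; the frame
# is tight with frame bound 3/2, i.e. F^T F = (3/2) I.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # 3 x 2 analysis matrix

x = np.array([0.7, -1.2])
coeffs = F @ x                     # redundant frame (analysis) coefficients

# Reconstruction with the canonical dual frame, here via the pseudoinverse.
x_rec = np.linalg.pinv(F) @ coeffs

# The safety net: erase one coefficient and the remaining two frame
# vectors still span R^2, so x is still perfectly recoverable.
x_partial = np.linalg.solve(F[:2], coeffs[:2])
print(np.allclose(x_rec, x), np.allclose(x_partial, x))
```

Losing a coefficient would be fatal for a basis expansion; with the redundant frame, any two of the three vectors still span the space and the signal survives the erasure.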
Measurements vs. bits: Compressed sensing meets information theory
 in Proc. Allerton Conf. on Comm., Control, and Computing
, 2006
Cited by 48 (5 self)
Compressed sensing is a new framework for acquiring sparse signals based on the revelation that a small number of linear projections (measurements) of the signal contain enough information for its reconstruction. The foundation of compressed sensing is built on the availability of noise-free measurements. However, measurement noise is unavoidable in analog systems and must be accounted for. We demonstrate that measurement noise is the crucial factor that dictates the number of measurements needed for reconstruction. To establish this result, we evaluate the information contained in the measurements by viewing the measurement system as an information-theoretic channel. Combining the capacity of this channel with the rate-distortion function of the sparse signal, we lower-bound the rate-distortion performance of a compressed sensing system. Our approach concisely captures the effect of measurement noise on the performance limits of signal reconstruction, thus enabling us to benchmark the performance of specific reconstruction algorithms.
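The flavor of this capacity-meets-rate-distortion argument can be sketched with back-of-the-envelope arithmetic: treat each noisy measurement as one use of an AWGN channel, then divide the bits needed to describe the signal at the target distortion by the bits one measurement can carry. This is a loose illustration of the reasoning, not the paper's actual bound, and the cost model for R(D) below is an assumption:

```python
import math

def awgn_capacity_bits(snr):
    """Bits one real AWGN channel use can carry at a given SNR."""
    return 0.5 * math.log2(1 + snr)

def min_measurements(rate_distortion_bits, snr):
    """Sketch of a converse bound: the number of measurements is at least
    the total bits required at the target distortion divided by the bits
    a single noisy measurement can convey."""
    return math.ceil(rate_distortion_bits / awgn_capacity_bits(snr))

# Hypothetical cost model for a k-sparse signal in dimension n:
# k*log2(n/k) bits for the support location plus b bits per coefficient.
n, k, b = 1000, 10, 8
R_D = k * math.log2(n / k) + k * b
print(min_measurements(R_D, snr=10.0))  # -> 85
```

Raising the SNR raises the per-measurement capacity and lowers the bound, which is exactly the trade-off between measurements and bits the title refers to.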
Information theoretic bounds for compressed sensing
 IEEE Trans. Inf. Theory
, 2010
Cited by 44 (6 self)
In this paper we derive information-theoretic performance bounds on sensing and reconstruction of sparse phenomena from noisy projections. We consider two settings: output noise models, where the noise enters after the projection, and input noise models, where the noise enters before the projection. We consider two types of distortion for reconstruction: support errors and mean-squared errors. Our goal is to relate the number of measurements, m, and the SNR to the signal sparsity, k, the distortion level, d, and the signal dimension, n. We consider support errors in a worst-case setting. We employ different variations of Fano's inequality to derive necessary conditions on the number of measurements and SNR required for exact reconstruction. To derive sufficient conditions we develop new insights on maximum-likelihood analysis based on a novel superposition property. In particular, this property implies that small support errors are the dominant error events. Consequently, our ML analysis does not suffer the conservatism of the union bound and leads to a tighter analysis of maximum likelihood. These results provide order-wise tight bounds. For output noise models we show that asymptotically an SNR of Θ(log(n)) together with Θ(k log(n/k)) measurements is necessary and sufficient for exact support recovery. Furthermore, if a small fraction of support errors
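The Fano-type arguments behind such necessary conditions share a standard template, sketched here in hedged form (this is the generic shape, not the paper's exact derivation): with the support S uniform over the C(n, k) candidates and Y the m noisy measurements,

```latex
P_{\mathrm{err}} \;\ge\; 1 \;-\; \frac{I(S;Y) + \log 2}{\log \binom{n}{k}},
\qquad
I(S;Y) \;\le\; \frac{m}{2}\,\log\!\left(1 + \mathrm{SNR}\right).
```

Driving P_err to zero then forces m on the order of 2 log C(n,k) / log(1 + SNR), which has the k log(n/k) flavor of the scaling quoted in the abstract.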
A Frame Construction and A Universal Distortion Bound for Sparse Representations
Cited by 35 (2 self)
We consider approximations of signals by the elements of a frame in a complex vector space of dimension N and formulate both the noiseless and the noisy sparse representation problems. The noiseless representation problem is to find sparse representations of a signal r given that such representations exist. In this case, we explicitly construct a frame, referred to as the Vandermonde frame, for which the noiseless sparse representation problem can be solved uniquely using O(N²) operations, as long as the number of nonzero coefficients in the sparse representation of r is ɛN for some 0 ≤ ɛ ≤ 0.5. It is known that ɛ ≤ 0.5 cannot be relaxed without violating uniqueness. The noisy sparse representation problem is to find sparse representations of a signal r satisfying a distortion criterion. In this case, we establish a lower bound on the tradeoff between the sparsity of the representation, the underlying distortion and the redundancy of any given frame.
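The uniqueness behind a Vandermonde construction rests on the fact that any N columns of a Vandermonde matrix with distinct nodes are linearly independent. A small numerical check of that property; the node choice (roots of unity) is one natural option and the parameters are illustrative, not the paper's construction:

```python
import math
from itertools import combinations

import numpy as np

# A Vandermonde-style frame for C^N: M > N vectors, the j-th being
# (1, z_j, z_j^2, ..., z_j^{N-1}) for distinct nodes z_j (here M-th roots of unity).
N, M = 4, 7
z = np.exp(2j * np.pi * np.arange(M) / M)
F = np.vander(z, N, increasing=True).T   # N x M frame matrix

# Any N columns form a square Vandermonde matrix with distinct nodes, hence
# invertible. Two different sufficiently sparse representations of the same
# signal would force a singular N-column submatrix, which cannot happen.
dets = [abs(np.linalg.det(F[:, list(cols)])) for cols in combinations(range(M), N)]
print(len(dets) == math.comb(M, N), min(dets) > 1e-6)
```

This full-spark property is what rules out two distinct representations whose supports together contain at most N nonzeros, matching the ɛ ≤ 0.5 uniqueness threshold stated above.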
Sampling Bounds for Sparse Support Recovery in the Presence of Noise
Cited by 33 (4 self)
It is well known that the support of a sparse signal can be recovered from a small number of random projections. However, in the presence of noise, all known sufficient conditions require that the per-sample signal-to-noise ratio (SNR) grow without bound with the dimension of the signal. If the noise is due to quantization of the samples, this means that an unbounded rate per sample is needed. In this paper, it is shown that an unbounded SNR is also a necessary condition for perfect recovery, but any fraction (less than one) of the support can be recovered with bounded SNR. This means that a finite rate per sample is sufficient for partial support recovery. Necessary and sufficient conditions are given for both stochastic and nonstochastic signal models. This problem arises in settings such as compressive sensing, model selection, and signal denoising.
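A quick simulation makes the partial-recovery-at-bounded-SNR regime tangible: hold the per-sample SNR fixed and measure what fraction of the support a simple top-k correlation rule finds. This is an illustrative experiment with made-up parameters, not the paper's estimator or bound:

```python
import numpy as np

rng = np.random.default_rng(3)

def recovered_fraction(n, m, k, snr, trials=20):
    """Average fraction of the true support found by a top-k correlation
    rule when the per-sample SNR is held fixed (illustrative sketch only)."""
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n)
        supp = rng.choice(n, size=k, replace=False)
        x[supp] = rng.choice([-1.0, 1.0], size=k)
        y_clean = A @ x
        # Scale the noise so the per-sample SNR equals `snr` exactly.
        noise_std = np.linalg.norm(y_clean) / np.sqrt(m * snr)
        y = y_clean + noise_std * rng.standard_normal(m)
        est = np.argsort(np.abs(A.T @ y))[-k:]
        hits += len(set(est.tolist()) & set(supp.tolist()))
    return hits / (trials * k)

frac = recovered_fraction(n=200, m=100, k=10, snr=5.0)
print(round(frac, 2))
```

At a fixed, bounded SNR the recovered fraction typically settles somewhere below one; pushing it to exactly one requires the SNR to grow with the problem size, which is the abstract's point.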