Results 1–10 of 47
A Probabilistic and RIPless Theory of Compressed Sensing, 2010
Abstract

Cited by 95 (3 self)
This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all models — e.g. Gaussian, frequency measurements — discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) — they make use of a much weaker notion — or a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise.
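The ℓ1 recovery the abstract refers to can be illustrated with a minimal sketch: i.i.d. Gaussian sensing vectors (one distribution F covered by the theory) and ISTA, a basic proximal-gradient solver for the ℓ1-regularized least-squares problem. The dimensions, noise level and regularization weight are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 256, 80, 5                     # ambient dimension, measurements, sparsity

# s-sparse ground truth and i.i.d. Gaussian sensing vectors
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x + 0.01 * rng.standard_normal(m)    # noisy measurements

# ISTA: proximal gradient descent on 0.5*||A v - y||^2 + lam*||v||_1
lam = 0.02
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
xhat = np.zeros(n)
for _ in range(500):
    z = xhat - (A.T @ (A @ xhat - y)) / L
    xhat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

rel_err = np.linalg.norm(xhat - x) / np.linalg.norm(x)
print(rel_err)
```

With m on the order of s log n, the estimate is accurate up to the noise level, consistent with the recovery guarantee quoted above.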
Sensitivity to basis mismatch of compressed sensing, preprint, 2009
Abstract

Cited by 86 (8 self)
The theory of compressed sensing suggests that successful inversion of an image of the physical world (e.g., a radar/sonar return or a sensor array snapshot vector) for the source modes and amplitudes can be achieved at measurement dimensions far lower than what might be expected from the classical theories of spectrum or modal analysis, provided that the image is sparse in an a priori known basis. For imaging problems in passive and active radar and sonar, this basis is usually taken to be a DFT basis. The compressed sensing measurements are then inverted using an ℓ1-minimization principle (basis pursuit) for the nonzero source amplitudes. This seems to make compressed sensing an ideal image inversion principle for high-resolution modal analysis. However, in reality no physical field is sparse in the DFT basis or in an a priori known basis. In fact, the main goal in image inversion is to identify the modal structure. No matter how finely we grid the parameter space, the sources may not lie in the center of the grid cells, and there is always mismatch between the assumed and the actual bases for sparsity. In this paper, we study the sensitivity of basis pursuit to mismatch between the assumed and the actual sparsity bases and compare the performance of basis pursuit with that of classical image inversion. Our mathematical analysis and numerical examples show that the performance of basis pursuit degrades considerably in the presence of mismatch, and they suggest that the use of compressed sensing as a modal analysis principle requires more consideration and refinement, at least for the problem sizes common to radar/sonar.
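The off-grid effect the abstract describes is easy to reproduce: a sinusoid whose frequency sits exactly on a DFT bin is 1-sparse in the DFT basis, while a frequency halfway between bins leaks energy into every coefficient. The signal length and frequencies below are arbitrary illustrative choices.

```python
import numpy as np

N = 128
t = np.arange(N)

def peak_energy_fraction(f):
    """Fraction of signal energy captured by the single largest DFT
    coefficient of a complex sinusoid at frequency f (cycles per N samples)."""
    X = np.fft.fft(np.exp(2j * np.pi * f * t / N))
    mags2 = np.abs(X) ** 2
    return mags2.max() / mags2.sum()

on_grid = peak_energy_fraction(10.0)    # frequency exactly on a DFT bin
off_grid = peak_energy_fraction(10.5)   # worst-case half-bin mismatch

print(on_grid, off_grid)
```

The on-grid sinusoid puts essentially all of its energy in one coefficient, while the half-bin-mismatched one leaves well under half of its energy in the largest coefficient, so it is far from sparse in the assumed basis.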
Compressive Video Sampling with Approximate Message Passing Decoding
Abstract

Cited by 38 (2 self)
In this paper, we apply compressed sensing to video compression. Compressed sensing (CS) techniques exploit the observation that one needs far fewer random measurements than given by the Shannon-Nyquist sampling theory to recover an object if this object is compressible (i.e., sparse in the spatial domain or in a transform domain). In the CS framework, we can achieve sensing, compression and denoising simultaneously. We propose a fast and simple online encoding by application of pseudorandom downsampling of the two-dimensional fast Fourier transform to video frames. For offline decoding, we apply a modification of the recently proposed approximate message passing (AMP) algorithm. The AMP method has been derived using the statistical concept of 'state evolution', and it has been shown to considerably accelerate the convergence rate in special CS-decoding applications. We prove that the AMP method can be rewritten as a forward-backward splitting algorithm. This new representation enables us to give conditions that ensure convergence of the AMP method and to modify the algorithm in order to achieve higher robustness. The success of reconstruction methods for video decoding also depends essentially on the chosen transform in which sparsity of the video signals is assumed. We propose to incorporate the 3D dual-tree complex wavelet transform, which possesses sufficiently good properties of directional selectivity and shift invariance while being computationally less expensive and less redundant than other directional 3D wavelet transforms.
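A minimal sketch of the basic AMP iteration with a soft-thresholding denoiser (not the authors' modified forward-backward variant): the only difference from plain iterative thresholding is the Onsager correction term added to the residual. The threshold rule and problem sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 400, 160, 15
delta = m / n                                  # undersampling ratio

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # approximately unit-norm columns
y = A @ x                                      # noiseless measurements

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

xhat = np.zeros(n)
z = y.copy()
for _ in range(50):
    r = xhat + A.T @ z                             # pseudo-data
    theta = 2.0 * np.linalg.norm(z) / np.sqrt(m)   # empirical threshold rule
    xnew = soft(r, theta)
    # Onsager correction: without the last term this is plain
    # iterative soft thresholding
    z = y - A @ xnew + (z / delta) * np.mean(np.abs(r) > theta)
    xhat = xnew

amp_rel_err = np.linalg.norm(xhat - x) / np.linalg.norm(x)
print(amp_rel_err)
```

In this comfortably undersampled, noiseless regime the iteration converges to an accurate reconstruction within a few dozen iterations.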
Performance bounds for expander-based compressed sensing in Poisson noise, tech. rep., 2010
Exact and stable covariance estimation from quadratic sampling via convex programming, to appear, IEEE Transactions on Information Theory, 2015
Abstract

Cited by 10 (3 self)
Statistical inference and information processing of high-dimensional data often require efficient and accurate estimation of their second-order statistics. With rapidly changing data and limited processing power and storage at the sensor suite, it is desirable to extract the covariance structure from a single pass over the data stream and a small number of measurements. In this paper, we explore a quadratic random measurement model which imposes a minimal memory requirement and low computational complexity during the sampling process, and is shown to be optimal in preserving low-dimensional covariance structures. Specifically, four popular structural assumptions on covariance matrices are investigated: low rank, Toeplitz low rank, sparsity, and jointly rank-one and sparse structure. We show that a covariance matrix with any of these structures can be perfectly recovered from a near-optimal number of sub-Gaussian quadratic measurements, via efficient convex relaxation algorithms for the respective structure. The proposed algorithm has a variety of potential applications in streaming data processing, high-frequency wireless communication, phase space tomography in optics, non-coherent subspace detection, etc. Our method admits universally accurate covariance estimation in the absence of noise as soon as the number of measurements exceeds the theoretic sampling limits. We also demonstrate the robustness of this approach against noise and imperfect structural assumptions. Our analysis is established upon a novel notion called the mixed-norm restricted isometry property (RIP-ℓ2/ℓ1), as well as the conventional RIP-ℓ2/ℓ2 for near-isotropic and bounded measurements. Besides, our results improve upon the best-known phase retrieval guarantees (for both dense and sparse signals) using PhaseLift, with a significantly simpler approach.
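The single-pass, low-memory character of the quadratic measurement model can be sketched directly: since E[(aᵢᵀx)²] = aᵢᵀΣaᵢ for zero-mean x with covariance Σ, each measurement is just a running average of one scalar per sketching vector, with no need to store the samples or the n-by-n empirical covariance. The convex-relaxation recovery step is omitted here, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, m, T = 20, 2, 60, 10000    # dimension, rank, sketches, stream length

U = rng.standard_normal((n, r)) / np.sqrt(r)
Sigma = U @ U.T                  # rank-r covariance to be estimated

A = rng.standard_normal((m, n))  # fixed sketching vectors a_i (rows of A)

# Single pass over the stream: keep only m running scalars (a_i^T x_t)^2,
# never the samples x_t or the n-by-n empirical covariance
acc = np.zeros(m)
for _ in range(T):
    xt = U @ rng.standard_normal(r)        # sample with covariance Sigma
    acc += (A @ xt) ** 2
y_stream = acc / T

# Each accumulated average estimates the quadratic measurement a_i^T Sigma a_i
y_exact = np.einsum('ij,jk,ik->i', A, Sigma, A)
max_rel = np.max(np.abs(y_stream - y_exact) / y_exact)
print(max_rel)
```

The streaming averages concentrate around the exact quadratic measurements at rate O(1/√T), which is what makes a single pass with m scalars of memory sufficient for the subsequent recovery stage.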
Support recovery of sparse signals via multiple-access communication techniques, IEEE Trans. Inf. Theory, 2011
Abstract

Cited by 9 (1 self)
In this paper, we consider the problem of exact support recovery of sparse signals via noisy linear measurements. The main focus is finding the sufficient and necessary condition on the number of measurements for support recovery to be reliable. By drawing an analogy between the problem of support recovery and the problem of channel coding over the Gaussian multiple-access channel (MAC), and exploiting mathematical tools developed for the latter problem, we obtain an information-theoretic framework for analyzing the performance limits of support recovery. Specifically, when the number of nonzero entries of the sparse signal is held fixed, the exact asymptotics on the number of measurements sufficient and necessary for support recovery is characterized. In addition, we show that the proposed methodology can deal with a variety of models of sparse signal recovery, hence demonstrating its potential as an effective analytical tool. Index Terms: compressed sensing, Gaussian multiple-access channel (MAC), noisy linear measurement, performance tradeoff.
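As a concrete baseline for the support-recovery problem the abstract studies (not the paper's MAC-based analysis), orthogonal matching pursuit greedily identifies the support from noisy linear measurements; the dimensions, amplitudes and noise level below are illustrative assumptions chosen so that the nonzero entries are well separated from the noise floor.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 300, 150, 3

supp = np.sort(rng.choice(n, k, replace=False))
x = np.zeros(n)
x[supp] = rng.choice([-1.0, 1.0], k) * (6.0 + rng.random(k))  # far from zero
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x + 0.01 * rng.standard_normal(m)       # noisy linear measurements

# Orthogonal matching pursuit: pick the column most correlated with the
# residual, then refit by least squares on the selected support
S = []
res = y.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(A.T @ res))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    res = y - A[:, S] @ coef

print(sorted(S), supp.tolist())
```

When the measurement count is large relative to the information-theoretic limits discussed above, even this simple greedy decoder recovers the support exactly.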
Model-Based Sketching and Recovery with Expanders
Abstract

Cited by 8 (2 self)
It is well known that sparse signals can be succinctly represented by certain low-dimensional linear sketches, with applications in compressive sensing, data streaming and graph sketching, among others. Recently, structured sparsity has emerged as a promising new tool for reducing sketch size and improving recovery. By structured sparsity, we mean that the sparse coefficients exhibit further correlations as determined by a model. Existing work on sketching structured sparse signals requires dense sketching matrices that satisfy the 2-norm restricted isometry property. On the other hand, sparse sketching matrices, usually derived from expanders, are computationally much more efficient and easier to store and apply in recovery. In this paper, we focus on model-based expanders, that is, expanders that capture a given structured sparsity model, and show that they exist for a larger class of models than previously considered. We present the first polynomial-time algorithm for recovering structured sparse signals from low-dimensional linear sketches obtained via sparse matrices. The algorithm is guaranteed to yield signals with bounded recovery error and is quite easy to implement and customize for structured sparse models that are endowed with a "projection" operator. As a result, we characterize a broad class of structured sparsity models that have the polynomial-time projection property. We also provide numerical experiments to illustrate the theoretical results in action.
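A sparse, expander-style sketching matrix in action: each signal coordinate touches only d of the m sketch rows (the adjacency structure of a random left-d-regular bipartite graph, which is an expander with high probability). For a nonnegative sparse signal, a simple count-min-style estimate, the minimum over a coordinate's rows, already recovers the signal exactly when collisions are rare; this is a simplification for illustration, not the paper's model-based algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, d, k = 1000, 500, 8, 10   # signal dim, sketch rows, ones per column, sparsity

# Each column of the sparse sketching matrix has exactly d ones:
# rows[j] lists the d sketch rows that coordinate j participates in
rows = np.array([rng.choice(m, d, replace=False) for _ in range(n)])

x = np.zeros(n)
supp = rng.choice(n, k, replace=False)
x[supp] = rng.integers(1, 10, k).astype(float)   # nonnegative sparse signal

# Sketch y = Phi x, applied column by column over the nonzeros only
y = np.zeros(m)
for j in np.nonzero(x)[0]:
    y[rows[j]] += x[j]

# Count-min estimate: the minimum over a coordinate's d rows; exact for
# coordinate j whenever at least one of its rows is collision-free
xhat = np.min(y[rows], axis=1)
print(np.array_equal(xhat, x))
```

Applying and storing this sketch costs O(nd) rather than O(nm), which is the computational advantage of sparse sketching matrices the abstract highlights.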
One-Bit Compressed Sensing: Provable Support and Vector Recovery
Abstract

Cited by 6 (0 self)
In this paper, we study the problem of one-bit compressed sensing (1-bit CS), where the goal is to design a measurement matrix A and a recovery algorithm such that a k-sparse unit vector x∗ can be efficiently recovered from the sign of its linear measurements, i.e., b = sign(Ax∗). This is an important problem for signal acquisition and has several learning applications as well, e.g., multi-label classification (Hsu et al., 2009). We study this problem in two settings: a) support recovery: recover the support of x∗; b) approximate vector recovery: recover a unit vector x̂ such that ‖x̂ − x∗‖2 ≤ ε. For support recovery, we propose two novel and efficient solutions based on two combinatorial structures: union-free families of sets and expanders. In contrast to existing methods for support recovery, our methods are universal, i.e., a single measurement matrix A can recover all signals. For approximate recovery, we propose the first method to recover a sparse vector using a near-optimal number of measurements. We also empirically validate our algorithms and demonstrate that they recover the true signal using fewer measurements than existing methods.
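A minimal baseline for the 1-bit model (not the paper's union-free-family or expander constructions): since E[sign(aᵀx∗)·a] is proportional to x∗ for Gaussian a and unit-norm x∗, normalized back-projection of the sign measurements, followed by keeping the k largest entries, already yields a good estimate when m is large. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, m = 100, 5, 2000

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)            # unit-norm k-sparse signal

A = rng.standard_normal((m, n))
b = np.sign(A @ x)                # one-bit measurements: only signs survive

# Back-projection A^T b points roughly along x; keep its k largest entries
v = A.T @ b
keep = np.argsort(np.abs(v))[-k:]
xhat = np.zeros(n)
xhat[keep] = v[keep]
xhat /= np.linalg.norm(xhat)

cosine = float(xhat @ x)          # cosine similarity with the true signal
print(cosine)
```

Note that the signs alone can never reveal the norm of x∗, which is why the model fixes ‖x∗‖2 = 1 and why the estimate is renormalized at the end.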