Results 1–10 of 157
Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals
, 2009
"... Wideband analog signals push contemporary analogtodigital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, alt ..."
Abstract

Cited by 158 (18 self)
 Add to MetaCart
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system’s performance that supports the empirical observations.
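The pipeline described in this abstract (random ±1 chipping, integrate-and-dump down to the low rate, nonlinear recovery) can be sketched in a discrete toy model. The sizes below and the use of orthogonal matching pursuit as a stand-in for the paper's convex programming are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

W, K, R = 128, 3, 32                 # toy Nyquist bandwidth, sparsity, sampling rate
block = W // R                       # chips integrated per low-rate sample

# K-sparse spectrum with unit-magnitude tones at random locations
support_true = rng.choice(W, size=K, replace=False)
alpha = np.zeros(W, dtype=complex)
alpha[support_true] = np.exp(2j * np.pi * rng.random(K))

# Discrete random demodulator: inverse DFT (spectrum -> time), random +/-1
# chipping sequence, then integrate-and-dump down to R output samples
F_H = np.fft.ifft(np.eye(W))
D = np.diag(rng.choice([-1.0, 1.0], size=W))
H = np.kron(np.eye(R), np.ones(block))
A = H @ D @ F_H

y = A @ alpha                        # R measurements instead of W Nyquist samples

# Nonlinear recovery: orthogonal matching pursuit as a simple stand-in for
# the convex programming analyzed in the paper
residual, support = y.copy(), []
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

alpha_hat = np.zeros(W, dtype=complex)
alpha_hat[support] = coef
```

With these generous toy sizes (R well above K log(W/K)), the greedy pursuit typically recovers the tone locations exactly.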
Compressive Sensing and Structured Random Matrices
 RADON SERIES COMP. APPL. MATH XX, 1–95 © DE GRUYTER 20YY
, 2011
"... These notes give a mathematical introduction to compressive sensing focusing on recovery using ℓ1minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to ..."
Abstract

Cited by 157 (18 self)
 Add to MetaCart
These notes give a mathematical introduction to compressive sensing, focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to providing conditions that ensure exact or approximate recovery of sparse vectors using ℓ1-minimization.
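A minimal numerical illustration of the kind of estimate these notes study: for a randomly subsampled DFT matrix (a structured random matrix), small column submatrices are well conditioned with high probability. Sizes and thresholds below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
N, m, s = 256, 100, 5                # toy ambient dimension, rows kept, sparsity

# Structured random matrix: m randomly chosen rows of the N x N DFT,
# scaled so that each column has unit norm
F = np.fft.fft(np.eye(N)) / np.sqrt(m)
A = F[rng.choice(N, size=m, replace=False), :]

# Condition of a random s-column submatrix: its singular values should
# cluster around 1 -- the key property behind sparse recovery guarantees
cols = rng.choice(N, size=s, replace=False)
sv = np.linalg.svd(A[:, cols], compute_uv=False)
```

Singular values near 1 for all small submatrices is exactly a restricted-isometry-type statement; the notes prove such bounds probabilistically rather than instance by instance.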
The Cosparse Analysis Model and Algorithms
, 2011
"... After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations, and appealing applications. Alongside this approach, there is an analysis counterpart model, which, despite its similarity to ..."
Abstract

Cited by 66 (14 self)
 Add to MetaCart
After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations and appealing applications. Alongside this approach there is an analysis counterpart model which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model has not received similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.
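A small illustration of the distinction, with an assumed finite-difference analysis operator Ω: a piecewise-constant signal is dense as a coefficient vector (the synthesis view), yet Ωx is sparse, i.e. x is cosparse with respect to Ω (the analysis view):

```python
import numpy as np

# Piecewise-constant toy signal: dense as a coefficient vector
x = np.concatenate([np.full(40, 1.0), np.full(40, -2.0), np.full(40, 0.5)])
n = x.size

# Analysis operator: first-order finite differences (an illustrative choice)
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
z = Omega @ x                        # nonzero only at the two jumps

# Cosparsity = number of zero entries of Omega @ x
cosparsity = int(np.sum(z == 0))
```

Here the model is expressed by which rows of Ω annihilate the signal, not by which dictionary atoms build it up, which is why the two viewpoints lead to different pursuit algorithms.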
Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity
, 2010
"... A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAPEM algorithm. A dual mathematical interpretation of the proposed framework with structured sparse estimation is describe ..."
Abstract

Cited by 55 (8 self)
 Add to MetaCart
(Show Context)
A general framework for solving image inverse problems is introduced in this paper. The approach is based on Gaussian mixture models, estimated via a computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework with structured sparse estimation is described, which shows that the resulting piecewise linear estimate stabilizes the estimation when compared to traditional sparse inverse problem techniques. This interpretation also suggests an effective, dictionary-motivated initialization for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that are equal to, often significantly better than, or within a very small margin of the best published ones, at a lower computational cost.
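The piecewise linear estimator can be sketched in a few lines: a Wiener filter per Gaussian model, with the model chosen by maximizing the Gaussian evidence of the observation. The two low-rank models and the noise level below are illustrative assumptions, not the paper's learned mixture:

```python
import numpy as np

rng = np.random.default_rng(3)
d, sigma = 16, 0.05                  # toy dimension and noise level

# Two hypothetical zero-mean Gaussian models with different low-rank structure
U = [np.eye(d)[:, 0:2], np.eye(d)[:, 8:10]]
Sigmas = [u @ u.T + 1e-4 * np.eye(d) for u in U]

# A signal from model 1, observed in additive Gaussian noise
x = U[1] @ np.array([1.0, -0.8])
y = x + sigma * rng.standard_normal(d)

# Negative log-likelihood of y under N(0, Sigma + sigma^2 I)
def nll(S):
    C = S + sigma ** 2 * np.eye(d)
    return 0.5 * (y @ np.linalg.solve(C, y) + np.log(np.linalg.det(C)))

# Piecewise linear MAP estimate: pick the best model, apply its Wiener filter
k = int(np.argmin([nll(S) for S in Sigmas]))
x_hat = Sigmas[k] @ np.linalg.solve(Sigmas[k] + sigma ** 2 * np.eye(d), y)
```

The estimate is linear once the model is selected, and the selection step is what makes the overall map piecewise linear.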
Compressive Sensing
, 2010
"... Compressive sensing is a new type of sampling theory, which predicts that sparse signals and images can be reconstructed from what was previously believed to be incomplete information. As a main feature, efficient algorithms such as ℓ1minimization can be used for recovery. The theory has many poten ..."
Abstract

Cited by 50 (12 self)
 Add to MetaCart
Compressive sensing is a new type of sampling theory, which predicts that sparse signals and images can be reconstructed from what was previously believed to be incomplete information. As a main feature, efficient algorithms such as ℓ1-minimization can be used for recovery. The theory has many potential applications in signal processing and imaging. This chapter gives an introduction to and overview of both the theoretical and numerical aspects of compressive sensing.
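A minimal sketch of such a recovery, using iterative soft-thresholding (ISTA) on the ℓ1-regularized least-squares surrogate; the sizes and regularization weight are ad-hoc illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5                 # toy ambient dim, measurements, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)     # Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
y = A @ x                            # incomplete information: m << n samples

# ISTA: gradient step on the data fit, then soft-thresholding (l1 proximal map)
lam = 1e-3
step = 1.0 / np.linalg.norm(A, 2) ** 2
z = np.zeros(n)
for _ in range(5000):
    g = z - step * (A.T @ (A @ z - y))
    z = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
```

Despite observing only 80 of 200 coordinates' worth of information, the iteration recovers the 5-sparse vector to within the small bias introduced by the ℓ1 penalty.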
Learning with Compressible Priors
"... We describe a set of probability distributions, dubbed compressible priors, whose independent and identically distributed (iid) realizations result in pcompressible signals. A signal x ∈ R N is called pcompressible with magnitude R if its sorted coefficients exhibit a powerlaw decay as x(i) � ..."
Abstract

Cited by 43 (5 self)
 Add to MetaCart
We describe a set of probability distributions, dubbed compressible priors, whose independent and identically distributed (iid) realizations result in p-compressible signals. A signal x ∈ R^N is called p-compressible with magnitude R if its sorted coefficients exhibit a power-law decay x(i) ≲ R · i^(−d), where the decay rate d is equal to 1/p. p-compressible signals live close to K-sparse signals (K ≪ N) in the ℓr-norm (r > p), since their best K-sparse approximation error decreases as O(R · K^(1/r−1/p)). We show that the membership of the generalized Pareto, Student’s t, lognormal, Fréchet, and log-logistic distributions in the set of compressible priors depends only on the distribution parameters and is independent of N. In contrast, we demonstrate that the membership of the generalized Gaussian distribution (GGD) depends both on the signal dimension and the GGD parameters: the expected decay rate of N-sample iid realizations from the GGD with shape parameter q is given by 1/[q log(N/q)]. As stylized examples, we show via experiments that the wavelet coefficients of natural images are 1.67-compressible whereas their pixel gradients are 0.95 log(N/0.95)-compressible, on average. We also leverage the connections between compressible priors and sparse signals to develop new iteratively reweighted sparse signal recovery algorithms that outperform standard ℓ1-norm minimization. Finally, we describe how to learn the hyperparameters of compressible priors in underdetermined regression problems by exploiting the geometry of their order statistics during signal recovery.
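The decay claim can be checked numerically: for sorted coefficients following x(i) = R · i^(−1/p), the ℓr tail (the best K-sparse approximation error) scales as K^(1/r − 1/p), so doubling K multiplies it by 2^(1/r − 1/p). The parameter values below are illustrative:

```python
import numpy as np

p, r, R = 0.5, 2.0, 1.0              # illustrative compressibility, norm, magnitude
N = 100_000
i = np.arange(1, N + 1, dtype=float)
x = R * i ** (-1.0 / p)              # power-law decay with rate d = 1/p

def tail_err(K):
    """Best K-sparse approximation error in the l_r norm."""
    return np.sum(x[K:] ** r) ** (1.0 / r)

ratio = tail_err(2000) / tail_err(1000)
predicted = 2.0 ** (1.0 / r - 1.0 / p)   # doubling K scales the error by this
```

With p = 0.5 and r = 2 the predicted factor is 2^(−1.5) ≈ 0.354, and the measured ratio matches it closely.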
Surveying and comparing simultaneous sparse approximation (or group-lasso) algorithms
"... ..."
(Show Context)
Improved iteratively reweighted least squares for unconstrained smoothed ℓq minimization
 SIAM J. Numer. Anal
, 2013
"... Abstract. In this paper, we first study ℓq minimization and its associated iterative reweighted algorithm for recovering sparse vectors. Unlike most existing work, we focus on unconstrained ℓq minimization, for which we show a few advantages on noisy measurements and/or approximately sparse vectors. ..."
Abstract

Cited by 26 (2 self)
 Add to MetaCart
(Show Context)
In this paper, we first study ℓq minimization and its associated iteratively reweighted algorithm for recovering sparse vectors. Unlike most existing work, we focus on unconstrained ℓq minimization, for which we show several advantages on noisy measurements and/or approximately sparse vectors. Inspired by the results in [Daubechies et al., Comm. Pure Appl. Math., 63 (2010), pp. 1–38] for constrained ℓq minimization, we start with a preliminary yet novel analysis for unconstrained ℓq minimization, which includes convergence, an error bound, and local convergence behavior. The algorithm and analysis are then extended to the recovery of low-rank matrices. The algorithms for both vector and matrix recovery have been compared to several state-of-the-art algorithms and show superior performance in recovering sparse vectors and low-rank matrices.
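A sketch of an unconstrained smoothed ℓq IRLS iteration of the kind described above, for the vector case. The exact objective form, λ, and the ε schedule are illustrative assumptions (the paper's smoothing and tuning may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k, q = 100, 40, 4, 0.5         # toy sizes; q < 1 gives the nonconvex l_q

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
b = A @ x_true

# IRLS for the unconstrained smoothed objective
#   lam * sum_i (x_i^2 + eps^2)^(q/2) + 0.5 * ||A x - b||^2
# Each step solves the linear system from the stationarity condition
#   (lam * q * W + A^T A) x = A^T b,  W = diag((x_i^2 + eps^2)^(q/2 - 1)).
lam, eps = 1e-4, 1.0
x = A.T @ b                          # simple initialization
for _ in range(200):
    w = q * (x ** 2 + eps ** 2) ** (q / 2.0 - 1.0)
    x = np.linalg.solve(lam * np.diag(w) + A.T @ A, A.T @ b)
    eps = max(eps * 0.9, 1e-6)       # gradually sharpen the smoothing
```

Gradually decreasing ε is what lets the reweighted least-squares steps track the nonconvex ℓq objective without getting trapped early.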