
Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit (2009)

by D Needell, R Vershynin
Venue: Found. Comput. Math.

Results 1 - 10 of 188

Iterative hard thresholding for compressed sensing

by Thomas Blumensath, Mike E. Davies - Appl. Comp. Harm. Anal
"... Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery probl ..."
Abstract - Cited by 329 (18 self) - Add to MetaCart
Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper):
  • It gives near-optimal error guarantees.
  • It is robust to observation noise.
  • It succeeds with a minimum number of observations.
  • It can be used with any sampling operator for which the operator and its adjoint can be computed.
  • The memory requirement is linear in the problem size.
  • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint.
  • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal.
  • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
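The update rule behind this algorithm is compact enough to state as code. Below is a minimal NumPy sketch of iterative hard thresholding in its textbook form, x ← H_s(x + Φᵀ(y − Φx)), assuming a measurement matrix rescaled to unit spectral norm and a known sparsity level s; the function names and problem sizes are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -s)[-s:]
    out[idx] = x[idx]
    return out

def iht(Phi, y, s, iters=200):
    """Textbook iterative hard thresholding: x <- H_s(x + Phi^T (y - Phi x))."""
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + Phi.T @ (y - Phi @ x), s)
    return x

# Illustrative demo on a Gaussian matrix rescaled so that ||Phi||_2 <= 1,
# which keeps the plain (unit step size) iteration stable.
rng = np.random.default_rng(0)
N, d, s = 80, 200, 5
Phi = rng.standard_normal((N, d))
Phi /= np.linalg.norm(Phi, 2)
v = np.zeros(d)
v[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
y = Phi @ v
print(np.linalg.norm(iht(Phi, y, s) - v))     # recovery error
```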

Citation Context

...rsuit [10], which was analysed as a reconstruction algorithm for compressed sensing in [19]. Better theoretical properties were recently proven for a regularised Orthogonal Matching Pursuit algorithm [15], [14]. Even more recently, the Compressive Sampling Matching Pursuit (CoSaMP) [13] and the nearly identical Subspace Pursuit [6] algorithms were introduced and analysed for compressed sensing signal r...

Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1

by Simon Foucart, Ming-jun Lai
"... We present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal ℓq-quasinorm is also the sparsest one. This generalizes, and sightly improves, a similar result for the ℓ1-norm. We then introduce a simple numerical scheme to compu ..."
Abstract - Cited by 192 (11 self) - Add to MetaCart
We present a condition on the matrix of an underdetermined linear system which guarantees that the solution of the system with minimal ℓq-quasinorm is also the sparsest one. This generalizes, and slightly improves, a similar result for the ℓ1-norm. We then introduce a simple numerical scheme to compute solutions with minimal ℓq-quasinorm, and we study its convergence. Finally, we display the results of some experiments which indicate that the ℓq-method performs better than other available methods.
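The paper's numerical scheme solves a sequence of reweighted problems to approach the ℓq-quasinorm minimizer. As a rough illustration of that idea, and not the authors' exact algorithm, here is a standard iteratively reweighted least-squares sketch for min ‖z‖_q^q subject to Az = y; the function name and the smoothing schedule for eps are assumptions made for the example.

```python
import numpy as np

def irls_lq(A, y, q=0.5, iters=50, eps=1.0):
    """IRLS sketch for min ||z||_q^q s.t. Az = y (illustrative, not the
    weighted-l1 scheme of the paper).  Each step solves a weighted
    least-squares problem in closed form, then shrinks the smoothing eps."""
    z = np.linalg.lstsq(A, y, rcond=None)[0]      # minimum-norm starting point
    for _ in range(iters):
        w = (z**2 + eps) ** (1 - q / 2)           # inverse weights, diag(W)^-1
        Aw = A * w                                # A @ diag(w)
        z = w * (A.T @ np.linalg.solve(Aw @ A.T, y))
        eps = max(eps / 10, 1e-8)                 # gradually sharpen the penalty
    return z
```

Each iteration has the closed form z = W⁻¹Aᵀ(A W⁻¹Aᵀ)⁻¹y, so no inner optimizer is needed; the floor on eps only guards the linear solve against near-singularity.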

Citation Context

...n, but is in fact exact. Finally, we compare in Section 5 our ℓq-algorithm with four existing methods: the orthogonal greedy algorithm, see e.g. [13], the regularized orthogonal matching pursuit, see [12], the ℓ1-minimization, and the reweighted ℓ1-minimization, see [6]. The last two, as well as our ℓq-algorithm, use the ℓ1-magic software available on Candès’ web page. It comes as a small surprise tha...

Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals

by Joel A. Tropp , Jason N. Laska, Marco F. Duarte, Justin K. Romberg, Richard G. Baraniuk , 2009
"... Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, alt ..."
Abstract - Cited by 158 (18 self) - Add to MetaCart
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system’s performance that supports the empirical observations.
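A toy discrete simulation makes the architecture concrete. The sketch below (illustrative parameters and variable names, not the authors' hardware model) builds a signal that is K-sparse in the length-W discrete Fourier basis, multiplies it by a random ±1 chipping sequence, and applies integrate-and-dump sampling to produce R measurements per window; the resulting sensing matrix acting on the sparse spectrum is what a sparse-recovery solver would then invert.

```python
import numpy as np

rng = np.random.default_rng(1)
W, R, K = 512, 64, 5                    # Nyquist rate, sampling rate, sparsity
block = W // R

# K-sparse spectrum and its Nyquist-rate time samples.
spectrum = np.zeros(W, dtype=complex)
support = rng.choice(W, K, replace=False)
spectrum[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = np.fft.ifft(spectrum) * W           # time-domain signal at rate W

# Random demodulator: chip with a random +/-1 sequence, then integrate-and-dump.
chips = rng.choice([-1.0, 1.0], size=W)
y = (chips * x).reshape(R, block).sum(axis=1)    # R measurements

# Equivalent sensing matrix acting directly on the sparse spectrum.
F = np.fft.ifft(np.eye(W)) * W                   # inverse-DFT synthesis matrix
H = np.kron(np.eye(R), np.ones((1, block)))      # integrate-and-dump (R x W)
Phi = H @ np.diag(chips) @ F
print(np.allclose(Phi @ spectrum, y))            # True: same measurements
```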

Combining geometry and combinatorics: a unified approach to sparse signal recovery

by R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, M. J. Strauss , 2008
"... There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ an ..."
Abstract - Cited by 157 (14 self) - Add to MetaCart
There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIP-p matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
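The adjacency matrices in question are easy to sample. In the small sketch below (illustrative parameters), each signal coordinate is connected to a fixed number of measurement coordinates chosen at random, giving a 0/1 matrix with deg ones per column; for such a matrix, the RIP-1 property discussed in the paper says that ‖Φx‖₁ stays close to deg·‖x‖₁ for sparse x, which the last lines check empirically on one random sparse vector.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, deg = 1000, 200, 8       # signal length, measurements, left degree

# Adjacency matrix of a random left-regular bipartite graph: deg ones per column.
Phi = np.zeros((m, n))
for j in range(n):
    Phi[rng.choice(m, deg, replace=False), j] = 1.0

# Empirical RIP-1 check: for a k-sparse x, ||Phi x||_1 / (deg * ||x||_1) ~ 1.
k = 10
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(Phi @ x, 1) / (deg * np.linalg.norm(x, 1)))
```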

Compressive Sensing and Structured Random Matrices

by Holger Rauhut - Radon Series Comp. Appl. Math., de Gruyter, 2011
"... These notes give a mathematical introduction to compressive sensing focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to ..."
Abstract - Cited by 157 (18 self) - Add to MetaCart
These notes give a mathematical introduction to compressive sensing focusing on recovery using ℓ1-minimization and structured random matrices. An emphasis is put on techniques for proving probabilistic estimates for condition numbers of structured random matrices. Estimates of this type are key to providing conditions that ensure exact or approximate recovery of sparse vectors using ℓ1-minimization.
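For concreteness, one of the most common structured random matrices treated in such notes, a partial random Fourier matrix, can be generated in a couple of lines (sizes and the column normalization below are illustrative choices); ℓ1-minimization would then be run with this A as the measurement matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m = 256, 60
rows = rng.choice(N, m, replace=False)             # random row subset
F = np.fft.fft(np.eye(N)) / np.sqrt(N)             # unitary DFT matrix
A = F[rows, :] * np.sqrt(N / m)                    # m x N, unit-norm columns
```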

High-Resolution Radar via Compressed Sensing

by Matthew A. Herman, Thomas Strohmer , 2008
"... A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N ×N grid. Assuming the number of targets K is small (i.e., K ≪ N 2), then we can transmit a sufficiently “incoherent ” pulse and employ the techniques of compressed sensing to reconstruct the ta ..."
Abstract - Cited by 153 (9 self) - Add to MetaCart
A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N × N grid. Assuming the number of targets K is small (i.e., K ≪ N²), then we can transmit a sufficiently “incoherent” pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution over classical radar.
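In this discretized model, each grid cell corresponds to one time-frequency shift of the transmitted pulse, so the sensing dictionary has N² columns and a K-target scene is a K-sparse vector. A small sketch of that dictionary, with a random unimodular pulse standing in for the paper's probing waveform (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 64, 4                                    # grid size, number of targets

pulse = np.exp(2j * np.pi * rng.random(N))      # unimodular probing sequence
t = np.arange(N)
cols = []
for tau in range(N):                            # discrete delay
    delayed = np.roll(pulse, tau)
    for f in range(N):                          # discrete Doppler shift
        cols.append(delayed * np.exp(2j * np.pi * f * t / N))
A = np.stack(cols, axis=1) / np.sqrt(N)         # N x N^2, unit-norm columns

# K-sparse target scene and the corresponding (noise-free) received signal.
scene = np.zeros(N * N, dtype=complex)
scene[rng.choice(N * N, K, replace=False)] = rng.standard_normal(K)
echo = A @ scene
```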

Greedy solution of ill-posed problems: Error bounds and exact inversion

by L Denis, D A Lorenz, D Trede , 2009
"... ..."
Abstract - Cited by 131 (8 self) - Add to MetaCart
Abstract not found

Citation Context

...e same object) and of course related to the correlations. Finally, a further direction of research may be to investigate other types of pursuit algorithms like regularized orthogonal matching pursuit [34] or CoSaMP [33]. Acknowledgments Dirk Lorenz acknowledges support by the DFG under grant LO 1436/2-1 within the Priority Program SPP 1324 “Extraction of quantitative information in complex systems”. Den...

Compressed Sensing: Theory and Applications

by Gitta Kutyniok , 2012
"... Compressed sensing is a novel research area, which was introduced in 2006, and since then has already become a key concept in various areas of applied mathematics, com-puter science, and electrical engineering. It surprisingly predicts that high-dimensional signals, which allow a sparse representati ..."
Abstract - Cited by 120 (30 self) - Add to MetaCart
Compressed sensing is a novel research area, which was introduced in 2006, and since then has already become a key concept in various areas of applied mathematics, computer science, and electrical engineering. It surprisingly predicts that high-dimensional signals, which allow a sparse representation by a suitable basis or, more generally, a frame, can be recovered from what was previously considered highly incomplete linear measurements by using efficient algorithms. This article shall serve as an introduction to and a survey about compressed sensing. Key Words. Dimension reduction. Frames. Greedy algorithms. Ill-posed inverse problems. ℓ1 minimization. Random matrices. Sparse approximation. Sparse recovery.

Citation Context

...solution of (P0) satisfying ‖x‖0 < (1/2)(1 + µ(A)⁻¹). Then OMP with error threshold ε = 0 recovers x. Other prominent examples of greedy algorithms are stagewise OMP (StOMP) [28], regularized OMP (ROMP) [51], and compressive sampling MP (CoSaMP) [50]. For a survey of these methods, we wish to refer to [32, Chapter 8]. A very recent development are message passing algorithms for compressed sensing pioneer...
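The context quotes the classical coherence-based guarantee for orthogonal matching pursuit. For reference, here is a minimal sketch of OMP itself (illustrative code, not the implementation used in the survey): greedily pick the column most correlated with the residual, refit all selected coefficients by least squares, and stop once the residual norm falls below a threshold, where a tiny numerical tolerance stands in for the exact ε = 0 stopping rule.

```python
import numpy as np

def omp(A, y, eps=1e-10, max_iter=None):
    """Orthogonal matching pursuit with residual-norm stopping threshold eps."""
    m, d = A.shape
    max_iter = max_iter if max_iter is not None else m
    support = []
    x = np.zeros(d)
    r = y.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= eps:
            break
        j = int(np.argmax(np.abs(A.T @ r)))    # most correlated column
        if j in support:                       # no further progress possible
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(d)
        x[support] = coef                      # refit on the current support
        r = y - A @ x
    return x
```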

Signal Recovery From Incomplete and Inaccurate Measurements via Regularized Orthogonal Matching Pursuit

by Deanna Needell, Roman Vershynin , 2007
"... We demonstrate a simple greedy algorithm that can reliably recover a vector v ∈ R d from incomplete and inaccurate measurements x = Φv + e. Here Φ is a N × d measurement matrix with N ≪ d, and e is an error vector. Our algorithm, Regularized Orthogonal Matching Pursuit (ROMP), seeks to close the ga ..."
Abstract - Cited by 115 (4 self) - Add to MetaCart
We demonstrate a simple greedy algorithm that can reliably recover a vector v ∈ R^d from incomplete and inaccurate measurements x = Φv + e. Here Φ is an N × d measurement matrix with N ≪ d, and e is an error vector. Our algorithm, Regularized Orthogonal Matching Pursuit (ROMP), seeks to close the gap between two major approaches to sparse recovery. It combines the speed and ease of implementation of the greedy methods with the strong guarantees of the convex programming methods. For any measurement matrix Φ that satisfies a Uniform Uncertainty Principle, ROMP recovers a signal v with O(n) nonzeros from its inaccurate measurements x in at most n iterations, where each iteration amounts to solving a Least Squares Problem. The noise level of the recovery is proportional to √(log n) ‖e‖2. In particular, if the error term e vanishes the reconstruction is exact. This stability result extends naturally to the very accurate recovery of approximately sparse signals.
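The iteration described in this abstract translates almost line for line into code. Below is a sketch following the abstract's notation (measurements x = Φv + e, sparsity level n), with the regularization step implemented by scanning contiguous groups of the sorted correlations whose magnitudes are within a factor of two of each other; this is an illustrative reading of the algorithm, not the authors' reference implementation.

```python
import numpy as np

def romp(Phi, x, n):
    """Regularized orthogonal matching pursuit, sketched from the abstract.

    Phi: N x d measurement matrix, x: measurements Phi @ v + e, n: sparsity.
    """
    N, d = Phi.shape
    I = []                     # accumulated index set
    v = np.zeros(d)
    r = x.copy()
    for _ in range(n):         # at most n iterations
        u = Phi.T @ r
        # J: indices of the n largest-magnitude correlations (drop exact zeros).
        J = [j for j in np.argsort(-np.abs(u))[:n] if u[j] != 0]
        if not J:
            break
        # Regularization: among groups with comparable magnitudes
        # (max <= 2 * min), keep the group with the largest energy.
        mags = np.abs(u[J])
        best, best_energy = [], -1.0
        for a in range(len(J)):
            group = [J[b] for b in range(a, len(J)) if 2 * mags[b] >= mags[a]]
            energy = float(np.sum(np.abs(u[group]) ** 2))
            if energy > best_energy:
                best, best_energy = group, energy
        I = sorted(set(I) | set(best))
        # Least-squares re-estimation on the enlarged index set.
        coef, *_ = np.linalg.lstsq(Phi[:, I], x, rcond=None)
        v = np.zeros(d)
        v[I] = coef
        r = x - Phi @ v
        if np.linalg.norm(r) < 1e-10 or len(I) >= 2 * n:
            break
    return v
```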

Citation Context

...m matrices, such as partial Fourier, Bernoulli and Gaussian, satisfy the Restricted Isometry condition with parameters n ≥ 1, ε ∈ (0, 1/2) provided that N = n ε^{-O(1)} log^{O(1)} d; see e.g. Section 2 of [11] and the references therein. Therefore, a computationally tractable exact recovery of sparse signals is possible with the number of measurements N roughly proportional to the sparsity level n, which i...

Signal Processing with Compressive Measurements

by Mark A. Davenport, Petros T. Boufounos, Michael B. Wakin, Richard G. Baraniuk , 2009
"... The recently introduced theory of compressive sensing enables the recovery of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. Interestingly, it has been sh ..."
Abstract - Cited by 102 (25 self) - Add to MetaCart
The recently introduced theory of compressive sensing enables the recovery of sparse or compressible signals from a small set of nonadaptive, linear measurements. If properly chosen, the number of measurements can be much smaller than the number of Nyquist-rate samples. Interestingly, it has been shown that random projections are a near-optimal measurement scheme. This has inspired the design of hardware systems that directly implement random measurement protocols. However, despite the intense focus of the community on signal recovery, many (if not most) signal processing problems do not require full signal recovery. In this paper, we take some first steps in the direction of solving inference problems—such as detection, classification, or estimation—and filtering problems using only compressive measurements and without ever reconstructing the signals involved. We provide theoretical bounds along with experimental results.
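One of the simplest examples of such an inference task is detecting a known template directly in the measurement domain. A hedged sketch (the names, sizes, and threshold are illustrative stand-ins, not the detectors analyzed in the paper): compare the compressive measurements against the compressed template with a normalized correlation, with no signal reconstruction anywhere in the pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 1024, 128
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
s = rng.standard_normal(n)
s /= np.linalg.norm(s)                           # known unit-norm template

def detect(y, Phi, s, thresh=0.5):
    """Compressive matched filter: decide 'present' if the normalized
    correlation of y with the compressed template Phi @ s exceeds thresh."""
    t = Phi @ s
    return float(y @ t) / float(t @ t) > thresh

y_present = Phi @ s + 0.05 * rng.standard_normal(m)   # signal plus noise
y_absent = 0.05 * rng.standard_normal(m)              # measurement noise only
print(detect(y_present, Phi, s), detect(y_absent, Phi, s))
```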

Citation Context

...onlinear and relatively expensive optimization-based or iterative algorithms [3]–[5]. Thus, up to this point, most of the CS literature has focused on improving the speed and accuracy of this process [6]–[9]. However, signal recovery is not actually necessary in many signal processing applications. Very often we are only interested in solving an inference problem (extracting certain information from ...
