Results 1 - 10 of 270
Robust Uncertainty Principles: Exact Signal Reconstruction From Highly Incomplete Frequency Information
, 2006
"... This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal and a randomly chosen set of frequencies. Is it possible to reconstruct from the partial knowledge of its Fourier coefficients on the set? A typical result of this pa ..."
Abstract
-
Cited by 2632 (50 self)
- Add to MetaCart
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ ℂ^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ) δ(t − τ) obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem min_g Σ_t |g(t)| s.t. ĝ(ω) = f̂(ω) for all ω ∈ Ω.
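A minimal sketch of the recovery problem stated above, assuming numpy and cvxpy and a real-valued test signal; N, the spike count, and |Ω| are illustrative choices, not values from the paper:

```python
# Sketch: l1 recovery from partial Fourier samples, as in the abstract above.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, spikes = 128, 5
f = np.zeros(N)
f[rng.choice(N, spikes, replace=False)] = rng.standard_normal(spikes)

omega = rng.choice(N, 40, replace=False)      # observed frequency subset Omega
A = np.fft.fft(np.eye(N))[omega, :]           # partial DFT matrix
b = A @ f

# Stack real and imaginary parts so the constraint stays real-valued.
A_ri = np.vstack([A.real, A.imag])
b_ri = np.concatenate([b.real, b.imag])

g = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm1(g)), [A_ri @ g == b_ri]).solve()
print("max error:", np.max(np.abs(g.value - f)))   # ~0: exact recovery
```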
Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
"... Suppose we are given a vector f in RN. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects— discrete digital signals, images, etc; how many linear m ..."
Abstract
-
Cited by 1513 (20 self)
- Add to MetaCart
Suppose we are given a vector f in ℝ^N. How many linear measurements do we need to make about f to be able to recover f to within precision ɛ in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ɛ? This paper shows that if the objects of interest are sparse or compressible in the sense that the reordered entries of a signal f ∈ F decay like a power law (or if the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ … ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements 〈f, Xk〉, k = 1, …, K, where the Xk are N-dimensional Gaussian …
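A small numpy sketch of the setup described above: a signal whose reordered entries obey the power decay law, checked against the weak-ℓp bound, followed by the Gaussian measurements 〈f, Xk〉. The values of N, K, p, and C are illustrative assumptions:

```python
# Sketch: a compressible signal in a weak-lp ball and Gaussian measurements.
import numpy as np

rng = np.random.default_rng(1)
N, K, p, C = 1024, 100, 0.5, 1.0

# Entries decay like C * n^(-1/p) after reordering, so f lies in the weak-lp ball.
magnitudes = C * np.arange(1, N + 1) ** (-1.0 / p)
f = rng.permutation(magnitudes * rng.choice([-1, 1], N))

# Verify the power decay law on the reordered entries.
reordered = np.sort(np.abs(f))[::-1]
assert np.all(reordered <= C * np.arange(1, N + 1) ** (-1.0 / p) + 1e-12)

# K linear measurements <f, X_k> with N-dimensional Gaussian vectors X_k.
X = rng.standard_normal((K, N))
y = X @ f
print("measurements:", y.shape)   # recovery would then proceed by l1 minimization
```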
Compressive sampling
, 2006
"... Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired res ..."
Abstract
-
Cited by 1441 (15 self)
- Add to MetaCart
Conventional wisdom and common practice in the acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, i.e., the number of pixels in the image. This paper surveys an emerging theory which goes by the name of “compressive sampling” or “compressed sensing,” and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately, and sometimes even exactly, from a number of samples which is far smaller than the desired resolution of the image/signal, e.g., the number of pixels in the image. It is believed that compressive sampling has far-reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously. In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- California Institute of Technology, Pasadena
, 2008
"... Abstract. Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery alg ..."
Abstract
-
Cited by 770 (13 self)
- Add to MetaCart
Abstract. Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect to an orthonormal basis. The major algorithmic challenge in compressive sampling is to approximate a compressible signal from noisy samples. This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches. Moreover, this algorithm offers rigorous bounds on computational cost and storage. It is likely to be extremely efficient for practical problems because it requires only matrix–vector multiplies with the sampling matrix. For compressible signals, the running time is just O(N log² N), where N is the length of the signal.
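A compact, unoptimized sketch of the CoSaMP iteration as the abstract describes it (proxy, support merge, least squares, pruning); it uses dense numpy least squares rather than the fast matrix–vector multiplies the paper analyzes, and the problem sizes and iteration cap are illustrative assumptions:

```python
# Sketch of the CoSaMP iteration (after Needell & Tropp).
import numpy as np

def cosamp(Phi, u, s, iters=30, tol=1e-10):
    _, N = Phi.shape
    a = np.zeros(N)                              # current approximation
    v = u.copy()                                 # residual of the samples
    for _ in range(iters):
        y = Phi.T @ v                            # signal proxy
        omega = np.argsort(np.abs(y))[-2 * s:]   # 2s largest proxy entries
        T = np.union1d(omega, np.flatnonzero(a)) # merge with current support
        b = np.zeros(N)
        b[T] = np.linalg.lstsq(Phi[:, T], u, rcond=None)[0]
        keep = np.argsort(np.abs(b))[-s:]        # prune to the s largest
        a = np.zeros(N)
        a[keep] = b[keep]
        v = u - Phi @ a                          # update the residual
        if np.linalg.norm(v) <= tol * np.linalg.norm(u):
            break
    return a

rng = np.random.default_rng(2)
m, N, s = 80, 256, 8
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
print("error:", np.linalg.norm(cosamp(Phi, Phi @ x, s) - x))
```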
Optimally sparse representation in general (non-orthogonal) dictionaries via ℓ¹ minimization
- PROC. NATL ACAD. SCI. USA 100 2197–202
, 2002
"... Given a ‘dictionary’ D = {dk} of vectors dk, we seek to represent a signal S as a linear combination S = ∑ k γ(k)dk, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered ..."
Abstract
-
Cited by 633 (38 self)
- Add to MetaCart
Given a ‘dictionary’ D = {dk} of vectors dk, we seek to represent a signal S as a linear combination S = ∑k γ(k) dk, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered the special case where D is an overcomplete system consisting of exactly two orthobases, and has shown that, under a condition of mutual incoherence of the two bases, and assuming that S has a sufficiently sparse representation, this representation is unique and can be found by solving a convex optimization problem: specifically, minimizing the ℓ¹ norm of the coefficients γ. In this paper, we obtain parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems. We introduce the Spark, a measure of linear dependence in such a system; it is the size of the smallest linearly dependent subset of the {dk}. We show that, when the signal S has a representation using fewer than Spark(D)/2 nonzeros, this representation is necessarily unique. …
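The Spark can be computed, for tiny dictionaries only, by brute force over column subsets; a sketch, with an illustrative example dictionary chosen as an assumption:

```python
# Sketch: brute-force Spark(D), the size of the smallest linearly dependent
# subset of dictionary columns. Exponential-time; for illustration only.
import itertools
import numpy as np

def spark(D, tol=1e-10):
    n, m = D.shape
    for k in range(1, m + 1):
        for cols in itertools.combinations(range(m), k):
            # k columns are linearly dependent iff their rank is below k.
            if np.linalg.matrix_rank(D[:, cols], tol=tol) < k:
                return k
    return m + 1  # no dependent subset: columns in general position

# Identity plus one extra column that is a sum of two others.
D = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
k = spark(D)
print("Spark(D) =", k)  # any representation with < k/2 nonzeros is unique
```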
Uncertainty principles and ideal atomic decomposition
- IEEE Transactions on Information Theory
, 2001
"... Suppose a discrete-time signal S(t), 0 t<N, is a superposition of atoms taken from a combined time/frequency dictionary made of spike sequences 1ft = g and sinusoids expf2 iwt=N) = p N. Can one recover, from knowledge of S alone, the precise collection of atoms going to make up S? Because every d ..."
Abstract
-
Cited by 583 (20 self)
- Add to MetaCart
(Show Context)
Suppose a discrete-time signal S(t), 0 ≤ t < N, is a superposition of atoms taken from a combined time/frequency dictionary made of spike sequences 1_{t=τ} and sinusoids exp(2πiwt/N)/√N. Can one recover, from knowledge of S alone, the precise collection of atoms going to make up S? Because every discrete-time signal can be represented as a superposition of spikes alone, or as a superposition of sinusoids alone, there is no unique way of writing S as a sum of spikes and sinusoids in general. We prove that if S is representable as a highly sparse superposition of atoms from this time/frequency dictionary, then there is only one such highly sparse representation of S, and it can be obtained by solving the convex optimization problem of minimizing the ℓ1 norm of the coefficients among all decompositions. Here “highly sparse” means that Nt + Nw < √N/2, where Nt is the number of time atoms, Nw is the number of frequency atoms, and N is the length of the discrete-time signal.
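A short sketch of the combined time/frequency dictionary and the Nt + Nw < √N/2 threshold; N and the chosen atoms are illustrative assumptions:

```python
# Sketch: the combined time/frequency dictionary [I | F] from the abstract.
import numpy as np

N = 64
spikes = np.eye(N)                               # time atoms 1_{t = tau}
sinusoids = np.fft.fft(np.eye(N)) / np.sqrt(N)   # exp(2*pi*i*w*t/N) / sqrt(N)
D = np.hstack([spikes.astype(complex), sinusoids])

Nt, Nw = 2, 1                                    # atoms used from each half
S = D[:, [3, 10, N + 5]] @ np.array([1.0, -2.0, 0.5])

print(Nt + Nw < np.sqrt(N) / 2)   # True: the sparse representation of S is
                                  # unique, and l1 minimization will find it
```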
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
, 2007
"... A full-rank matrix A ∈ IR n×m with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combin ..."
Abstract
-
Cited by 427 (36 self)
- Add to MetaCart
A full-rank matrix A ∈ ℝ^(n×m) with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily-verifiable conditions under which optimally-sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical …
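One of the easily-verifiable conditions alluded to above can be sketched via mutual coherence: for a matrix A with unit-norm columns, Spark(A) ≥ 1 + 1/μ(A) (a result due to Donoho and Elad), so any solution with fewer than (1 + 1/μ(A))/2 nonzeros is the unique sparsest one. The random matrix below is an illustrative assumption:

```python
# Sketch: the mutual-coherence uniqueness condition for Ax = b.
import numpy as np

rng = np.random.default_rng(3)
n, m = 64, 128
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)        # normalize columns to unit norm

G = np.abs(A.T @ A)                   # magnitudes of pairwise inner products
np.fill_diagonal(G, 0.0)
mu = G.max()                          # mutual coherence mu(A)
print(f"mu = {mu:.3f}; solutions with < {(1 + 1 / mu) / 2:.1f} "
      "nonzeros are the unique sparsest ones")
```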
On sparse reconstruction from Fourier and Gaussian measurements
- Communications on Pure and Applied Mathematics
, 2006
"... Abstract. This paper improves upon best known guarantees for exact reconstruction of a sparse signal f from a small universal sample of Fourier measurements. The method for reconstruction that has recently gained momentum in the Sparse Approximation Theory is to relax this highly non-convex problem ..."
Abstract
-
Cited by 262 (8 self)
- Add to MetaCart
(Show Context)
Abstract. This paper improves upon the best known guarantees for exact reconstruction of a sparse signal f from a small universal sample of Fourier measurements. The method for reconstruction that has recently gained momentum in Sparse Approximation Theory is to relax this highly non-convex problem to a convex problem, and then solve it as a linear program. We show that there exists a set of frequencies Ω such that one can exactly reconstruct every r-sparse signal f of length n from its frequencies in Ω, using the convex relaxation, and Ω has size k(r, n) = O(r log(n) · log²(r) · log(r log n)) = O(r log⁴ n). A random set Ω satisfies this with high probability. This estimate is optimal within the log log n and log³ r factors. We also give a relatively short argument for a similar problem with k(r, n) ≤ r[12 + 8 log(n/r)] Gaussian measurements. We use methods of geometric functional analysis and probability theory in Banach spaces, which makes our arguments quite short.
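The two measurement bounds quoted above are easy to evaluate numerically; the sketch below assumes constant 1 in the O(·) Fourier bound, since the abstract leaves it unspecified:

```python
# Sketch: evaluating the Fourier and Gaussian measurement bounds for some (r, n).
import math

for r, n in [(10, 10**4), (50, 10**6)]:
    fourier = r * math.log(n) * math.log(r) ** 2 * math.log(r * math.log(n))
    gaussian = r * (12 + 8 * math.log(n / r))
    print(f"r={r}, n={n}: Fourier ~ {fourier:,.0f}, Gaussian <= {gaussian:,.0f}")
```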
A generalized uncertainty principle and sparse representation in pairs of bases
- IEEE Trans. Inform. Theory
, 2002
"... An elementary proof of a basic uncertainty principle concerning pairs of representations of ℜ N vectors in different orthonormal bases is provided. The result, slightly stronger than stated before, has a direct impact on the uniqueness property of the sparse representation of such vectors using pair ..."
Abstract
-
Cited by 247 (15 self)
- Add to MetaCart
An elementary proof of a basic uncertainty principle concerning pairs of representations of ℝ^N vectors in different orthonormal bases is provided. The result, slightly stronger than stated before, has a direct impact on the uniqueness property of the sparse representation of such vectors using pairs of orthonormal bases as overcomplete dictionaries. The main contribution of this paper is the improvement of an important result due to Donoho and Huo concerning the replacement of the ℓ0 optimization problem by a linear programming minimization when searching for the unique sparse representation.
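The linear-programming replacement discussed above has a standard formulation: minimize ‖x‖1 subject to Ax = b by splitting x = u − v with u, v ≥ 0. A sketch on an illustrative random instance, using scipy:

```python
# Sketch: l1 minimization as a linear program, min 1.(u+v) s.t. A(u-v) = b.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, m, k = 30, 60, 3
A = rng.standard_normal((n, m))
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
b = A @ x0

c = np.ones(2 * m)                    # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])             # equality constraint A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
x = res.x[:m] - res.x[m:]
print("recovered the sparse x0:", np.allclose(x, x0, atol=1e-6))
```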
Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit
, 2007
"... This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements – L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has advantage ..."
Abstract
-
Cited by 188 (8 self)
- Add to MetaCart
This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements – L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L1-minimization. Our algorithm ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is exact provided the linear measurements satisfy the Uniform Uncertainty Principle.
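A minimal sketch of the ROMP selection rule as described by Needell and Vershynin: take the largest proxy coordinates, then keep a maximal-energy subset whose magnitudes are within a factor of two of each other. The sizes and stopping rule below are illustrative assumptions:

```python
# Sketch of Regularized Orthogonal Matching Pursuit (ROMP).
import numpy as np

def romp(Phi, x, s, iters=None):
    _, N = Phi.shape
    I = np.array([], dtype=int)                    # accumulated support
    r = x.copy()                                   # residual
    for _ in range(iters or 2 * s):
        u = np.abs(Phi.T @ r)                      # observation proxy
        J = np.argsort(u)[::-1][:s]                # s biggest coordinates
        J = J[u[J] > 0]
        if J.size == 0:
            break
        # Regularization: maximal-energy run with comparable magnitudes
        # (every pair within a factor of 2).
        best, best_energy = None, -1.0
        for i in range(J.size):
            j = i
            while j < J.size and u[J[j]] >= u[J[i]] / 2:
                j += 1
            energy = np.sum(u[J[i:j]] ** 2)
            if energy > best_energy:
                best, best_energy = J[i:j], energy
        I = np.union1d(I, best)
        y = np.linalg.lstsq(Phi[:, I], x, rcond=None)[0]
        r = x - Phi[:, I] @ y                      # project out chosen atoms
        if np.linalg.norm(r) < 1e-10:
            break
    out = np.zeros(N)
    out[I] = np.linalg.lstsq(Phi[:, I], x, rcond=None)[0]
    return out

rng = np.random.default_rng(5)
m, N, s = 90, 256, 6
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
v = np.zeros(N)
v[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
print("error:", np.linalg.norm(romp(Phi, Phi @ v, s) - v))
```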