Results 1–10 of 52
Compressed sensing
 IEEE Trans. Inf. Theory
, 2006
Abstract

Cited by 3600 (24 self)
We study the notion of Compressed Sensing (CS) as put forward in [14] and related work [20, 3, 4]. The basic idea behind CS is that a signal or image, unknown but supposed to be compressible by a known transform (e.g., wavelet or Fourier), can be subjected to fewer measurements than the nominal number of pixels, and yet be accurately reconstructed. The samples are nonadaptive and measure ‘random’ linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with the measured data and having the smallest possible ℓ1 norm. We perform a series of numerical experiments which validate in general terms the basic idea proposed in [14, 3, 5], in the favorable case where the transform coefficients are sparse in the strong sense that the vast majority are zero. We then consider a range of less-favorable cases, in which the object has all coefficients nonzero, but the coefficients obey an ℓp bound, for some p ∈ (0, 1]. These experiments show that the basic inequalities behind the CS method seem to involve reasonable constants. We next consider synthetic examples modelling problems in spectroscopy and image processing …
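The reconstruction rule in this abstract (take random linear measurements, then find the coefficient vector consistent with the data that has smallest ℓ1 norm) can be sketched in a few lines. This is a hypothetical illustration, not the paper's experimental code: the signal is sparse in the identity basis for simplicity, and the sizes, seed, and LP reformulation via `scipy.optimize.linprog` are our own choices.

```python
# Basis-pursuit sketch: recover a k-sparse signal from m < n random
# measurements by minimizing the l1 norm subject to A x = y.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 60, 30, 4                         # ambient dim, measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # 'random' linear combinations
y = A @ x_true

# LP reformulation: x = u - v with u, v >= 0, minimize sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

err = np.linalg.norm(x_hat - x_true)        # small when l1 recovery succeeds
```

With k much smaller than m, the ℓ1 minimizer typically coincides with the sparse signal, which is the phenomenon the paper's experiments quantify.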
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
, 2007
Abstract

Cited by 423 (37 self)
A full-rank matrix A ∈ R^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical …
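One classic example of the "easily verifiable conditions" this survey reviews is the mutual-coherence bound: if μ(A) is the largest absolute inner product between distinct unit-norm columns of A, then any solution of Ax = b with fewer than (1 + 1/μ(A))/2 nonzeros is necessarily the unique sparsest solution. A minimal sketch of computing this bound (sizes and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 20, 40                        # n rows, m columns, n < m as above
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)       # normalize to unit l2-norm columns

G = np.abs(A.T @ A)                  # absolute Gram matrix
np.fill_diagonal(G, 0.0)             # ignore the trivial diagonal entries
mu = G.max()                         # mutual coherence mu(A)

# Any solution with ||x||_0 < (1 + 1/mu)/2 is the unique sparsest solution.
bound = 0.5 * (1 + 1.0 / mu)
```

The check costs one matrix product, in contrast to the combinatorial cost of certifying sparsity directly, which is what makes the condition practically useful.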
For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution
 Comm. Pure Appl. Math
, 2004
Abstract

Cited by 121 (1 self)
We consider inexact linear equations y ≈ Φα where y is a given vector in R^n, Φ is a given n-by-m matrix, and we wish to find an α_{0,ε} which is sparse and gives an approximate solution, obeying ‖y − Φα_{0,ε}‖_2 ≤ ε. In general this requires combinatorial optimization and so is considered intractable. On the other hand, the ℓ1-minimization problem min ‖α‖_1 subject to ‖y − Φα‖_2 ≤ ε is convex, and is considered tractable. We show that for most Φ the solution α̂_{1,ε} = α̂_{1,ε}(y, Φ) of this problem is quite generally a good approximation to α_{0,ε}. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We study the underdetermined case where m ∼ An, A > 1, and prove the existence of ρ = ρ(A) and C > 0 so that for large n, and for all Φ's except a negligible fraction, the following approximate sparse solution property of Φ holds: for every y having an approximation ‖y − Φα_0‖_2 ≤ ε by a coefficient vector α_0 ∈ R^m with fewer than ρ·n nonzeros, we have ‖α̂_{1,ε} − α_0‖_2 ≤ C·ε. This has two implications. First: for most Φ, whenever the combinatorial optimization result α_{0,ε} would be very sparse, α̂_{1,ε} is a good approximation to α_{0,ε}. Second: suppose we are given noisy data obeying y = Φα_0 + z where the unknown α_0 is known to be sparse and the noise satisfies ‖z‖_2 ≤ ε. For most Φ, noise-tolerant ℓ1-minimization will stably recover α_0 from y in the presence of the noise z. We also study the barely-determined case m = n and reach parallel conclusions by slightly different arguments. The techniques include the use of almost-spherical sections in Banach space theory and concentration of measure for eigenvalues of random matrices.
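The stability phenomenon can be illustrated numerically. The sketch below solves the closely related unconstrained (LASSO) form 0.5·‖y − Φα‖_2² + λ‖α‖_1 by iterative soft-thresholding (ISTA), rather than the constrained problem the paper analyzes; the sizes, seed, solver, and value of λ are all our own illustrative choices.

```python
# Noise-tolerant sketch: y = Phi @ a0 + z with ||z||_2 <= eps; recover a0
# approximately via the LASSO, solved by iterative soft-thresholding.
import numpy as np

rng = np.random.default_rng(1)
n, m, k, eps = 40, 80, 3, 0.01          # rows, cols (m ~ A*n, A = 2), sparsity, noise

Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)       # unit l2-norm columns, as in the paper

a0 = np.zeros(m)
a0[rng.choice(m, k, replace=False)] = 1.0
z = rng.standard_normal(n)
z *= eps / np.linalg.norm(z)             # noise of norm exactly eps
y = Phi @ a0 + z

lam = 0.01
L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
a = np.zeros(m)
for _ in range(2000):                    # ISTA iterations
    g = a + (Phi.T @ (y - Phi @ a)) / L  # gradient step on the data-fit term
    a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

err = np.linalg.norm(a - a0)             # small, of the order of eps
```

The recovered error stays proportional to the noise level, which is the qualitative content of the ‖α̂_{1,ε} − α_0‖_2 ≤ C·ε bound.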
Reconstruction and subgaussian operators in Asymptotic Geometric Analysis
 FUNCT. ANAL
Abstract

Cited by 76 (14 self)
We present a randomized method to approximate any vector v from some set T ⊂ R^n. The data one is given are the set T, vectors (X_i)_{i=1}^k in R^n, and k scalar products (⟨X_i, v⟩)_{i=1}^k, where the (X_i)_{i=1}^k are i.i.d. isotropic subgaussian random vectors in R^n and k ≪ n. We show that with high probability, any y ∈ T for which (⟨X_i, y⟩)_{i=1}^k is close to the data vector (⟨X_i, v⟩)_{i=1}^k will be a good approximation of v, and that the degree of approximation is determined by a natural geometric parameter associated with the set T. We also investigate a random method to identify exactly any vector which has a relatively short support, using linear subgaussian measurements as above. It turns out that our analysis, when applied to {−1, 1}-valued vectors with i.i.d. symmetric entries, yields new information on the geometry of faces of random {−1, 1}-polytopes; we show that a k-dimensional random {−1, 1}-polytope with n vertices is m-neighborly for very large m ≤ ck/log(c′n/k). The proofs are based on new estimates on the behavior of the empirical process sup_{f∈F} |k^{−1} Σ_{i=1}^k f²(X_i) − E f²| when F is a subset of the L2 sphere. The estimates are given in terms of the γ2 functional with respect to the ψ2 metric on F, and hold both in exponential probability and in expectation.
Fast solution of ℓ1-norm minimization problems when the solution may be sparse
, 2006
Abstract

Cited by 54 (1 self)
The minimum ℓ1-norm solution to an underdetermined system of linear equations y = Ax is often, remarkably, also the sparsest solution to that system. This sparsity-seeking property is of interest in signal processing and information transmission. However, general-purpose optimizers are much too slow for ℓ1 minimization in many large-scale applications. The Homotopy method was originally proposed by Osborne et al. for solving noisy, overdetermined ℓ1-penalized least squares problems. We here apply it to solve the noiseless underdetermined ℓ1-minimization problem min ‖x‖1 subject to y = Ax. We show that Homotopy runs much more rapidly than general-purpose LP solvers when sufficient sparsity is present. Indeed, the method often has the following k-step solution property: if the underlying solution has only k nonzeros, the Homotopy method reaches that solution in only k iterative steps. When this property holds and k is small compared to the problem size, ℓ1-minimization problems with k-sparse solutions can be solved at a fraction of the cost of solving one full-sized linear system. We demonstrate this k-step solution property for two kinds of problem suites. First, …
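A minimal homotopy sketch, under the simplifying assumption that variables only ever enter the active set — which is exactly the k-step regime described above; the general algorithm must also handle sign-removal events, which this sketch omits. All names, sizes, and the seed are our own illustrative choices, not the paper's test suites.

```python
import numpy as np

def homotopy(A, y, max_steps, tol=1e-9):
    # Follow the LASSO path from lambda = ||A^T y||_inf down toward 0,
    # adding one variable at each breakpoint (removal events NOT handled).
    n = A.shape[1]
    x = np.zeros(n)
    active = []
    for _ in range(max_steps):
        c = A.T @ (y - A @ x)                 # current correlations
        lam = np.max(np.abs(c))
        if lam < tol:
            break                             # residual is zero: done
        inactive = [j for j in range(n) if j not in active]
        active.append(inactive[int(np.argmax(np.abs(c[inactive])))])
        I = np.array(active)
        d = np.linalg.solve(A[:, I].T @ A[:, I], np.sign(c[I]))
        v = A.T @ (A[:, I] @ d)               # correlation change per unit step
        g = lam                               # default: ride the path to lambda = 0
        for j in range(n):                    # smallest step at which an
            if j in active:                   # inactive correlation catches up
                continue
            for num, den in ((lam - c[j], 1.0 - v[j]), (lam + c[j], 1.0 + v[j])):
                if den > tol and tol < num / den < g:
                    g = num / den
        x[I] += g * d
    return x

rng = np.random.default_rng(7)
m, n, k = 40, 80, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [10.0, -6.0, 3.0]
y = A @ x_true

x_hat = homotopy(A, y, max_steps=k + 1)       # k entering steps suffice here
```

When the k-step property holds, each iteration costs one small linear solve on the active set, which is where the speedup over a general LP solver comes from.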
Low-Dimensional Models for Dimensionality Reduction and Signal Recovery: A Geometric Perspective
, 2009
Abstract

Cited by 47 (12 self)
We compare and contrast, from a geometric perspective, a number of low-dimensional signal models that support stable, information-preserving dimensionality reduction. We consider sparse and compressible signal models for deterministic and random signals, structured sparse and compressible signal models, point clouds, and manifold signal models. Each model has a particular geometrical structure that enables signal information to be stably preserved via a simple linear and nonadaptive projection to a much lower-dimensional space, whose dimension is either independent of the ambient dimension at best or grows logarithmically with it at worst. As a bonus, we point out a common misconception related to probabilistic compressible signal models: in fact, the generalized Gaussian and Laplacian random models do not support stable linear dimensionality reduction.
Sparse signal reconstruction via iterative support detection
 SIAM Journal on Imaging Sciences
, 2010
Abstract

Cited by 36 (5 self)
We present a novel sparse signal reconstruction method, iterative support detection (ISD), aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical ℓ1 minimization approach. ISD addresses failed reconstructions of ℓ1 minimization due to insufficient measurements. It estimates a support set I from a current reconstruction and obtains a new reconstruction by solving the minimization problem min { Σ_{i∉I} |x_i| : Ax = b }, and it iterates these two steps a small number of times. ISD differs from the orthogonal matching pursuit method, as well as its variants, because (i) the index set I in ISD is not necessarily nested or increasing, and (ii) the minimization problem above updates all the components of x at the same time. We generalize the null space property to the truncated null space property and present our analysis of ISD based on the latter. We introduce an efficient implementation of ISD, called threshold-ISD, for recovering signals with fast-decaying distributions of nonzeros from compressive sensing measurements. Numerical experiments show that threshold-ISD has significant advantages over the classical ℓ1 minimization approach, as well as two state-of-the-art algorithms: the iterative reweighted ℓ1 minimization algorithm (IRL1) and the iterative reweighted least-squares algorithm (IRLS). MATLAB code is available for download from …
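The two alternating ISD steps can be sketched as follows, with the truncated ℓ1 problem solved as a linear program. The threshold rule, problem sizes, and seed here are our own simplified choices (the paper's threshold-ISD uses a more careful rule tied to the decay of the coefficients), and the setting is deliberately mild so the iteration visibly converges.

```python
import numpy as np
from scipy.optimize import linprog

def truncated_l1(A, b, I):
    # min sum_{i not in I} |x_i|  s.t.  Ax = b,  via the split x = u - v.
    n = A.shape[1]
    w = np.ones(n)
    w[list(I)] = 0.0                      # detected support is not penalized
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(11)
m, n, k = 30, 60, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3 * rng.standard_normal(k)
b = A @ x_true

x = truncated_l1(A, b, I=[])              # iterate 0 is plain l1 minimization
for _ in range(4):                        # a few support-detection rounds
    I = np.flatnonzero(np.abs(x) > 0.1 * np.abs(x).max())  # detect support
    x = truncated_l1(A, b, I)             # re-solve with I unpenalized
```

Note that I is recomputed from scratch each round, so it need not be nested or increasing, and each solve updates every component of x at once — the two properties the abstract uses to distinguish ISD from matching pursuit.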
A remark on compressed sensing
, 2007
Abstract

Cited by 29 (0 self)
Recently, a new direction in signal processing, “Compressed Sensing”, has been actively developed. A number of authors have pointed out a connection between the Compressed Sensing problem and the problem of estimating Kolmogorov widths, studied in the seventies and eighties of the last century. In this paper we make the above-mentioned connection more precise. DOI: 10.1134/S0001434607110193
EXPLICIT CONSTRUCTIONS OF RIP MATRICES AND RELATED PROBLEMS
Abstract

Cited by 29 (1 self)
We give a new explicit construction of n × N matrices satisfying the Restricted Isometry Property (RIP). Namely, for some ε > 0, large N, and any n satisfying N^{1−ε} ≤ n ≤ N, we construct RIP matrices of order k = n^{1/2+ε} and constant δ = n^{−ε}. This overcomes the natural barrier k = O(n^{1/2}) for proofs based on small coherence, which are used in all previous explicit constructions of RIP matrices. Key ingredients in our proof are new estimates for sumsets in product sets and for exponential sums with products of sets possessing special additive structure. We also give a construction of sets of n complex numbers whose k-th moments are uniformly small for 1 ≤ k ≤ N (Turán's power sum problem), which improves upon known explicit constructions when (log N)^{1+o(1)} ≤ n ≤ (log N)^{4+o(1)}. This latter construction produces elementary explicit examples of n × N matrices that satisfy the RIP and whose columns constitute a new spherical code; for those problems the parameters closely match those of existing constructions in the range (log N)^{1+o(1)} ≤ n ≤ (log N)^{5/2+o(1)}.
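For intuition, the RIP constant δ_k of a small matrix can be computed by brute force over all k-column submatrices — precisely the exponential check that makes certified explicit constructions valuable. The Gaussian ensemble and tiny sizes below are our own illustrative choices:

```python
import numpy as np
from itertools import combinations

def rip_constant(A, k):
    # delta_k = max over k-column submatrices of how far the squared
    # singular values deviate from 1 (feasible only for tiny sizes).
    delta = 0.0
    for S in combinations(range(A.shape[1]), k):
        sv = np.linalg.svd(A[:, list(S)], compute_uv=False)
        delta = max(delta, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))
    return delta

rng = np.random.default_rng(13)
A = rng.standard_normal((10, 14))
A /= np.linalg.norm(A, axis=0)        # unit-norm columns
d2 = rip_constant(A, 2)               # for k = 2 this equals the coherence
```

For k = 2 the submatrix Gram eigenvalues are 1 ± |⟨a_i, a_j⟩|, so δ_2 is exactly the mutual coherence; coherence-based proofs stop at k = O(n^{1/2}), which is the barrier the paper's construction breaks.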
Dense error correction via ℓ1 minimization
, 2009
Abstract

Cited by 22 (5 self)
This paper studies the problem of recovering a non-negative sparse signal x ∈ R^n from highly corrupted linear measurements y = Ax + e ∈ R^m, where e is an unknown error vector whose nonzero entries may be unbounded. Motivated by an observation from face recognition in computer vision, this paper proves that for highly correlated (and possibly overcomplete) dictionaries A, any non-negative, sufficiently sparse signal x can be recovered by solving an ℓ1-minimization problem: min ‖x‖1 + ‖e‖1 subject to y = Ax + e. More precisely, if the fraction ρ of errors is bounded away from one and the support of x grows sublinearly in the dimension m of the observation, then as m goes to infinity, the above ℓ1-minimization succeeds for all signals x and almost all sign-and-support patterns of e. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100% of the observations corrupted. The proof relies on a careful characterization of the faces of the convex polytope spanned together by the standard cross-polytope and a set of i.i.d. Gaussian vectors with nonzero mean and small variance, which we call the “cross-and-bouquet” model. Simulations and experimental results corroborate the findings and suggest extensions to the result.
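The cross-and-bouquet setting can be simulated directly: dictionary columns clustered tightly around a common mean (the "bouquet"), a constant fraction of grossly corrupted observations, and the combined ℓ1 program solved as a linear program. All sizes, the corruption rate, and the bouquet tightness below are our own illustrative choices; the theorem itself is asymptotic in m, so a small instance only gestures at the phenomenon.

```python
# Dense error correction sketch: solve
#   min ||x||_1 + ||e||_1  s.t.  y = Ax + e,  x >= 0
# as an LP with variables [x, e_plus, e_minus].
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
m, n, k = 100, 40, 2                   # observations, dictionary size, sparsity
frac_err = 0.2                         # fraction of corrupted entries

mu = rng.standard_normal(m)
mu /= np.linalg.norm(mu)
A = mu[:, None] + 0.05 * rng.standard_normal((m, n))  # highly correlated columns
A /= np.linalg.norm(A, axis=0)

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)  # non-negative
e_true = np.zeros(m)
bad = rng.choice(m, int(frac_err * m), replace=False)
e_true[bad] = 10 * rng.standard_normal(len(bad))       # unbounded gross errors
y = A @ x_true + e_true

c = np.ones(n + 2 * m)
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n]
e_hat = res.x[n:n + m] - res.x[n + m:]
# x_hat should approximate x_true when m is large enough for the regime
```

The sign constraint x ≥ 0 enters the LP directly through the bounds, mirroring the non-negativity assumption in the theorem.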