Results 1–7 of 7
Compressed sensing
, 2004
Abstract

Cited by 3625 (22 self)
We study the notion of Compressed Sensing (CS) as put forward in [14] and related work [20, 3, 4]. The basic idea behind CS is that a signal or image, unknown but supposed to be compressible by a known transform (e.g. wavelet or Fourier), can be subjected to fewer measurements than the nominal number of pixels, and yet be accurately reconstructed. The samples are nonadaptive and measure ‘random’ linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with measured data and having the smallest possible ℓ1 norm. We perform a series of numerical experiments which validate in general terms the basic idea proposed in [14, 3, 5], in the favorable case where the transform coefficients are sparse in the strong sense that the vast majority are zero. We then consider a range of less-favorable cases, in which the object has all coefficients nonzero, but the coefficients obey an ℓp bound, for some p ∈ (0, 1]. These experiments show that the basic inequalities behind the CS method seem to involve reasonable constants. We next consider synthetic examples modelling problems in spectroscopy and image pro…
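The reconstruction step this abstract describes — find the coefficients consistent with the measurements that have smallest ℓ1 norm — can be sketched as a small linear program. This is a minimal illustration, not the paper's experimental setup: the matrix sizes, seed, and the Gaussian choice of measurement matrix are illustrative assumptions, and the standard split α = u − v (u, v ≥ 0) is used to turn the ℓ1 objective into an LP.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 30, 120, 3                 # measurements, coefficients, nonzeros (toy sizes)
Phi = rng.standard_normal((n, m)) / np.sqrt(n)   # illustrative random measurement matrix
alpha0 = np.zeros(m)
alpha0[rng.choice(m, k, replace=False)] = 3.0 * rng.standard_normal(k)
y = Phi @ alpha0                     # far fewer measurements than coefficients

# Basis pursuit:  min ||alpha||_1  s.t.  Phi alpha = y,
# written as an LP via alpha = u - v with u, v >= 0.
c = np.ones(2 * m)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
alpha_hat = res.x[:m] - res.x[m:]
print(np.linalg.norm(alpha_hat - alpha0))   # small: exact recovery up to solver tolerance
```

With coefficients this sparse, the ℓ1 solution typically coincides with the true sparse vector, which is the favorable regime the abstract's first experiments validate.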
For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution
 Comm. Pure Appl. Math
, 2004
Abstract

Cited by 568 (10 self)
We consider linear equations y = Φα where y is a given vector in R^n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R^m. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that for large n, and for all Φ’s except a negligible fraction, the following property holds: For every y having a representation y = Φα0 by a coefficient vector α0 ∈ R^m with fewer than ρ·n nonzeros, the solution α1 of the ℓ1 minimization problem min ‖α‖1 subject to Φα = y is unique and equal to α0. In contrast, heuristic attempts to sparsely solve such systems – greedy algorithms and thresholding – perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues of random Wishart matrices.
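A quick way to see why the ℓ1 objective matters here: the underdetermined system has many solutions, and the obvious alternative — the minimum-ℓ2-norm solution via the pseudoinverse — is generically fully dense. A small numerical sketch, with illustrative sizes and unit-norm columns as in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 40, 100, 4
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)    # columns normalized to unit l2 norm

alpha0 = np.zeros(m)
alpha0[rng.choice(m, k, replace=False)] = 1.0
y = Phi @ alpha0                       # y has a 4-sparse representation

# The minimum-l2-norm solution also satisfies Phi alpha = y, but minimizing
# l2 spreads energy over all coordinates instead of concentrating it.
alpha_l2 = np.linalg.pinv(Phi) @ y

print(np.allclose(Phi @ alpha_l2, y))            # True: it solves the system
print(int((np.abs(alpha_l2) > 1e-8).sum()))      # far more than k nonzeros
```

The theorem quoted above says that, for most such Φ and sufficiently sparse α0, replacing the ℓ2 objective with ℓ1 singles out α0 exactly.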
Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit
, 2006
Abstract

Cited by 274 (22 self)
Finding the sparsest solution to underdetermined systems of linear equations y = Φx is NP-hard in general. We show here that for systems with ‘typical’/‘random’ Φ, a good approximation to the sparsest solution is obtained by applying a fixed number of standard operations from linear algebra. Our proposal, Stagewise Orthogonal Matching Pursuit (StOMP), successively transforms the signal into a negligible residual. Starting with initial residual r0 = y, at the s-th stage it forms the ‘matched filter’ Φ^T r_{s−1}, identifies all coordinates with amplitudes exceeding a specially-chosen threshold, solves a least-squares problem using the selected coordinates, and subtracts the least-squares fit, producing a new residual. After a fixed number of stages (e.g. 10), it stops. In contrast to Orthogonal Matching Pursuit (OMP), many coefficients can enter the model at each stage in StOMP while only one enters per stage in OMP; and StOMP takes a fixed number of stages (e.g. 10), while OMP can take many (e.g. n). StOMP runs much faster than competing proposals for sparse solutions, such as ℓ1 minimization and OMP, and so is attractive for solving large-scale problems. We use phase diagrams to compare algorithm performance. The problem of recovering a k-sparse vector x0 from (y, Φ), where Φ is random n × N and y = Φx0, is represented by a point (n/N, k/n).
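The stage loop described in this abstract translates almost line for line into code. The sketch below follows that description; the threshold rule t·σ_s with σ_s = ‖r_s‖2/√n and t = 2.0, as well as the toy problem sizes, are illustrative assumptions rather than the paper's tuned settings.

```python
import numpy as np

def stomp(Phi, y, n_stages=10, t=2.0):
    """Sketch of Stagewise OMP per the description above.

    Threshold: t * sigma_s with sigma_s = ||r_s||_2 / sqrt(n); t = 2.0 is an
    illustrative choice, not the paper's calibrated value.
    """
    n, N = Phi.shape
    support = np.zeros(N, dtype=bool)
    x = np.zeros(N)
    r = y.copy()
    for _ in range(n_stages):
        if np.linalg.norm(r) < 1e-10:          # residual already negligible
            break
        c = Phi.T @ r                          # matched filter
        sigma = np.linalg.norm(r) / np.sqrt(n)
        new = (np.abs(c) > t * sigma) & ~support
        if not new.any():
            break
        support |= new                         # many coordinates may enter at once
        x = np.zeros(N)
        x[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        r = y - Phi @ x                        # subtract the least-squares fit
    return x

# Toy instance: random Phi with unit-norm columns, 5-sparse x0.
rng = np.random.default_rng(0)
n, N, k = 128, 256, 5
Phi = rng.standard_normal((n, N))
Phi /= np.linalg.norm(Phi, axis=0)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = Phi @ x0

x_hat = stomp(Phi, y)
print(np.linalg.norm(x_hat - x0))   # small: support found, then exact least-squares fit
```

Note the contrast with OMP visible in the code: the boolean mask `new` admits every above-threshold coordinate in one stage, so a handful of stages suffices.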
For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution
 Comm. Pure Appl. Math
, 2004
Abstract

Cited by 122 (1 self)
We consider inexact linear equations y ≈ Φα where y is a given vector in R^n, Φ is a given n by m matrix, and we wish to find an α0,ɛ which is sparse and gives an approximate solution, obeying ‖y − Φα0,ɛ‖2 ≤ ɛ. In general this requires combinatorial optimization and so is considered intractable. On the other hand, the ℓ1 minimization problem min ‖α‖1 subject to ‖y − Φα‖2 ≤ ɛ is convex, and is considered tractable. We show that for most Φ the solution α̂1,ɛ = α̂1,ɛ(y, Φ) of this problem is quite generally a good approximation to α0,ɛ. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We study the underdetermined case where m ∼ An, A > 1, and prove the existence of ρ = ρ(A) and C > 0 so that for large n, and for all Φ’s except a negligible fraction, the following approximate sparse solution property of Φ holds: For every y having an approximation ‖y − Φα0‖2 ≤ ɛ by a coefficient vector α0 ∈ R^m with fewer than ρ·n nonzeros, we have ‖α̂1,ɛ − α0‖2 ≤ C·ɛ. This has two implications. First: for most Φ, whenever the combinatorial optimization result α0,ɛ would be very sparse, α̂1,ɛ is a good approximation to α0,ɛ. Second: suppose we are given noisy data obeying y = Φα0 + z where the unknown α0 is known to be sparse and the noise obeys ‖z‖2 ≤ ɛ. For most Φ, noise-tolerant ℓ1 minimization will stably recover α0 from y in the presence of noise z. We study also the barely-determined case m = n and reach parallel conclusions by slightly different arguments. The techniques include the use of almost-spherical sections in Banach space theory and concentration of measure for eigenvalues of random matrices.
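The stability claim — recovery error O(ɛ) from noisy data — can be observed numerically. Rather than the constrained problem min ‖α‖1 s.t. ‖y − Φα‖2 ≤ ɛ stated above, the sketch below solves the closely related Lagrangian (lasso) form with ISTA, a standard proximal gradient method; the penalty `lam`, iteration count, and problem sizes are hand-picked illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 80, 200, 5
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)       # unit l2-norm columns, as in the paper

alpha0 = np.zeros(m)
alpha0[rng.choice(m, k, replace=False)] = 2.0 * rng.choice([-1.0, 1.0], k)
eps = 0.05
z = rng.standard_normal(n)
z *= eps / np.linalg.norm(z)             # noise with ||z||_2 = eps exactly
y = Phi @ alpha0 + z

# Lagrangian form: min 0.5*||y - Phi a||_2^2 + lam*||a||_1, solved by ISTA.
# lam = 0.02 is an illustrative, hand-tuned value.
lam = 0.02
L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
a = np.zeros(m)
for _ in range(2000):
    a = a - (Phi.T @ (Phi @ a - y)) / L              # gradient step
    a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold

print(np.linalg.norm(a - alpha0))        # small: error on the order of eps
```

The recovery error stays comparable to the noise level ɛ, the behavior the theorem's bound ‖α̂1,ɛ − α0‖2 ≤ C·ɛ predicts for most Φ.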
Inverse Problems, doi:10.1088/0266-5611/23/3/008
, 2007
Abstract
We consider the problem of reconstructing a sparse signal x0 ∈ R^n from a limited number of linear measurements. Given m randomly selected samples of Ux0, where U is an orthonormal matrix, we show that ℓ1 minimization recovers x0 exactly when the number of measurements satisfies m ≥ const · µ²(U) · S · log n, where S is the number of nonzero components in x0 and µ is the largest entry in U properly normalized: µ(U) = √n · max_{k,j} |U_{k,j}|. The smaller µ is, the fewer samples needed. The result holds for ‘most’ sparse signals x0 supported on a fixed (but arbitrary) set T. Given T, if the sign of each nonzero entry of x0 on T and the observed values of Ux0 are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about as many samples.
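The quantity µ(U) driving the sample bound is easy to compute, and the two extreme cases make the bound concrete: the unitary DFT matrix has µ = 1 (maximally incoherent, so m ∼ S·log n samples suffice), while the identity has µ = √n, at which point µ²·S·log n exceeds n and the bound gives no savings. A small check, with n = 64 chosen for illustration:

```python
import numpy as np

def mu(U):
    """Coherence mu(U) = sqrt(n) * max_{k,j} |U_{k,j}| for an n x n orthonormal U."""
    n = U.shape[0]
    return np.sqrt(n) * np.max(np.abs(U))

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix: flat entries
I = np.eye(n)                            # identity: one big entry per column

print(mu(F))   # ~1: most incoherent case, fewest samples needed
print(mu(I))   # 8.0 = sqrt(64): most coherent case, no savings from the bound
```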
IOP Publishing, Inverse Problems
, 2013
Abstract
Multi-penalty regularization with a component-wise penalization.
Fast Gradient Algorithms for Structured Sparsity
 University of Alberta thesis
Abstract
Many machine learning problems can be formulated under the composite minimization framework, which usually involves a smooth loss function and a nonsmooth regularizer. Many algorithms have thus been proposed, with the main focus on first-order gradient methods due to their applicability in very large-scale application domains. A common requirement of many of these popular gradient algorithms is access to the proximal map of the regularizer, which unfortunately may not be easily computable in scenarios such as structured sparsity. In this thesis we first identify conditions under which the proximal map of a sum of functions is simply the composition of the proximal…
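The decomposition the abstract alludes to — the proximal map of a sum of functions reducing to a composition of the individual proximal maps — holds in special cases, and a classic one can be verified numerically: for the nonnegativity indicator plus a scaled ℓ1 norm, the composite prox equals the projection applied after soft thresholding, with the closed form max(x − t, 0). The sketch below is an illustration of that special case, not of the thesis's general conditions.

```python
import numpy as np

def prox_l1(x, t):
    """Proximal map of t*||.||_1: soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_nonneg(x):
    """Proximal map of the indicator of {x >= 0}: projection onto the orthant."""
    return np.maximum(x, 0.0)

# For f = indicator(x >= 0) and g = t*||.||_1, prox_{f+g} = prox_f o prox_g,
# and both sides equal the closed form max(x - t, 0).
rng = np.random.default_rng(3)
x = rng.standard_normal(1000)
t = 0.3
composed = prox_nonneg(prox_l1(x, t))
direct = np.maximum(x - t, 0.0)
print(np.allclose(composed, direct))     # True
```

When such a decomposition holds, a first-order method needs only the cheap individual proxes, which is exactly what makes the composite framework tractable at scale.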