
## Compressive Sensing for Sparse Approximations: Constructions, Algorithms, and Analysis (2010)

Citations: 3 (0 self)

### Citations

7696 | Matrix Analysis - Horn, Johnson - 1985 |

3599 | Compressed Sensing - Donoho - 2006
Citation Context ... Compressive sensing, also referred to as compressed sensing or compressive sampling, is an emerging area in signal processing and information theory which has attracted a lot of attention recently [CT06] [Don06b]. The motivation behind compressive sensing is to do “sampling” and “compression” at the same time. In conventional wisdom, in order to fully recover a signal, one has to sample the signal at a sampling ... |

2710 | Atomic decomposition by basis pursuit - Chen, Donoho, et al. - 1998
Citation Context ...ctions from bandlimited data. With the LASSO algorithm [Tib96] proposed as a method in statistics for sparse model selection, the application areas for ℓ1 minimization began to broaden. Basis Pursuit [CDS98] was proposed in computational harmonic analysis for extracting a sparse signal representation from highly overcomplete dictionaries, and a related technique known as total variation minimization was ... |
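The Basis Pursuit program min ‖x‖1 subject to Ax = y cited here can be solved as a linear program by splitting x into nonnegative parts. A minimal sketch under illustrative assumptions (the sizes, seed, and use of SciPy's LP solver are mine, not from the thesis):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 40, 20, 3                      # ambient dim, measurements, sparsity (illustrative)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0                               # noiseless measurements

# Basis Pursuit as an LP: write x = u - v with u, v >= 0 and
# minimize sum(u) + sum(v) subject to A(u - v) = y.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(A @ x_hat - y))     # tiny: the equality constraint is met
print(np.sum(np.abs(x_hat)) <= np.sum(np.abs(x0)) + 1e-6)  # x0 is feasible, so True
```

Since x0 is itself feasible, the LP optimum never exceeds ‖x0‖1; with enough Gaussian measurements the minimizer typically coincides with x0, which is the recovery phenomenon the surrounding citations analyze.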

2608 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information - Candès, Romberg, et al. |

1665 | Matching pursuits with time-frequency dictionaries - Mallat, Zhang - 1993 |

1502 | Near-optimal signal recovery from random projections: Universal encoding strategies? - Candes, Tao - 2006
Citation Context ...O(n^3), where n is the number of unknowns), this may still be infeasible in applications where n is quite large (e.g., in current digital cameras the number of pixels is of the order n = 10^6 or more) [CR]. Therefore there is a need for methods and algorithms that are more computationally efficient. Also, in many of the previous works, random measurement matrices are used where a successful signal r... |

1432 | Compressive sampling - Candès - 2006
Citation Context ... Gene expression studies also provide examples of compressive sensing. For example, one would like to infer the gene expression level of thousands of genes from only a limited number of observations [Can06]. 1.3.4 Error Correcting: Compressive sensing also has an impact on coding theory and practice, and can be seen as a dual to the error correction problem over the real number field. Error correction ... |

867 | Exact matrix completion via convex optimization - Candes, Recht - 2009
Citation Context ...al paper initiated a groundswell of research, and, subsequently, Candès and Recht showed that the nuclear norm heuristic could be used to recover low-rank matrices from a sparse collection of entries [CR09], Ames and Vavasis have used similar techniques to provide average case analysis of NP-hard combinatorial optimization problems [AV09], and Vandenberghe and Zhang have proposed novel algorithms for ide... |

763 | CoSaMP: Iterative signal recovery from incomplete and inaccurate samples - Needell, Tropp - 2009
Citation Context ...ursuit [MZ93, TG], Stagewise Orthogonal Matching Pursuit [DTDS07], Regularized Orthogonal Matching Pursuit [NV09], Iterative Thresholding [FR07, BD] and Compressive Sampling Matching Pursuit (CoSaMP) [NT08]. Most of these approaches calculate the support of the signal iteratively. With the support S of the signal calculated, the signal x is reconstructed from its measurements y = Ax as x = (A_S)† y, where ... |
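The final step shared by these greedy methods, reconstructing x from an estimated support S via the pseudoinverse x = (A_S)† y, can be sketched as follows (the dimensions, seed, and support are illustrative, not taken from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 20
A = rng.standard_normal((m, n))

# A 4-sparse signal and its noiseless measurements y = A x.
S = np.array([3, 11, 27, 42])            # true support (illustrative)
x = np.zeros(n)
x[S] = rng.standard_normal(4)
y = A @ x

# With the support known, x is recovered by least squares on the
# submatrix A_S: x_S = (A_S)^+ y, and zeros elsewhere.
x_hat = np.zeros(n)
x_hat[S] = np.linalg.pinv(A[:, S]) @ y

print(np.allclose(x_hat, x))             # exact recovery in the noiseless case
```

Because m ≥ |S| and a Gaussian submatrix A_S has full column rank almost surely, the least-squares step is exact here; the greedy algorithms above differ mainly in how they estimate S.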

625 | A simple proof of the Restricted Isometry Property for random matrices - Baraniuk, Davenport, et al. |

566 | An Introduction to Differentiable Manifolds and Riemannian Geometry - Boothby - 1975
Citation Context ...random (n−m)-dimensional subspace from the Grassmann manifold Gr(n−m)(n). Here the Grassmann manifold Gr(n−m)(n) is the set of (n−m)-dimensional subspaces in the n-dimensional Euclidean space R^n [Boo86]. For any such A and ideally sparse signals, the sharp bounds of [Don06c], for example, apply. However, we shall see that the neighborly polytope condition for ideally sparse signals does not apply to... |

566 | For most large underdetermined systems of linear equations, the minimal ℓ1-norm solution is also the sparsest solution - Donoho - 2006
Citation Context ...asis Pursuit” [Che95, CDS98]. Interestingly, the area of compressive sensing is closely connected to the related areas of coding [CT05], high-dimensional geometry [DT05a], sparse approximation theory [Don06a], data streaming algorithms [CM06, GSTV05] and random sampling [GGI+02]. Furthermore, promising applications of compressive sensing are emerging in compressive imaging, medical imaging, sensor networ... |

554 | A singular value thresholding algorithm for matrix completion. - Cai, Candes, et al. - 2010 |

535 | Sparse MRI: The application of compressed sensing for rapid MR imaging - Lustig, Donoho, et al. - 2007
Citation Context ...MRI is a very active topic in compressive sensing and has attracted a large number of researchers in this field. Many compressive sensing algorithms have been specifically designed for MRI applications [LDP07]. 1.3.2 Radar Signal Processing: A traditional radar system transmits some kind of pulse form, and then uses a matched filter to correlate the signal received with that pulse. The receiver then uses a pulse compress... |

522 | The geometry of graphs and some of its algorithmic applications - Linial, London, et al. - 1995
Citation Context ...ry [FHB01], and model reduction [BD98] can be formulated as rank minimization problems. Rank minimization also plays a key role in the study of embeddings of discrete metric spaces in Euclidean space [LLR95] and of learning structure in data and manifold learning [WS06]. In certain instances with special structure, the rank minimization problem can be solved via the singular value decomposition or can be... |

453 | Spectral analysis of large dimensional random matrices - Bai, Silverstein - 2006 |

427 | Probability in Banach Spaces - Ledoux, Talagrand - 1991
Citation Context ...rices. We provide a sufficient statistic that guarantees the heuristic succeeds, and then use comparison lemmas for Gaussian processes to bound the expected value of this heuristic (see, for example, [LT91]). We then show that this random variable is sharply concentrated around its expectation. 6.1.3 Notation and Preliminaries: For a rectangular matrix X ∈ R^(n1×n2), X∗ denotes the transpose of X. vec(X) deno... |

362 | Distributions of eigenvalues for some sets of random matrices - Marchenko, Pastur - 1967
Citation Context ... ‖x̄_K‖1 + (3C+1)√n ǫ/((C−1)σ_min). (3.9.4) If the elements in the measurement matrix A are i.i.d. as the unit real Gaussian random variable N(0,1), then following upon the work of Marchenko and Pastur [MP67], Geman [Gem80] and Silverstein [Sil85] proved that for m/n = δ, as n → ∞, σ_min/√n → 1−√δ almost surely. Then almost surely as n → ∞, the quantity (3C+1)√n ǫ/((C−1)σ_min) bounding ‖x∗−x‖1 is upper bounded by some consta... |
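The limit σ_min/√n → 1 − √δ quoted in this context is easy to check numerically for one moderately large Gaussian matrix (the matrix sizes and seed below are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 500, 2000                 # delta = m/n = 0.25 (illustrative sizes)
A = rng.standard_normal((m, n))  # i.i.d. N(0, 1) entries

# Smallest singular value of the m x n matrix, scaled by 1/sqrt(n).
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
delta = m / n
print(sigma_min / np.sqrt(n))    # close to 1 - sqrt(delta) = 0.5
```

For finite n the deviation from the limit is small but nonzero, consistent with the almost-sure convergence statement attributed to Geman and Silverstein.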

358 | Expander graphs and their applications - Hoory, Linial, et al. |

329 | Bayesian compressive sensing - Ji, Xue, et al. - 2008
Citation Context ...tempts to address) is a detection or estimation problem in some statistical setting. Some recent work along these lines can be found in [MDB] (which considers compressed detection and estimation) and [JXC08] (on Bayesian compressed sensing). In other cases, compressed sensing may be the inner loop of a larger estimation problem that feeds prior information on the sparse signal (e.g., its sparsity pattern... |

324 | Iterative hard thresholding for compressed sensing - Blumensath, Davies - 2009 |

286 | Matrix Rank Minimization with Applications - Fazel - 2001
Citation Context ...and such solution methods require at least exponential time in the dimensions of the matrix variables. Nuclear norm minimization is a recent heuristic for rank minimization introduced by Fazel in [Faz02]. When the matrix variable is symmetric and positive semidefinite, this heuristic is equivalent to the “trace heuristic” from control theory (see, e.g., [BD98, MP97]). Both the trace heuristic and the... |

278 | Compressed sensing and best k-term approximation - Cohen, Dahmen, et al. - 2009
Citation Context ...y k-sparse signal, or for brevity only approximately sparse signal. Possibly y can be further corrupted with measurement noise. The interested reader can find more on similar types of problems in [CDD08] and other references. This problem setup is more realistic of practical applications than the standard compressed sensing of ideally k-sparse signals (see, e.g., [TWD+06, Can06, CRT06] and the refer... |

274 | A rank minimization heuristic with application to minimum order system approximation - Fazel, Hindi, et al. - 2001
Citation Context ... problems arise in the context of inference with partial information [RS05] and Multi-task learning [AMP08]. In control theory, problems in controller design [EGG93, MP97], minimal realization theory [FHB01], and model reduction [BD98] can be formulated as rank minimization problems. Rank minimization also plays a key role in the study of embeddings of discrete metric spaces in Euclidean space [LLR95] an... |

273 | Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit - Donoho, Tsaig, et al. - 2012
Citation Context ...ge dimension processing, these approaches are not optimally fast. The other main approaches use greedy algorithms such as Orthogonal Matching Pursuit [MZ93, TG], Stagewise Orthogonal Matching Pursuit [DTDS07], Regularized Orthogonal Matching Pursuit [NV09], Iterative Thresholding [FR07, BD] and Compressive Sampling Matching Pursuit (CoSaMP) [NT08]. Most of these approaches calculate the support of the sign... |

258 | Convex multi-task feature learning - Argyriou, Evgeniou, et al. - 2008
Citation Context ... rank of matrices are pervasive in engineering applications. For example, in Machine Learning, these problems arise in the context of inference with partial information [RS05] and Multi-task learning [AMP08]. In control theory, problems in controller design [EGG93, MP97], minimal realization theory [FHB01], and model reduction [BD98] can be formulated as rank minimization problems. Rank minimization also... |

246 | Fast maximum margin matrix factorization for collaborative prediction - Rennie, Srebro - 2005
Citation Context ...ms involving constraints on the rank of matrices are pervasive in engineering applications. For example, in Machine Learning, these problems arise in the context of inference with partial information [RS05] and Multi-task learning [AMP08]. In control theory, problems in controller design [EGG93, MP97], minimal realization theory [FHB01], and model reduction [BD98] can be formulated as rank minimization ... |

237 | Signal reconstruction from noisy random projections - Haupt, Nowak - 2006 |

204 | Short proofs are narrow – resolution made simple - Ben-Sasson, Wigderson - 2001
Citation Context ...probabilistic arguments to prove their existence. Expander graphs arise in many applications: fast, distributed routing algorithms [PU89], LDPC codes [SS96], storage schemes [UW87], and proof systems [BSW99], to name a few. An explicit construction of constant regular left degree lossless (with β arbitrarily close to 1) expander graphs was recently given in [CRVW02]. An existence result, which holds for... |

196 | Fixed point and Bregman iterative methods for matrix rank minimization - Ma, Goldfarb, et al. - 2011 |

195 | Sparse nonnegative solution of underdetermined linear equations by linear programming - Donoho, Tanner |

188 | Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit - Needell, Vershynin - 2009
Citation Context ...optimally fast. The other main approaches use greedy algorithms such as Orthogonal Matching Pursuit [MZ93, TG], Stagewise Orthogonal Matching Pursuit [DTDS07], Regularized Orthogonal Matching Pursuit [NV09], Iterative Thresholding [FR07, BD] and Compressive Sampling Matching Pursuit (CoSaMP) [NT08]. Most of these approaches calculate the support of the signal iteratively. With the support S of the signal ... |

159 | Basis pursuit. - Chen, Donoho - 1994 |

155 | Combining geometry and combinatorics: A unified approach to sparse signal recovery - Berinde, Gilbert, et al. - 2008 |

150 | High-Resolution Radar via Compressed Sensing - Hermann, Strohmer - 2009
Citation Context ...ssible target scene as a matrix. If the number of targets is small enough, then the occupations of the grids will be sparse, and compressive sensing techniques can be used to recover the target scene [HS07]. 1.3.3 Biology: Compressive sensing can also be used for efficient and low-cost sensing in the area of biological applications. In fact, the applications of Group Testing, an idea closely related to c... |

145 | Enhancing sparsity by reweighted ℓ1 minimization - Candès, Wakin, et al. - 2008
Citation Context ...mization algorithms under the prior information. 1.5.4 An Analysis for the Iterative Reweighted ℓ1 Minimization Algorithm: Even though iterative reweighted ℓ1 minimization algorithms or related algorithms [CWB08] have been empirically observed to boost the recoverable sparsity thresholds for certain types of signals, no rigorous theoretical results have been established to prove this fact. In Chapter 5, we tr... |

142 | Deterministic constructions of compressed sensing matrices - DeVore - 2007 |

136 | Large Deviations Techniques and Applications, volume 38 of Applications of Mathematics - Dembo, Zeitouni - 1998
Citation Context ... ≤ j ≤ (i−1) are, the probability of (1−Ii) taking the value ‘1’ is at most C/D conditioned on Ij, 1 ≤ j ≤ (i−1). By the well-known Chernoff bound for the sum of independent Bernoulli random variables [DZ98], we know that if C/D < 1/4, then P(Z ≥ (1/4)kC log(n/k)) ≤ e^(−H(1/4 ‖ C/D) kC log(n/k)). (2.5.10) Here H(a‖b) is the Kullback-Leibler divergence between two Bernoulli random variables with parameters a and b, n... |
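The Chernoff bound in the Kullback-Leibler form used above, P(S_n ≥ an) ≤ e^(−n H(a‖b)) for a sum S_n of n i.i.d. Bernoulli(b) variables with a > b, can be checked against the exact binomial tail. The parameters below are illustrative, not the ones from the proof:

```python
import numpy as np
from scipy.stats import binom

def kl_bernoulli(a, b):
    """Kullback-Leibler divergence H(a || b) between Bernoulli(a) and Bernoulli(b)."""
    return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

n, b, a = 100, 0.3, 0.5            # illustrative parameters, a > b
bound = np.exp(-n * kl_bernoulli(a, b))
exact = binom.sf(int(a * n) - 1, n, b)   # P(S_n >= a*n) for S_n ~ Binomial(n, b)

print(exact <= bound)              # the Chernoff-KL bound always holds
```

The KL-divergence exponent makes the bound tight on the exponential scale, which is why it controls quantities like (2.5.10) even when kC log(n/k) grows.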

120 | Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes - Guruswami, Umans, et al. - 2009
Citation Context ... expander graphs of Definition 2.6.1, there exists an explicit construction for a class of expander graphs which are very close to the optimum expanders of Definition 2.6.1. Recently Guruswami et al. [GUV07], based on the Parvaresh-Vardy codes [PV05], proved the following theorem: Theorem 2.7.8 (Explicit Construction of Expander Graphs). For any constant α > 0, and any n, k, ǫ > 0, there exists a (k, 1−ǫ) exp... |

117 | High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension - Donoho - 2006
Citation Context ...ion from highly overcomplete dictionaries, and a related technique known as total variation minimization was proposed in image processing [GGI+02]. It then came as a breakthrough in [CT05, CT06] and [Don06c] that the Basis Pursuit method was shown to be able to recover sparse signals with a linear fraction of non-zero elements. Certainly this requires some conditions on the measurement matrix A stronger than... |

113 | Combinatorial algorithms for compressed sensing - Cormode, Muthukrishnan - 2006
Citation Context ...ber of non-zero entries in the unknown vector; however, this may also be too high a complexity. Stage-wise OMP [DTDS07] has recently been proposed, which solves the problem in O(n log n) computations. In [CM06] a certain sparse coefficient matrix has been used, along with group testing, that yields an algorithm with O(k log^2 n) complexity; however, this comes at the expense of more measurements: O(k log^2 n)... |

99 | Interior-point method for nuclear norm approximation with application to system identification - Liu, Vandenberghe |

95 | The widths of Euclidean balls - Garnaev, Gluskin - 1984
Citation Context ...ld select this null space so that the affine space’s intersection with the ℓ1 ball has minimal radius. The answer to this question was given by Kashin [Kas77] and later refined by Garnaev and Gluskin [GG]. Their existential results rely on randomly choosing the linear projections (or measurements) and are optimal in order of the number of measurements, which is within a multiplicative factor of what t... |

95 | Near-optimal sparse Fourier representations via sampling - Gilbert, Guha, et al. - 2002 |

86 | Correcting Errors Beyond the Guruswami-Sudan Radius in Polynomial Time - Parvaresh, Vardy - 2005
Citation Context ...exists an explicit construction for a class of expander graphs which are very close to the optimum expanders of Definition 2.6.1. Recently Guruswami et al. [GUV07], based on the Parvaresh-Vardy codes [PV05], proved the following theorem: Theorem 2.7.8 (Explicit Construction of Expander Graphs). For any constant α > 0, and any n, k, ǫ > 0, there exists a (k, 1−ǫ) expander graph with left degree d = O((log n ... |

81 | On sparse representation in pairs of bases - Feuer, Nemirovski - 2003 |

69 | Fast solution of ℓ1-norm minimization problems when the solution may be sparse - Donoho, Tsaig - 2006
Citation Context ...equires O(k log n log k) recovery complexity, yet only O(k log n) measurements. The Homotopy methods are able to recover the sparse solutions by reducing the computational complexity from O(n^3) to O(nk^2) [DT06a]. In [Fuc04], it was shown that by using the Vandermonde measurement matrix and linear programming, one can recover k nonzero elements using approximately 2k measurements when the nonzero elements are... |

66 | Convex polytopes, volume 221 of Graduate Texts in Mathematics - Grünbaum - 2003 |

65 | Neighborliness of randomly projected simplices in high dimensions - Donoho, Tanner
Citation Context ...as been known in the literature as “Basis Pursuit” [Che95, CDS98]. Interestingly, the area of compressive sensing is closely connected to the related areas of coding [CT05], high-dimensional geometry [DT05a], sparse approximation theory [Don06a], data streaming algorithms [CM06, GSTV05] and random sampling [GGI+02]. Furthermore, promising applications of compressive sensing are emerging in compressive i... |

63 | Invertibility of “large” submatrices with applications to the geometry of Banach spaces and harmonic analysis - Bourgain, Tzafriri - 1987 |

61 | On the rank minimization problem over a positive semidefinite linear matrix inequality. - Mesbahi, Papavassilopoulos - 1997 |

59 | Sparse recovery using sparse random matrices - Berinde, Indyk - 2008
Citation Context ...set of parity nodes, with regular left degree d such that for any S ⊂ A, if |S| ≤ l then the set of neighbors N(S) of S has size N(S) > (1−ǫ)d|S|. The following claim follows from the Chernoff bounds [BI08]. Claim 2.6.1. For any n/2 ≥ l ≥ 1 and ǫ > 0, there exists an (l, 1−ǫ) expander with left degree d = O(log(n/l)/ǫ) and right set size m = O(l log(n/l)/ǫ^2). Lemma 2.6.2 (RIP-1 property of th... |

54 | Thresholds for the recovery of sparse solutions via l1 minimization. - Donoho, Tanner - 2006 |

54 | Explicit constructions for compressed sensing of sparse signals - Indyk - 2008
Citation Context ...sparse signal using only O(k log(n/k)) measurements. 2.1.3 Recent Developments: After the publication of our works [XH07a, XH07b], an explicit construction for compressive sensing matrices was given in [Ind08] which used extractors. But the construction in [Ind08] only works for recovering sparse signals with sublinear sparsity. In a more recent very interesting work [BGI+08], it was shown that the expand... |

53 | Iterative thresholding algorithms - Fornasier, Rauhut - 2008 |

51 | Random projections of regular simplices - Affentranger, Schneider - 1992
Citation Context ...mplementary Grassmann angle always sum up to 1. There is apparently an inconsistency, in terms of the definition of which is the “Grassmann angle” and which is the “complementary Grassmann angle”, between [Grü68], [AS92] and [VS92] etc. But we will stick to the earliest definition in [Grü68] for the Grassmann angle: the measure of the subspaces that intersect trivially with a cone. Note the dimension of the hypersphere... |

50 | On a problem of Kadison and Singer - Bourgain, Tzafriri - 1991 |

50 | Signal recovery and the large sieve - Donoho, Logan - 1992
Citation Context ...er handling the observation noise was introduced in [SS86]. In the meanwhile, some rigorous theoretical results started appearing in the late 1980s, when Donoho and Stark [DS89] and Donoho and Logan [DL92] extended Logan’s 1965 result and quantified the ability to recover sparse reflectivity functions from bandlimited data. With the LASSO algorithm [Tib96] proposed as a method in statistics for sparse ... |

49 | Nuclear norm minimization for the planted clique and biclique problems - Ames, Vavasis - 2009
Citation Context ...d to recover low-rank matrices from a sparse collection of entries [CR09], Ames and Vavasis have used similar techniques to provide average case analysis of NP-hard combinatorial optimization problems [AV09], and Vandenberghe and Zhang have proposed novel algorithms for identifying linear systems [LV08]. Moreover, fast algorithms for solving large-scale instances of this heuristic have been developed by ... |

46 | Efficient and robust compressed sensing using optimized expander graphs - Jafarpour, Xu, et al.
Citation Context ... [fragment of a comparison table of recovery schemes, contrasting [BIR08] (RIP-1, geometric approach, O(k log(n/k)) measurements, compressible signals with noise) with [JXHC] (neighborhood property, combinatorial approach, O(k log(n/k)) measurements, almost-k-sparse signals with noise)] ... Chapter 3: Grassmann Angle Analytical Framework for Subspaces Balancedness. It is well known that compressed sensing pro... |

42 | Rank minimization under LMI constraints: A framework for output feedback problems. - Ghaoui, Gahinet - 1993 |

41 | Practical near-optimal sparse recovery - Berinde, Indyk, et al.
Citation Context ... a more recent very interesting work [BGI+08], it was shown that the expander graph-based measurement matrices can work with performance guarantees under the ℓ1 minimization methods. Indyk and Ruzic [IR08], and Berinde, Indyk and Ruzic [BIR08] proposed new compressed sensing algorithms based on the properties of the expander graphs. Those algorithms are similar to the CoSaMP algorithm [NT08], from the ... |

35 | A frame construction and a universal distortion bound for sparse representations - Akcakaya, Tarokh
Citation Context ...sing the Vandermonde measurement matrix and linear programming, one can recover k nonzero elements using approximately 2k measurements when the nonzero elements are restricted to positive numbers. In [AT07], motivated by Reed-Solomon codes rather than LDPC codes as in [SBB06b], a scheme of recovery complexity O(k^2) is proposed to recover any signal vector with k nonzero elements using the Vandermond... |

34 | Complexity of an optimum nonblocking switching network without reconnections - Bassalygo, Pinsker - 1974
Citation Context ...V. Here we assume that each right-hand side node also has a regular degree d, where cn = md. The existence of expander graphs has been known for quite some time since the work of Pinsker and Bassalygo [BP73], who used probabilistic arguments to prove their existence. Expander graphs arise in many applications: fast, distributed routing algorithms [PU89], LDPC codes [SS96], storage schemes [UW87], and pro... |

33 | Randomness conductors and constant degree expansions beyond the degree 2 barrier - Capalbo, Reingold, et al.
Citation Context ...ds. Bipartite expander graphs [CRVW02, SS96] are a certain class of graphs whose existence has been known for quite some time and whose recent explicit constructions are considered to be a major feat [CRVW02]. In some sense our approach is closest to that of [SBB06b], which is inspired by LDPCs, certain classes of which are related to expander graphs [SS96], but in our works we provide performance guarantee... |

31 | How neighborly can a centrally symmetric polytope be - Linial, Novik |

30 | The widths of certain finite dimensional sets and classes of smooth functions - Kashin
Citation Context ...l geometrical problem, which is about how we should select this null space so that the affine space’s intersection with the ℓ1 ball has minimal radius. The answer to this question was given by Kashin [Kas77] and later refined by Garnaev and Gluskin [GG]. Their existential results rely on randomly choosing the linear projections (or measurements) and are optimal in order of the number of measurements, whi... |

28 | A remark on compressed sensing - Kashin, Temlyakov |

27 | Bounded orthogonal systems and the Λ(p)-set problem - Bourgain - 1989 |

24 | Constructing disjoint paths on expander graphs - Peleg, Upfal - 1989
Citation Context ...e some time since the work of Pinsker and Bassalygo [BP73], who used probabilistic arguments to prove their existence. Expander graphs arise in many applications: fast, distributed routing algorithms [PU89], LDPC codes [SS96], storage schemes [UW87], and proof systems [BSW99], to name a few. An explicit construction of constant regular left degree lossless (with β arbitrarily close to 1) expander graph ... |

19 | Computational study and comparisons of LFT reducibility methods - Beck, D’Andrea - 1998
Citation Context ...t of inference with partial information [RS05] and Multi-task learning [AMP08]. In control theory, problems in controller design [EGG93, MP97], minimal realization theory [FHB01], and model reduction [BD98] can be formulated as rank minimization problems. Rank minimization also plays a key role in the study of embeddings of discrete metric spaces in Euclidean space [LLR95] and of learning structure in d... |

19 | Properties of High-Pass Signals - Logan - 1965
Citation Context ...from the quasi-norm ℓ0, the solution of ℓ1 often comes as the sparsest solution. As mentioned in [CT08], this sparsity-promoting feature of ℓ1 minimization was already observed in the 1960s by Logan [Log65], where he proved probably the first ℓ1 uncertainty principle. Suppose we have the observation over time y(t) = f(t) + n(t), t ∈ R, (1.2.4) where f(t) is bandlimited, namely f ∈ B(Ω) := {f : f̂(ω) = 0 f... |

19 | Explicit measurements with almost optimal thresholds for compressed sensing (ICASSP) - Parvaresh, Hassibi - 2008
Citation Context ...b], a scheme of recovery complexity O(k^2) is proposed to recover any signal vector with k nonzero elements using the Vandermonde measurement matrix. List decoding was proposed for similar schemes in [PH08]. With the exception of the method in [DeV07], the group testing methods in [CM06] and the Vandermonde measurement matrix-based methods in [Fuc04, AT07], all the results described above hold with “high probability... |

16 | Rank minimization via online learning - Meka, Jain, et al. - 2008
Citation Context ...et A : R^(n1×n2) → R^m be a linear map, and let b ∈ R^m. The main optimization problem under study is: minimize rank(X) subject to A(X) = b. (6.1.1) This problem is known to be NP-hard and is also hard to approximate [MJCD08]. As mentioned above, a popular heuristic for this problem replaces the rank function with the sum of the singular values of the decision variable. Let σi(X) denote the i-th largest singular value of ... |

15 | Grassmann angles of convex polytopes - Grünbaum - 1968
Citation Context ...nt x. (Namely, SP-Cone(x) is the conic hull of the point set (SP−x), and of course SP-Cone(x) has the origin of the coordinate system as its apex.) However, as noticed in the geometry of convex polytopes [Grü68][Grü03], the SP-Cone(x) are identical for any x lying in the relative interior of the face F. This means that the probability PK,− is equal to P′x, regardless of the fact that x is only a single point in... |

14 | Expander graph arguments for message passing algorithms - Burshtein, Miller - 2001
Citation Context ...t regular left degree lossless (with β arbitrarily close to 1) expander graph is recently given in [CRVW02]. An existence result, which holds for the setting we are interested in, is the following [BM01]: Theorem 2.3.2. Let 0 < β < 1 and the ratio r = m/n be given. Then for large enough n there exists a regular left degree c and a regular right degree d bipartite expander (αn, βc) for some 0 < α < 1 a... |

14 | Compressed network monitoring - Coates, Pointurier, et al. - 2007
Citation Context ...n encoding only the difference of consecutive samples, which results in more efficient coding rates [Cut]. In a similar fashion, this model is applicable to other problems like network monitoring (see [CPR] for an application of compressed sensing and nonlinear estimation in compressed network monitoring), D... (Quantitatively speaking, by sparsity we mean the number of nonzero elements of a vector x.) |

14 | Weighted ℓ1 minimization for sparse recovery with prior information - Khajehnejad, Xu, et al. - 2011
Citation Context ...tion about the support set of the signal x, which can be used in the analysis for the weighted ℓ1 minimization using the null-space Grassmann angle analysis approach for the weighted ℓ1 minimization algorithm [KXAH09a]. 5.5 The Grassmann Angle Approach for the Reweighted ℓ1 Minimization: In the previous work [KXAH09a], the authors have shown that by exploiting certain prior information about the original signal,... |

12 | Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization - Recht, Fazel, Parrilo - 2010
Citation Context ...e rank minimization problem consists of finding the minimum rank matrix in a convex constraint set. Though this problem is NP-hard even when the constraints are linear, a recent paper by Recht et al. [RFP] showed that most instances of the linearly constrained rank minimization problem could be solved in polynomial time as long as there were sufficiently many linearly independent constraints. Specifica... |

12 | Geometría integral en espacios de curvatura constante - Santaló - 1952
Citation Context ...ann manifold Gr(n−m)(n) intersecting non-trivially with the cone SP-Cone(x) formed by observing the skewed crosspolytope SP from the relative interior point x ∈ F. Building on the works by L. A. Santaló [San52] and P. McMullen [McM75] in high dimensional integral geometry and convex polytopes, the complementary Grassmann angle for the (k−1)-dimensional face F can be explicitly expressed as the sum of prod... |

11 | Almost Euclidean subspaces of ℓ1^N via expander codes - Guruswami, Lee, et al. - 2008
Citation Context ... section. In order to make the analysis for almost k-sparse signals simpler, we will use an optimized expander graph which is right-regular as well. The following lemma, which appears as Lemma 2.3 in [GLR08], gives us a way to construct right-regular expanders from any expander graph without disturbing its characteristics. Lemma 2.7.10 (right-regular expanders). From any left-regular (k, 1−ǫ) unbalanced ex... |

10 | Gitterpunktanzahl im Simplex und Wills’sche Vermutung - Hadwiger - 1979 |
Citation Context ...ng the internal angle exponent in [Don06c]. First, we notice that Con_{F⊥,G} is an (l − k)-dimensional cone. Also, all the vectors (x1, ..., xn) in the cone Con_{F⊥,G} take the form in (3.12.6). From [Had79], ∫_{Con_{F⊥,G}} e^{−‖x‖²} dx = β(F,G) V_{l−k−1}(S^{l−k−1}) ∫_0^∞ e^{−r²} r^{l−k−1} dr = β(F,G) · π^{(l−k)/2}, (3.12.7), where V_{l−k−1}(S^{l−k−1}) is the spherical volume of the (l − k − 1)-dimensional sphere S^{l−k−1}.... |
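The closed form in (3.12.7) follows from two standard facts about the Gaussian measure in polar coordinates; a short check, assuming the usual normalization of spherical volume:

```latex
% Spherical volume of S^{n-1} and the radial Gaussian integral:
%   V_{n-1}(S^{n-1}) = 2\pi^{n/2}/\Gamma(n/2),
%   \int_0^\infty e^{-r^2} r^{n-1}\,dr = \tfrac{1}{2}\Gamma(n/2).
% With n = l-k, their product is \pi^{(l-k)/2}, and the cone contributes
% the solid-angle fraction \beta(F,G):
\[
  \beta(F,G)\, V_{l-k-1}(S^{l-k-1}) \int_0^\infty e^{-r^2} r^{l-k-1}\,dr
  = \beta(F,G)\cdot \frac{2\pi^{(l-k)/2}}{\Gamma\!\big(\tfrac{l-k}{2}\big)}
    \cdot \frac{\Gamma\!\big(\tfrac{l-k}{2}\big)}{2}
  = \beta(F,G)\cdot \pi^{(l-k)/2}.
\]
```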

9 | Efficient and guaranteed rank minimization by atomic decomposition - Lee, Bresler |

8 | On cone-invariant linear matrix inequalities - Parrilo, Khatri - 2000 |
Citation Context ... to produce very low-rank solutions in practice, but, until very recently, conditions where the heuristic succeeded were only available in cases that could also be solved by elementary linear algebra [PK00]. As mentioned above, the first non-trivial sufficient conditions that guaranteed the success of the nuclear norm heuristic were provided in [RFP]. The initial results in [RFP] build on seminal developments in... |

6 | Random projections of regular polytopes - Böröczky, Henk - 1999 |
Citation Context ...n in [Don06c]) that this characterization of the matrix A is in fact a necessary and sufficient condition for (5.1.2) to produce the solution of (5.1.1). Furthermore, using the results of [VS92][AS92][KBH99], it can be shown that if the matrix A has i.i.d. zero-mean Gaussian entries, then with overwhelming probability it also constitutes a k-neighborly polytope. The precise relation between m and k in order fo... |

6 | Detection and estimation with compressive measurements, Rice ECE Department - Davenport, Wakin, et al. - 2006 |
Citation Context ...y cases the signal recovery problem (which compressed sensing attempts to address) is a detection or estimation problem in some statistical setting. Some recent work along these lines can be found in [MDB] (which considers compressed detection and estimation) and [JXC08] (on Bayesian compressed sensing). In other cases, compressed sensing may be the inner loop of a larger estimation problem that feeds ... |

4 | Some inequalities for Gaussian processes and applications - Gordon - 1985 |

3 | Near-Optimal Sparse Recovery in the ℓ1-norm - Indyk, Ruzic - 2008 |
Citation Context ...[BGI+08], it was shown that the expander graph-based measurement matrices can work with performance guarantees under the ℓ1 minimization methods. Indyk and Ruzic [IR08], and Berinde, Indyk and Ruzic [BIR08] proposed new compressed sensing algorithms based on the properties of the expander graphs. Those algorithms are similar to the CoSaMP algorithm [NT08], from the orthogonal matching framework, and are... |

3 | Differential Quantization of Communication Signals, U.S. Patent 2,605,361, filed June 29, 1950, issued July 29, 1952 - Cutler - 1952 |
Citation Context ... captured in the above nonuniform sparse model. DPCM encoders are examples of systems that are based on encoding only the difference of consecutive samples, which results in more efficient coding rates [Cut]. In a similar fashion, this model is applicable to other problems like network monitoring (see [CPR] for an application). 1 Quantitatively speaking, by sparsity we mean the number of nonzero elements o... |
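The idea behind DPCM-style difference encoding, that slowly varying signals have sparse first differences, can be sketched in a few lines (a toy illustration of the principle, not Cutler's actual quantizer; `dpcm_encode`/`dpcm_decode` are names introduced here):

```python
import numpy as np

def dpcm_encode(x):
    """Encode a signal as its first sample plus consecutive differences."""
    return x[0], np.diff(x)

def dpcm_decode(first, diffs):
    """Invert the difference encoding with a cumulative sum."""
    return np.concatenate(([first], first + np.cumsum(diffs)))

# A piecewise-constant signal: dense itself, but sparse in differences.
x = np.array([5.0, 5.0, 5.0, 9.0, 9.0, 9.0, 2.0, 2.0])
first, d = dpcm_encode(x)
print(int(np.count_nonzero(x)), int(np.count_nonzero(d)))  # -> 8 2
assert np.allclose(dpcm_decode(first, d), x)
```

The difference sequence has only 2 nonzeros where the raw signal has 8, which is exactly the nonuniform sparsity the text's model captures.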

3 | Sublinear, Small-space Approximation of Compressible Signals and Uniform Algorithmic Embeddings (Preprint) - Gilbert, Strauss, et al. - 2005 |
Citation Context ...ever, this comes at the expense of more measurements: O(k log² n) measurements, as opposed to the O(k log n) measurements required of the aforementioned methods. Chaining pursuit was introduced in [GSTV05]; it has complexity O(k log² n log² k) and also requires O(k log² n) measurements. From the number of measurements needed, we can see that both the group testing methods [CM06] and the chaining pur... |

2 | Gaussian processes and almost spherical sections of convex bodies - Gordon - 1988 |

1 | Complements of subspaces of l_p^n, p ≥ 1, which are uniquely determined - Bourgain, Tzafriri - 1987 |

1 | Near-optimal signal recovery from random projections: Universal encoding strategies - Candès, Tao - 2006 |

1 | Reflections on compressed sensing - Candès, Tao |
Citation Context ...he signals, the ℓ1 minimization will succeed [CT05, Don06c]. That is, even though the ℓ1-norm is different from the quasi-norm ℓ0, the solution of ℓ1 minimization often comes out as the sparsest solution. As mentioned in [CT08], this sparsity-promoting feature of ℓ1 minimization was already observed in the 1960s by Logan [Log65], who proved probably the first ℓ1 uncertainty principle. Suppose we have the observation o... |
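The ℓ1 minimization referred to throughout, basis pursuit (min ‖x‖1 subject to Ax = b), can be posed as a linear program via the standard splitting x = u − v with u, v ≥ 0; a minimal sketch with scipy's LP solver, where the instance sizes n, m, k are illustrative assumptions chosen so that recovery succeeds with high probability:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 40, 25, 3                      # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# Basis pursuit as an LP: min 1^T(u + v)  s.t.  A(u - v) = b,  u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print(res.success, np.max(np.abs(x_hat - x_true)) < 1e-4)
```

This is the phenomenon the snippet describes: although the objective is the ℓ1-norm rather than ℓ0, the LP returns the sparsest solution on such generic Gaussian instances.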

1 | Empirical Bayes estimation of a sparse vector of gene expression - Sabatti - 2005 |

1 | On sparse representations in arbitrary redundant bases - Fuchs - 2004 |
Citation Context ...og n log k) recovery complexity, yet only O(k log n) measurements. The Homotopy methods are able to recover the sparse solutions by reducing the computational complexity from O(n³) to O(nk²) [DT06a]. In [Fuc04], it was shown that by using the Vandermonde measurement matrix and linear programming, one can recover k nonzero elements using approximately 2k measurements when the nonzero elements are restricted ... |

1 | Weighted ℓ1 minimization for sparse recovery with prior information - Khajehnejad, Xu, Avestimehr, Hassibi - 2009 |

1 | Compressed sensing meets bioinformatics: a new DNA microarray architecture - Milenkovic, Simunic-Rosing - 2007 |

1 | Geometric approach to error correcting codes and reconstruction of signals - Rudelson, Vershynin - 2005 |

1 | Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization - Recht, Xu, Hassibi - 2008 |
Citation Context ...ition for the solution of the nuclear norm heuristic to coincide with the minimum-rank solution in an affine space. This condition is akin to the one in compressed sensing [SXH08a], first reported in [RXH08b]. The condition characterizes a particular property of the null space of the linear map which defines the affine space. We then show that when the null space is sampled from the uniform distribution o... |