Results 1–10 of 95
Simultaneously Structured Models with Application to Sparse and Low-rank Matrices
, 2014
Abstract

Cited by 41 (5 self)
The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known, and allow for recovery using an order-wise optimal number of measurements (e.g., the ℓ1 norm for sparsity, the nuclear norm for matrix rank). Hence, it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with these norms, then we can do no better, order-wise, than an algorithm that exploits only one of the present structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e. not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a non-convex formulation of the problem can recover the model from very few measurements, which is on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the ℓ1 and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and non-convex recovery problems in this case. Our framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles. This allows us to give performance bounds for problems such as sparse phase retrieval and low-rank tensor completion.
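As a rough illustration of the two structure-promoting norms this abstract mentions, the sketch below (a toy in NumPy, not taken from the paper) implements their proximal operators: entrywise soft-thresholding for the ℓ1 norm and singular value thresholding for the nuclear norm. Algorithms that minimize a combination of these norms typically alternate steps built from exactly these two operators.

```python
import numpy as np

def soft_threshold(X, t):
    # Proximal operator of t * ||X||_1: shrink each entry toward zero.
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svd_threshold(X, t):
    # Proximal operator of t * ||X||_* : shrink the singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

# A rank-1, sparse 5x5 matrix: simultaneously structured.
u = np.zeros(5); u[0] = 1.0
M = np.outer(u, u)

# Each prox cleans up the corresponding structure of a noisy observation.
noisy = M + 0.05 * np.random.default_rng(0).standard_normal((5, 5))
print(np.count_nonzero(soft_threshold(noisy, 0.1)))   # typically few entries survive
print(np.linalg.matrix_rank(svd_threshold(noisy, 0.3)))  # typically low rank
```

The paper's negative result says that no combination of such convex penalties can match the measurement count of a non-convex formulation that exploits both structures jointly.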
Generalized sampling and infinite-dimensional compressed sensing
Abstract

Cited by 33 (20 self)
We introduce and analyze an abstract framework, and corresponding method, for compressed sensing in infinite dimensions. This extends the existing theory from signals in finite-dimensional vector spaces to the case of separable Hilbert spaces. We explain why such a new theory is necessary, and demonstrate that existing finite-dimensional techniques are ill-suited for solving a number of important problems. This work stems from recent developments in generalized sampling theorems for classical (Nyquist-rate) sampling that allow for reconstructions in arbitrary bases. The main conclusion of this paper is that one can extend these ideas to allow for significant subsampling of sparse or compressible signals. The key to these developments is the introduction of two new concepts in sampling theory, the stable sampling rate and the balancing property, which specify how to appropriately discretize the fundamentally infinite-dimensional reconstruction problem.
Universal and efficient compressed sensing by spread spectrum and application to realistic Fourier imaging techniques
, 2011
Abstract

Cited by 27 (12 self)
We advocate a compressed sensing strategy that consists of multiplying the signal of interest by a wide bandwidth modulation before projection onto randomly selected vectors of an orthonormal basis. Firstly, in a digital setting with random modulation, considering a whole class of sensing bases including the Fourier basis, we prove that the technique is universal in the sense that the required number of measurements for accurate recovery is optimal and independent of the sparsity basis. This universality stems from a drastic decrease of coherence between the sparsity and the sensing bases, which for a Fourier sensing basis relates to a spread of the original signal spectrum by the modulation (hence the name “spread spectrum”). The approach is also efficient as sensing matrices with fast matrix multiplication algorithms can be used, in particular in the case of Fourier measurements. Secondly, these results are confirmed by a numerical analysis of the phase transition of the ℓ1-minimization problem. Finally, we show that the spread spectrum technique remains effective in an analog setting with chirp modulation for application to realistic Fourier imaging. We illustrate these findings in the context of radio interferometry.
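The coherence-collapse effect this abstract describes can be checked numerically. The sketch below (a toy construction of mine, not the authors' code) takes the worst case where the sparsity basis equals the Fourier sensing basis and shows that a random ±1 pre-modulation drives the mutual coherence from its maximum down to near the optimal 1/√n scale.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
F = np.fft.fft(np.eye(n)) / np.sqrt(n)        # unitary DFT: the sensing basis
Psi = F                                        # worst case: sparsity basis = sensing basis
d = rng.choice([-1.0, 1.0], size=n)            # random +/-1 "spread spectrum" modulation
D = np.diag(d)

def coherence(A, B):
    # Mutual coherence: largest inner product between a column of A and a column of B.
    return np.max(np.abs(A.conj().T @ B))

print(coherence(F, Psi))       # 1.0 -- fully coherent, subsampling fails here
print(coherence(F, D @ Psi))   # much smaller, roughly 1/sqrt(n) up to log factors
```

Modulating the signal by D before Fourier sampling is what spreads the spectrum: every column of the modulated sparsity basis D·Psi has nearly flat correlation with every Fourier row.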
Robust Subspace Clustering
, 2013
Abstract

Cited by 22 (1 self)
Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [17] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation, and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
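The self-expressiveness idea underlying SSC can be sketched in a few lines. The toy below (my construction under simple assumptions, not the paper's robust algorithm) draws points from two random subspaces and writes one point as a sparse combination of the others via ℓ1-regularized regression solved by ISTA; the significant weights should land on points from the same subspace, which is what makes spectral clustering on the coefficient matrix work.

```python
import numpy as np

def ista_lasso(A, b, lam, steps=500):
    # Proximal gradient (ISTA) for min_c 0.5*||A c - b||^2 + lam*||c||_1.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(steps):
        g = c - A.T @ (A @ c - b) / L        # gradient step
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage step
    return c

rng = np.random.default_rng(0)
# 20 points from each of two random 2-dimensional subspaces of R^10.
B1, B2 = rng.standard_normal((10, 2)), rng.standard_normal((10, 2))
X = np.hstack([B1 @ rng.standard_normal((2, 20)), B2 @ rng.standard_normal((2, 20))])
X /= np.linalg.norm(X, axis=0)               # normalize columns

# Self-expressiveness: represent point 0 (subspace 1) using all other points.
A = np.delete(X, 0, axis=1)                  # columns 0..18: subspace 1, 19..38: subspace 2
c = ista_lasso(A, X[:, 0], lam=0.05)
print(np.nonzero(np.abs(c) > 1e-3)[0])       # support should sit in 0..18
```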
A Non-Uniform Sampler for Wideband Spectrally-Sparse Environments
, 2009
Abstract

Cited by 20 (7 self)
We present the first custom integrated circuit implementation of the compressed sensing based non-uniform sampler (NUS). By sampling signals non-uniformly, the average sample rate can be more than an order of magnitude lower than the Nyquist rate, provided that these signals have a relatively low information content as measured by the sparsity of their spectrum. The hardware design combines a wideband indium phosphide (InP) heterojunction bipolar transistor (HBT) sample-and-hold with a commercial off-the-shelf (COTS) analog-to-digital converter (ADC) to digitize an 800 MHz to 2 GHz band (having 100 MHz of non-contiguous spectral content) at an average sample rate of 236 Msps. Signal reconstruction is performed via a nonlinear compressed sensing algorithm, and an efficient GPU implementation is discussed. Measured bit-error-rate (BER) data for a GSM channel is presented, and comparisons to a conventional wideband 4.4 Gsps ADC are made.
Breaking the coherence barrier: asymptotic incoherence and asymptotic sparsity in compressed sensing
, 2013
Abstract

Cited by 13 (4 self)
In this paper we bridge the substantial gap between existing compressed sensing theory and its current use in real-world applications. We do so by introducing a new mathematical framework for overcoming the so-called coherence barrier.
Hypothesis testing in high-dimensional regression under the Gaussian random design model: asymptotic theory, arXiv
, 2013
Abstract

Cited by 13 (3 self)
We consider linear regression in the high-dimensional regime in which the number of observations n is smaller than the number of parameters p. A very successful approach in this setting uses ℓ1-penalized least squares (a.k.a. the Lasso) to search for a subset of s0 < n parameters that best explain the data, while setting the other parameters to zero. A considerable amount of work has been devoted to characterizing the estimation and model selection problems within this approach. In this paper we consider instead the fundamental, but far less understood, question of statistical significance. We study this problem under the random design model in which the rows of the design matrix are i.i.d. and drawn from a high-dimensional Gaussian distribution. This situation arises, for instance, in learning high-dimensional Gaussian graphical models. Leveraging an asymptotic distributional characterization of regularized least squares estimators, we develop a procedure for computing p-values and hence assessing statistical significance for hypothesis testing. We characterize the statistical power of this procedure, and evaluate it on synthetic and real data, comparing it with earlier proposals. Finally, we provide an upper bound on the minimax power of tests with a given significance level and show that our proposed procedure achieves this bound in the case of design matrices with i.i.d. Gaussian entries.
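The Lasso step this abstract builds on can be sketched as follows (a minimal NumPy toy with made-up dimensions; the debiasing and p-value machinery that is the paper's actual contribution is omitted): fit ℓ1-penalized least squares by proximal gradient descent in the n < p regime and recover the s0 active parameters.

```python
import numpy as np

def lasso_ista(X, y, lam, steps=1000):
    # ISTA for the Lasso: min_theta 0.5/n * ||y - X theta||^2 + lam * ||theta||_1.
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n            # Lipschitz constant of the gradient
    theta = np.zeros(p)
    for _ in range(steps):
        g = theta + X.T @ (y - X @ theta) / (n * L)            # gradient step
        theta = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return theta

rng = np.random.default_rng(0)
n, p, s0 = 50, 100, 3                    # fewer observations than parameters
X = rng.standard_normal((n, p))          # Gaussian random design, i.i.d. rows
beta = np.zeros(p); beta[:s0] = 2.0      # s0 < n truly active parameters
y = X @ beta + 0.5 * rng.standard_normal(n)

theta = lasso_ista(X, y, lam=0.2)
print(np.nonzero(np.abs(theta) > 0.1)[0])   # estimated active set
```

Note the Lasso estimate is biased by the shrinkage; the paper's procedure works from a debiased version of theta, whose coordinates are asymptotically Gaussian and therefore admit p-values.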