Results 1–8 of 8
New analysis of manifold embeddings and signal recovery from compressive measurements. arXiv:1306.4748
Abstract

Cited by 13 (1 self)
Compressive Sensing (CS) exploits the surprising fact that the information contained in a sparse signal can be preserved in a small number of compressive, often random linear measurements of that signal. Strong theoretical guarantees have been established concerning the embedding of a sparse signal family under a random measurement operator and on the accuracy to which sparse signals can be recovered from noisy compressive measurements. In this paper, we address similar questions in the context of a different modeling framework. Instead of sparse models, we focus on the broad class of manifold models, which can arise in both parametric and nonparametric signal families. Using tools from the theory of empirical processes, we improve upon previous results concerning the embedding of low-dimensional manifolds under random measurement operators. We also establish both deterministic and probabilistic instance-optimal bounds in ℓ2 for manifold-based signal recovery and parameter estimation from noisy compressive measurements. In line with analogous results for sparsity-based CS, we conclude that much stronger bounds are possible in the probabilistic setting. Our work supports the growing evidence that manifold-based models can be used with high accuracy in compressive signal processing.
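The embedding property this abstract refers to — that a random linear measurement operator approximately preserves signal norms and pairwise distances — can be illustrated with a minimal sketch. The dimensions and the Gaussian operator below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 1000, 200, 10  # ambient dimension, measurements, sparsity

# Random Gaussian measurement operator, scaled so E||Phi x||^2 = ||x||^2
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# A k-sparse signal with random support and amplitudes
x = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x[idx] = rng.standard_normal(k)

y = Phi @ x  # m compressive measurements, m << n

# The norm of the compressed signal concentrates around the original norm
ratio = np.linalg.norm(y) / np.linalg.norm(x)
print(ratio)
```

The same concentration argument, applied uniformly over a sparse family or a low-dimensional manifold rather than a single signal, is what the stronger embedding results in the paper establish.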
Compressive acquisition of linear dynamical systems, 2013
Abstract

Cited by 11 (5 self)
Compressive sensing (CS) enables the acquisition and recovery of sparse signals and images at sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements and then reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the time-varying parameters at each instant and accumulates measurements over time to estimate the time-invariant parameters. This enables us to lower the compressive measurement rate considerably. We validate our approach and demonstrate its effectiveness with a range of experiments involving video recovery and scene classification.
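The measurement-budget argument in this abstract can be sketched numerically: frames generated by a low-dimensional LDS are observed through only a few measurements per instant, and the static observation matrix is constrained by measurements accumulated over time. All dimensions and the specific LDS below are hypothetical, chosen only to make the counting concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, T, m = 500, 5, 50, 30  # pixels/frame, state dim, frames, measurements/frame

# Hypothetical LDS: low-dim state z_t evolves linearly; frames are x_t = C z_t
A = 0.95 * np.linalg.qr(rng.standard_normal((d, d)))[0]  # stable state transition
C = rng.standard_normal((n, d))                          # static observation matrix

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # compressive measurement operator
z = rng.standard_normal(d)
meas = []
for _ in range(T):
    x = C @ z          # high-dimensional frame
    meas.append(Phi @ x)  # only m << n numbers acquired at this instant
    z = A @ z

# Per-instant measurements track the d-dimensional dynamic state, while the
# m*T accumulated measurements jointly constrain the n*d entries of C —
# far fewer samples than the n*T needed for full frame-by-frame acquisition.
print(m * T, n * T)
```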
Joint recovery algorithms using difference of innovations for distributed compressed sensing
Average Case Analysis of High-Dimensional Block-Sparse Recovery and Regression for Arbitrary Designs, 2015
Abstract
This paper studies conditions for high-dimensional inference when the set of observations is given by a linear combination of a small number of groups of columns of a design matrix, termed the "block-sparse" case. In this regard, it first specifies conditions on the design matrix under which most of its block submatrices are well conditioned. It then leverages this result for average-case analysis of high-dimensional block-sparse recovery and regression. In contrast to earlier works: (i) this paper provides conditions on arbitrary designs that can be explicitly computed in polynomial time, (ii) the provided conditions translate into near-optimal scaling of the number of observations with the number of active blocks of the design matrix, and (iii) the conditions suggest that the spectral norm, rather than the column/block coherences, of the design matrix fundamentally limits the performance of computational methods in high-dimensional settings.
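Block-sparse recovery of the kind analyzed here is commonly solved with a group-lasso estimator. A minimal proximal-gradient (block soft-thresholding) sketch follows; the design matrix, block sizes, and regularization weight are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)
n_blocks, block_size, m = 20, 5, 40
n = n_blocks * block_size

X = rng.standard_normal((m, n)) / np.sqrt(m)  # arbitrary (here Gaussian) design

# Ground truth: only 2 of the 20 column blocks are active
beta = np.zeros(n)
for b in rng.choice(n_blocks, 2, replace=False):
    beta[b * block_size:(b + 1) * block_size] = rng.standard_normal(block_size)
y = X @ beta

# Group lasso via proximal gradient descent with block soft-thresholding
lam = 0.05
step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / largest squared singular value
est = np.zeros(n)
for _ in range(500):
    g = est - step * (X.T @ (X @ est - y))  # gradient step on 0.5||y - X est||^2
    for b in range(n_blocks):
        s = slice(b * block_size, (b + 1) * block_size)
        nrm = np.linalg.norm(g[s])
        if nrm > 0:
            g[s] *= max(0.0, 1.0 - step * lam / nrm)  # shrink whole block at once
    est = g

rel = np.linalg.norm(est - beta) / np.linalg.norm(beta)
print(rel)  # relative error; small when block-sparse recovery succeeds
```

The block-level shrinkage is what exploits the group structure: entire inactive blocks are driven to zero jointly, which is the mechanism behind the improved observation scaling the paper analyzes.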
Unlocking Energy Neutrality in Energy Harvesting Wireless Sensor Networks: An Approach Based on Distributed Compressed Sensing
Operational Rate-Distortion Performance of Single-Source and Distributed Compressed Sensing
Abstract
We consider correlated and distributed sources without cooperation at the encoder. For these sources, we derive the best achievable performance, in the rate-distortion sense, of any distributed compressed sensing scheme under the constraint of high-rate quantization. Moreover, under this model we derive a closed-form expression for the rate gain achieved by taking into account the correlation of the sources at the receiver, and a closed-form expression for the average performance of the oracle receiver for independent and joint reconstruction. Finally, we show experimentally that exploiting the correlation between the sources performs close to optimal, and that the only penalty is due to the missing knowledge of the sparsity support, as in (non-distributed) compressed sensing. Even though the derivation is performed in the large-system regime, where signal and system parameters tend to infinity, numerical results show that the equations match simulations for parameter values of practical interest.
Index Terms—Compressed sensing, rate-distortion function, distributed source coding, Slepian–Wolf coding.
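The "oracle receiver" referenced in this abstract is a standard benchmark: a decoder that knows the sparsity support exactly and therefore only has to solve a small least-squares problem. A minimal sketch, with hypothetical dimensions and noise level, shows why it serves as a performance bound:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 64, 8  # ambient dim, measurements, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)

# k-sparse source and noisy compressive measurements
supp = rng.choice(n, k, replace=False)
x = np.zeros(n)
x[supp] = rng.standard_normal(k)
y = A @ x + 0.01 * rng.standard_normal(m)

# Oracle receiver: the support is known, so recovery reduces to
# least squares on the k active columns of A
x_hat = np.zeros(n)
x_hat[supp] = np.linalg.lstsq(A[:, supp], y, rcond=None)[0]

rel = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(rel)  # limited only by the measurement noise, not by support estimation
```

Any practical decoder must also identify the support, so its error can only match this benchmark up to the support-estimation penalty the abstract mentions.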