Results 1–10 of 88
Sparsity and Incoherence in Compressive Sampling
, 2006
Abstract

Cited by 237 (14 self)
We consider the problem of reconstructing a sparse signal x₀ ∈ ℝⁿ from a limited number of linear measurements. Given m randomly selected samples of Ux₀, where U is an orthonormal matrix, we show that ℓ₁ minimization recovers x₀ exactly when the number of measurements exceeds m ≥ Const · µ²(U) · S · log n, where S is the number of nonzero components in x₀, and µ(U) is the largest entry in U properly normalized: µ(U) = √n · max_{k,j} |U_{k,j}|. The smaller µ, the fewer samples needed. The result holds for "most" sparse signals x₀ supported on a fixed (but arbitrary) set T. Given T, if the signs of the nonzero entries of x₀ on T and the observed values of Ux₀ are drawn at random, the signal is recovered with overwhelming probability. Moreover, this is nearly optimal in the sense that any method succeeding with the same probability would require just about this many samples.
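The coherence µ(U) in the bound above is easy to compute. As a quick illustrative sketch (not from the paper), the snippet below evaluates µ for the two extreme cases: the normalized DFT matrix, whose entries all have magnitude 1/√n (µ = 1, the fewest samples needed), and the identity (µ = √n, the most):

```python
import numpy as np

def coherence(U):
    """mu(U) = sqrt(n) * max_{k,j} |U_{k,j}| for an n x n orthonormal U."""
    n = U.shape[0]
    return np.sqrt(n) * np.max(np.abs(U))

n = 64
# Normalized DFT matrix: every entry has magnitude 1/sqrt(n), so mu = 1 (best case).
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
# Identity: one entry per row equals 1, so mu = sqrt(n) (worst case).
I = np.eye(n)

print(coherence(F))  # -> 1.0 (up to floating point)
print(coherence(I))  # -> 8.0 for n = 64
```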
Sensing by Random Convolution
 IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
, 2007
Abstract

Cited by 114 (8 self)
Abstract. This paper outlines a new framework for compressive sensing: convolution with a random waveform followed by random time-domain subsampling. We show that sensing by random convolution is a universally efficient data acquisition strategy, in that an n-dimensional signal which is S-sparse in any fixed representation can be recovered from m ≳ S log n measurements. We discuss two imaging scenarios, radar and Fourier optics, where convolution with a random pulse allows us to seemingly super-resolve fine-scale features, recovering high-resolution signals from low-resolution measurements.

1. Introduction. The new field of compressive sensing (CS) has given us a fresh look at data acquisition, one of the fundamental tasks in signal processing. The message of this theory can be summarized succinctly [7, 8, 10, 15, 32]: the number of measurements we need to reconstruct a signal depends on its sparsity rather than its bandwidth. These measurements, however, are different from the samples that …
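The measurement operator described here, circular convolution with a random pulse followed by subsampling in time, can be applied in O(n log n) via the FFT. A minimal numpy sketch, with an i.i.d. Gaussian pulse standing in for the random waveform (the paper's precise ensemble may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, S = 256, 64, 5

# An S-sparse signal (sparse in the identity basis for simplicity).
x = np.zeros(n)
support = rng.choice(n, S, replace=False)
x[support] = rng.standard_normal(S)

# Random pulse; circular convolution computed in O(n log n) via the FFT.
h = rng.standard_normal(n)
conv = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))

# Random time-domain subsampling: keep m of the n convolved samples.
keep = rng.choice(n, m, replace=False)
y = conv[keep]  # the m compressive measurements

print(y.shape)  # -> (64,)
```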
Bolasso: model consistent lasso estimation through the bootstrap
 In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML)
, 2008
Abstract

Cited by 84 (15 self)
We consider the least-squares linear regression problem with regularization by the ℓ₁-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate of decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso on several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, compares favorably to other linear regression methods on synthetic data and on datasets from the UCI machine learning repository.
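The Bolasso procedure described here can be sketched in a few lines: solve the Lasso on bootstrap resamples of the data and intersect the supports. The sketch below uses a plain ISTA solver and illustrative parameters (λ, number of bootstraps, synthetic data), none of which come from the paper:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by iterative soft thresholding."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L      # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

def bolasso_support(X, y, lam, n_boot=16, tol=1e-6, seed=0):
    """Intersect Lasso supports over bootstrap resamples of (X, y)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    support = None
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)        # bootstrap resample with replacement
        w = lasso_ista(X[idx], y[idx], lam)
        s = set(np.flatnonzero(np.abs(w) > tol))
        support = s if support is None else support & s
    return sorted(support)

# Synthetic example: 3 true variables out of 10.
rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[[0, 1, 2]] = [3.0, -2.0, 1.5]
y = X @ w_true + 0.5 * rng.standard_normal(n)

print(bolasso_support(X, y, lam=20.0))  # the true support on this example
```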
Breaking the multicommodity flow barrier for O(√log n)-approximations to sparsest cut
 In FOCS
, 2009
Abstract

Cited by 31 (0 self)
This paper ties together the line of work on algorithms that find an O(√log n)-approximation to the sparsest cut and the line of work on algorithms that run in subquadratic time by using only single-commodity flows. We present an algorithm that simultaneously achieves both goals, finding an O(√(log n)/ε)-approximation using O(n^ε log^{O(1)} n) max-flows. The core of the algorithm is a stronger, algorithmic version of Arora et al.'s structure theorem, where we show that the matching-chaining argument at the heart of their proof can be viewed as an algorithm that finds good augmenting paths in certain geometric multicommodity flow networks. By using that specialized algorithm in place of a black-box solver, we are able to solve those instances much more efficiently. We also show that the cut-matching game framework cannot achieve an approximation better than Ω(log n / log log n) without rerouting flow.
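For concreteness, the quantity being approximated is the (uniform) sparsity of a cut, |E(S, S̄)| / (|S| · |S̄|). The brute-force sketch below computes it exactly on a toy graph by enumeration; this exponential-time check only illustrates the objective, it is not the paper's algorithm:

```python
from itertools import combinations

def cut_sparsity(edges, nodes, S):
    """Uniform sparsity |E(S, S~)| / (|S| * |S~|) of the cut (S, S~)."""
    S = set(S)
    crossing = sum(1 for u, v in edges if (u in S) != (v in S))
    return crossing / (len(S) * (len(nodes) - len(S)))

def sparsest_cut_bruteforce(edges, nodes):
    """Exact sparsest cut by enumerating every nontrivial cut."""
    nodes = list(nodes)
    pinned, rest = nodes[0], nodes[1:]   # pin one node so each cut is seen once
    best = (float("inf"), None)
    for k in range(len(nodes)):
        for extra in combinations(rest, k):
            S = {pinned, *extra}
            if len(S) == len(nodes):
                continue                  # skip the trivial cut (S = all nodes)
            phi = cut_sparsity(edges, nodes, S)
            if phi < best[0]:
                best = (phi, S)
    return best

# Two triangles joined by a single bridge edge: the bridge is the sparsest cut.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
phi, S = sparsest_cut_bruteforce(edges, range(6))
print(phi, sorted(S))  # -> 0.111... [0, 1, 2]  (1 crossing edge / (3 * 3))
```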
Accelerated appearance-only SLAM
 In IEEE International Conference on Robotics and Automation (ICRA)
, 2008
Abstract

Cited by 24 (7 self)
Abstract. This paper describes a probabilistic bail-out condition for multi-hypothesis testing based on Bennett's inequality. We investigate the use of the test for increasing the speed of an appearance-only SLAM system, where locations are recognised on the basis of their sensory appearance. The bail-out condition yields speed increases of between 25x and 50x on real data, with only slight degradation in accuracy. We demonstrate the system performing real-time loop closure detection on a mobile robot over multiple-kilometre paths in initially unknown outdoor environments.
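Bennett's inequality itself is straightforward to evaluate. Below is a hedged sketch of the kind of bail-out test described; the score model, thresholds, and function names are hypothetical, not the paper's:

```python
import math

def bennett_upper_bound(n, t, sigma2, b):
    """Bennett's inequality: P(sum of n i.i.d. mean-zero terms >= t) is at most
    this bound, where each term has variance sigma2 and is bounded by b."""
    v = n * sigma2                         # total variance
    u = b * t / v
    h = (1 + u) * math.log(1 + u) - u      # Bennett's h function
    return math.exp(-v * h / (b * b))

def should_bail_out(remaining, deficit, sigma2=1.0, b=1.0, delta=1e-6):
    """Hypothetical bail-out rule: stop evaluating a candidate hypothesis once
    the probability that its `remaining` unscored terms overcome the current
    `deficit` against the leader is below delta."""
    return bennett_upper_bound(remaining, deficit, sigma2, b) < delta

print(bennett_upper_bound(100, 30.0, 1.0, 1.0))  # small but nontrivial tail bound
print(should_bail_out(10, 50.0))                 # -> True (deficit is hopeless)
```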
Spectral norm of products of random and deterministic matrices
Abstract

Cited by 23 (6 self)
Abstract. We study the spectral norm of matrices M that can be factored as M = BA, where A is a random matrix with independent mean-zero entries and B is a fixed matrix. Under a (4 + ε)-th moment assumption on the entries of A, we show that the spectral norm of such an m × n matrix M is bounded by √m + √n, which is sharp. In other words, with regard to the spectral norm, products of random and deterministic matrices behave similarly to random matrices with independent entries. This result, along with previous work of M. Rudelson and the author, implies that the smallest singular value of a random m × n matrix with i.i.d. mean-zero entries and bounded (4 + ε)-th moment is bounded below by √m − √(n − 1) with high probability.
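The bound is easy to probe numerically. A small simulation (illustrative sizes, Gaussian entries for convenience) checks that ‖BA‖₂ stays on the order of √m + √n when ‖B‖₂ = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 400, 200

A = rng.standard_normal((m, n))   # random matrix, independent mean-zero entries
B = rng.standard_normal((m, m))
B /= np.linalg.norm(B, 2)         # fixed matrix, normalized so ||B||_2 = 1
M = B @ A

# The theorem says ||M||_2 = O(sqrt(m) + sqrt(n)); for Gaussian entries the
# ratio below should be of order 1.
print(np.linalg.norm(M, 2) / (np.sqrt(m) + np.sqrt(n)))
```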
Rigorous Verification, Validation, Uncertainty Quantification and Certification through Concentration-of-Measure Inequalities
 Computer Methods in Applied Mechanics and Engineering
, 2008
Abstract

Cited by 16 (6 self)
We apply concentration-of-measure inequalities to the quantification of uncertainties in the performance of engineering systems. Specifically, we envision uncertainty quantification in the context of certification, i.e., as a tool for deciding whether a system is likely to perform safely and reliably within design specifications. We show that concentration-of-measure inequalities rigorously bound probabilities of failure and thus supply conservative certification criteria. In addition, they supply unambiguous quantitative definitions of terms such as margins, epistemic and aleatoric uncertainties, verification and validation measures, and confidence factors, as well as clear procedures for computing these quantities by means of concerted simulation and experimental campaigns. We also investigate numerically the tightness of concentration-of-measure inequalities with the aid of an imploding ring example. Our numerical tests establish the robustness and viability of concentration-of-measure inequalities as a basis for certification in that particular example.
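One of the simplest concentration-of-measure inequalities usable in this way is McDiarmid's bounded-differences inequality. The sketch below (hypothetical margin and sensitivity numbers, not taken from the paper) turns a design margin and per-parameter output sensitivities into a conservative failure-probability certificate:

```python
import math

def certification_bound(margin, diameters):
    """McDiarmid bound: if changing input parameter i can sway the performance
    measure by at most diameters[i], and the mean performance clears the design
    threshold by `margin`, then the failure probability is at most
    exp(-2 * margin^2 / sum(D_i^2))."""
    if margin <= 0:
        return 1.0  # no margin, no certificate
    return math.exp(-2.0 * margin ** 2 / sum(d * d for d in diameters))

# Hypothetical numbers: mean performance exceeds the threshold by 3 units, and
# each of 4 uncertain input parameters can sway the output by at most 1 unit.
print(certification_bound(3.0, [1.0, 1.0, 1.0, 1.0]))  # -> exp(-4.5) ~ 0.0111
```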