Results 1-10 of 20
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
, 2004
Cited by 1513 (20 self)
Abstract: Suppose we are given a vector f in R^N. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? Or more exactly, suppose we are interested in a class F of such objects (discrete digital signals, images, etc.); how many linear measurements do we need to recover objects from this class to within accuracy ε? This paper shows that if the objects of interest are sparse or compressible, in the sense that the reordered entries of a signal f ∈ F decay like a power law (or the coefficient sequence of f in a fixed basis decays like a power law), then it is possible to reconstruct f to within very high accuracy from a small number of random measurements. A typical result is as follows: we rearrange the entries of f (or its coefficients in a fixed basis) in decreasing order of magnitude |f|(1) ≥ |f|(2) ≥ … ≥ |f|(N), and define the weak-ℓp ball as the class F of those elements whose entries obey the power decay law |f|(n) ≤ C · n^(−1/p). We take measurements ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries.
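The recovery procedure this line of work relies on, ℓ1 minimization subject to the random measurement constraints, can be written as a linear program. The sketch below is illustrative only: the dimensions N, K and the sparsity level are assumptions chosen for the demo, not values from the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, s = 50, 25, 3  # ambient dimension, measurements, sparsity (illustrative)

# A sparse signal f and K random Gaussian measurements y_k = <f, X_k>
f = np.zeros(N)
f[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
X = rng.standard_normal((K, N))
y = X @ f

# l1 minimization: min ||g||_1 subject to X g = y,
# cast as an LP via the split g = u - v with u, v >= 0
c = np.ones(2 * N)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
f_hat = res.x[:N] - res.x[N:]

err = np.linalg.norm(f_hat - f)  # typically ~0 when K is large enough relative to s
```

With K well above the sparsity level, as here, the LP typically recovers f exactly; shrinking K toward s makes recovery fail, which is the trade-off the paper quantifies.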
Minimax rates of estimation for high-dimensional linear regression over ℓq-balls
, 2009
Cited by 97 (19 self)
Abstract: Consider the high-dimensional linear regression model y = Xβ* + w, where y ∈ R^n is an observation vector, X ∈ R^{n×d} is a design matrix with d possibly exceeding n, β* ∈ R^d is an unknown regression vector, and w is additive Gaussian noise. This paper studies the minimax rates of convergence for estimating β* in either ℓ2-loss or ℓ2-prediction loss, assuming that β* belongs to an ℓq-ball B_q(R_q) for some q ∈ [0, 1]. It is shown that under suitable regularity conditions on the design matrix, the minimax optimal rate in ℓ2-loss and ℓ2-prediction loss scales as R_q (log d / n)^{1−q/2}. The analysis in this paper reveals that conditions on the design matrix enter into the rates for ℓ2-error and ℓ2-prediction error in complementary ways in the upper and lower bounds. Our proofs of the lower bounds are information-theoretic in nature, based on Fano's inequality and results on the metric entropy of the ℓq-balls, whereas our proofs of the upper bounds are constructive, involving direct analysis of least squares over ℓq-balls. For the special case q = 0, corresponding to models with an exact sparsity constraint, our results show that although computationally efficient ℓ1-based methods can achieve the minimax rates up to constant factors, they require slightly stronger assumptions on the design matrix than optimal algorithms involving least squares over the ℓ0-ball. Index Terms: Compressed sensing, minimax techniques, regression analysis.
The Gelfand widths of ℓp-balls for 0 < p ≤ 1
 J. Complexity
Cited by 18 (9 self)
Abstract: We provide sharp lower and upper bounds for the Gelfand widths of ℓp-balls in the N-dimensional ℓ_q^N space for 0 < p ≤ 1 and p < q ≤ 2. Such estimates are highly relevant to the novel theory of compressive sensing, and our proofs rely on methods from this area.
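The two-sided estimate established in this line of work has, up to constants depending only on p and q, the following shape; the constants c_{p,q}, C_{p,q} below are placeholders, since the snippet does not state them:

```latex
c_{p,q}\,\min\Bigl\{1,\;\frac{\ln(N/m)+1}{m}\Bigr\}^{1/p-1/q}
\;\le\; d^m\bigl(B_p^N,\ell_q^N\bigr) \;\le\;
C_{p,q}\,\min\Bigl\{1,\;\frac{\ln(N/m)+1}{m}\Bigr\}^{1/p-1/q},
\qquad 0<p\le 1,\; p<q\le 2,
```

where d^m denotes the Gelfand width of order m and B_p^N the unit ℓp-ball in R^N.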
Encoding the ℓp Ball from Limited Measurements
Cited by 9 (0 self)
Abstract: We address the problem of encoding signals which are sparse, i.e. signals that are concentrated on a set of small support. Mathematically, such signals are modeled as elements in the ℓp ball for some p ≤ 1. We describe a strategy for encoding elements of the ℓp ball which is universal in that 1) the encoding procedure is completely generic and does not depend on p (the sparsity of the signal), and 2) it achieves near-optimal minimax performance simultaneously for all p < 1. What makes our coding procedure unique is that it requires only a limited number of non-adaptive measurements of the underlying sparse signal; we show that near-optimal performance can be obtained with a number of measurements that is roughly proportional to the number of bits used by the encoder. We end by briefly discussing these results in the context of image compression.
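A generic encode/decode pipeline of this flavor, quantize a few non-adaptive random measurements, then decode by ℓ1 minimization, can be sketched as follows. This is a simplified stand-in, not the paper's actual encoder/decoder: the sizes, quantizer step delta, and the equality-constrained decoder are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, K, s, delta = 50, 30, 3, 0.01  # sizes and quantizer step (illustrative)

f = np.zeros(N)
f[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
X = rng.standard_normal((K, N))

# Encoder: quantize each non-adaptive measurement to the grid delta * Z;
# the bit budget is roughly K times the bits per quantized value
y_q = delta * np.round(X @ f / delta)

# Decoder (simplified): l1 minimization consistent with the quantized values
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([X, -X]), b_eq=y_q, bounds=(0, None))
f_hat = res.x[:N] - res.x[N:]

err = np.linalg.norm(f_hat - f)  # scales with the quantizer step delta
```

Note that the encoder never looks at f beyond the K fixed inner products, which is what "non-adaptive" means in the abstract.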
Entropy numbers of general diagonal operators
 Rev. Mat. Complut
Cited by 3 (0 self)
Abstract: We determine the asymptotic behavior of the entropy numbers of diagonal operators under mild regularity and decay conditions on the generating sequence (σ_k). Our results extend the known estimates for polynomial and logarithmic diagonals (σ_k). Moreover, we also consider some exotic intermediate examples like σ_k = exp(−√(log k)).
Local privacy and minimax bounds: Sharp rates for probability estimation
 In NIPS
Cited by 3 (0 self)
Abstract: We provide a detailed study of the estimation of probability distributions, discrete and continuous, in a stringent setting in which data is kept private even from the statistician. We give sharp minimax rates of convergence for estimation in these locally private settings, exhibiting fundamental tradeoffs between privacy and convergence rate, as well as providing tools that allow movement along the privacy/statistical-efficiency continuum. One consequence of our results is that Warner's classical work on randomized response is an optimal way to perform survey sampling while maintaining privacy of the respondents.
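Warner's randomized-response mechanism mentioned at the end fits in a few lines: each respondent answers truthfully only with a known probability, so no individual answer is revealing, yet the population proportion remains estimable without bias. The true proportion pi and truth probability p below are illustrative assumptions.

```python
import random

random.seed(3)
n, pi, p = 200_000, 0.3, 0.75  # respondents, true proportion, truth probability

# Each respondent tells the truth with probability p and lies otherwise,
# so the statistician never learns any individual's true answer.
truths = [random.random() < pi for _ in range(n)]
answers = [t if random.random() < p else not t for t in truths]

# E[answer] = p * pi + (1 - p) * (1 - pi); invert to get an unbiased estimate
phat = sum(answers) / n
pi_hat = (phat - (1 - p)) / (2 * p - 1)
```

The privacy/efficiency tradeoff is visible in the (2p − 1) factor: as p approaches 1/2 (maximal privacy) the estimator's variance blows up.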
Nearly optimal signal recovery from random projections: Universal encoding strategies?
 IEEE TRANS. INFO. THEORY
, 2006
Cited by 1 (0 self)
Abstract: Suppose we are given a vector f in a class F, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n) ≤ R · n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, …, K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f♯, defined as the solution to the constraints …
WORKING PAPER SERIES: Adaptive Minimax Estimation over Sparse ℓq-Hulls
, 2012
Abstract: Given a dictionary of M_n initial estimates of the unknown true regression function, we aim to construct linearly aggregated estimators that target the best performance among all the linear combinations under a sparse ℓq-norm (0 ≤ q ≤ 1) constraint on the linear coefficients. Besides identifying the optimal rates of aggregation for these ℓq-aggregation problems, our multi-directional (or universal) aggregation strategies by model mixing or model selection achieve the optimal rates simultaneously over the full range of 0 ≤ q ≤ 1 for general M_n and upper bound t_n of the ℓq-norm. Both random and fixed designs, with known or unknown error variance, are handled, and the ℓq-aggregations examined in this work cover major types of aggregation problems previously studied in the literature. Consequences for minimax-rate adaptive regression under ℓq-constrained true coefficients (0 ≤ q ≤ 1) are also provided. Our results show that the minimax rate of ℓq-aggregation (0 ≤ q ≤ 1) is basically determined by an effective model size, a sparsity index that depends on q, t_n, M_n, and the sample size n in an easily interpretable way, based on a classical model selection theory that deals with a large number of models. In addition, in the fixed design case, the model …
Quantization and Compressive Sensing
, 2015
Abstract: Quantization is an essential step in digitizing signals and, therefore, an indispensable component of any modern acquisition system. This chapter explores the interaction of quantization and compressive sensing and examines practical quantization strategies for compressive acquisition systems. Specifically, we first provide a brief overview of quantization and examine fundamental performance bounds applicable to any quantization approach. Next, we consider several forms of scalar quantizers, namely uniform, non-uniform, and 1-bit. We provide performance bounds and fundamental analysis, as well as practical quantizer designs and reconstruction algorithms that account for quantization. Furthermore, we provide an overview of Sigma-Delta (ΣΔ) quantization in the compressed-sensing context, and also discuss implementation issues, recovery algorithms, and performance bounds. As we demonstrate, proper accounting for quantization and careful quantizer design have a significant impact on the performance of a compressive acquisition system.
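The simplest of the scalar quantizers surveyed here, the uniform quantizer, already illustrates the classical fine-quantization distortion bound: with step Δ the error is approximately uniform on [−Δ/2, Δ/2], giving mean-squared error ≈ Δ²/12. The step size and test distribution below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(4)
delta = 0.1  # quantizer step (illustrative)

# Mid-tread uniform scalar quantizer: round to the nearest multiple of delta
x = rng.uniform(-1, 1, 1_000_000)
xq = delta * np.round(x / delta)

# For fine quantization the error behaves as Uniform(-delta/2, delta/2),
# so the empirical MSE should sit close to delta**2 / 12
mse = np.mean((x - xq) ** 2)
```

The ΣΔ and 1-bit schemes discussed in the chapter trade this per-sample guarantee for noise shaping or extreme rate reduction, which is where the compressive-sensing interaction becomes nontrivial.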