The Dantzig selector: statistical estimation when p is much larger than n (2005)

by Emmanuel Candès, Terence Tao
Citations: 868 (14 self)

BibTeX

@MISC{Candes05thedantzig,
    author = {Emmanuel Candes and Terence Tao},
    title = {The Dantzig selector: statistical estimation when p is much larger than n},
    year = {2005}
}

Abstract

In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ ℝ^p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ p, and the z_i's are i.i.d. N(0, σ²). Is it possible to estimate x reliably based on the noisy data y? To estimate x, we introduce a new estimator, which we call the Dantzig selector, defined as the solution to the ℓ1-regularization problem

\[
\min_{\tilde{x} \in \mathbb{R}^p} \|\tilde{x}\|_{\ell_1} \quad \text{subject to} \quad \|A^T r\|_{\ell_\infty} \le (1 + t^{-1}) \sqrt{2 \log p} \cdot \sigma,
\]

where r = y − Ax̃ is the residual vector and t is a positive scalar. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector x is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability

\[
\|\hat{x} - x\|_{\ell_2}^2 \le C^2 \cdot 2 \log p \cdot \Big( \sigma^2 + \sum_i \min(x_i^2, \sigma^2) \Big).
\]

Our results are nonasymptotic and we give values for the constant C. In short, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle supplying perfect information about which coordinates are nonzero and which are above the noise level. In multivariate regression, and from a model selection viewpoint, our result says that it is possible to nearly select the best subset of variables by solving a very simple convex program, which in fact can easily be recast as a convenient linear program (LP).
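The LP recast mentioned at the end of the abstract can be made concrete. Below is a minimal sketch (not the authors' own implementation): introduce bound variables u with |x̃_i| ≤ u_i, so that minimizing Σ u_i under the residual-correlation constraints is a linear program over the stacked variable (x̃, u). The function name dantzig_selector, the solver choice (scipy.optimize.linprog with the HiGHS method), the default t = 1, and the toy data at the end are all illustrative assumptions; the threshold λ = (1 + t⁻¹)√(2 log p)·σ follows the abstract.

import numpy as np
from scipy.optimize import linprog

def dantzig_selector(A, y, sigma, t=1.0):
    """Dantzig selector via its standard LP recast (illustrative sketch).

    Solves  min ||x~||_1  s.t.  ||A^T (y - A x~)||_inf <= lam,  with
    lam = (1 + 1/t) * sqrt(2 log p) * sigma as in the abstract, by
    writing |x~_i| <= u_i and minimizing sum(u) over z = [x~; u].
    """
    n, p = A.shape
    lam = (1.0 + 1.0 / t) * np.sqrt(2.0 * np.log(p)) * sigma
    G, b = A.T @ A, A.T @ y

    c = np.concatenate([np.zeros(p), np.ones(p)])   # objective: sum of u
    I, Z = np.eye(p), np.zeros((p, p))
    # Rows 1-2 encode |x~_i| <= u_i; rows 3-4 encode |b - G x~|_inf <= lam.
    A_ub = np.block([[ I, -I],
                     [-I, -I],
                     [-G,  Z],
                     [ G,  Z]])
    ones = np.ones(p)
    b_ub = np.concatenate([np.zeros(2 * p), lam * ones - b, lam * ones + b])
    bounds = [(None, None)] * p + [(0, None)] * p   # x~ free, u >= 0

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Toy usage: sparse x, Gaussian design with roughly unit-normed columns.
rng = np.random.default_rng(0)
n, p, sigma = 72, 256, 1.0
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[:8] = 5.0 * rng.standard_normal(8)
y = A @ x_true + sigma * rng.standard_normal(n)
x_hat = dantzig_selector(A, y, sigma)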

Keyphrases

Dantzig selector, statistical estimation, residual vector, noise level, unit-normed columns, simple convex program, positive scalar, data matrix, noisy data, uniform uncertainty principle, ℓ1-regularization problem, perfect information, multivariate regression, important statistical applications, large probability, true parameter vector, ideal mean squared error, new estimator, convenient linear program, parameter vector, model selection viewpoint, logarithmic factor
