
## ℓ1-penalized quantile regression in high-dimensional sparse models. Available at arXiv:0904.2931 (2009)

Citations: 20 (5 self)

### Citations

902 | Regression quantiles
- Koenker, Bassett
- 1978
Citation Context: ...ue model, uniformly over U. 1. Introduction. Quantile regression is an important statistical method for analyzing the impact of regressors on the conditional distribution of a response variable (cf. [21, 23]). It captures the heterogeneous impact of regressors on different parts of the distribution [8], exhibits robustness to outliers [19], has excellent computational properties [28], and has wide applic... |

865 | The Dantzig selector: statistical estimation when p is much larger than n
- Candès, Tao
Citation Context: ...ch larger than the sample size n. However, the number of significant regressors for each conditional quantile of interest is at most s, which is smaller than the sample size, that is, s = o(n). HDSMs [7, 12, 26] have emerged to deal with many new applications arising in biometrics, signal processing, machine learning, econometrics, and other areas of data analysis where high-dimensional data sets have become... |

697 | Regression shrinkage and selection via the lasso
- Tibshirani
- 1996
Citation Context: ... whose population coefficients are zero, thereby possibly restoring consistency. A penalization that has proven quite useful in least squares settings is the ℓ1-penalty leading to the Lasso estimator [30]. 2.2. Penalized and post-penalized estimators. The ℓ1-penalized quantile regression estimator β̂(u) is a solution to the following optimization problem: min_{β∈R^p} Q̂_u(β) + (λ√(u(1−u))/n) Σ_{j=1}^p σ̂_j |β_j... |
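The objective quoted in this context can be written down directly. A minimal Python sketch of the penalized criterion, assuming Q̂_u is the average check loss as in the paper (function and variable names are my own):

```python
import numpy as np

def check_loss(r, u):
    """Koenker-Bassett check function rho_u(r) = r * (u - 1{r < 0})."""
    return r * (u - (r < 0))

def penalized_objective(beta, X, y, u, lam, sigma_hat):
    """l1-penalized quantile regression criterion from (2.4):
    Q_u(beta) + (lam * sqrt(u*(1-u)) / n) * sum_j sigma_j * |beta_j|."""
    n = len(y)
    fit = np.mean(check_loss(y - X @ beta, u))
    penalty = lam * np.sqrt(u * (1 - u)) / n * np.sum(sigma_hat * np.abs(beta))
    return fit + penalty
```

Note that the penalty is scaled by √(u(1−u)), matching the quantile-dependent overall penalty level discussed later in these contexts.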

647 | Weak convergence and empirical processes
- Vaart, Wellner
- 1996
Citation Context: ...j]. For any u ∈ U, j ∈ {1, …, p}, we have by Lemma 1.5 in [24] that P(|G_n[(u − 1{u_i ≤ u}) x_ij/σ̂_j]| ≥ K̃) ≤ 2 exp(−K̃²/2). Hence, by the symmetrization lemma for probabilities, Lemma 2.3.7 in [33], with K̃ ≥ 2√(log 2) we have P(… > K̃√n | X) ≤ 4 P(sup_{u∈U} max_{1≤j≤p} |G°_n[(u − 1{u_i ≤ u}) x_ij/σ̂_j]| > K̃/(4W_U) | X) (A.1) ≤ 4p max_{1≤j≤p} P(sup_{u∈U} |G°_n[(u − 1{u_i ≤ u}) x_ij/σ̂_j]| > K̃/(4W_U) | X)... |

543 | Introduction to Linear Optimization, Athena Scientific
- Bertsimas, Tsitsiklis
- 1997
Citation Context: ...lem (see Appendix C) with a dual version that is useful for analyzing the sparsity of the solution. When the solution is not unique, we define β̂(u) as any optimal basic feasible solution (see, e.g., [6]). Therefore, the problem (2.4) can be solved in polynomial time, avoiding the computational curse of dimensionality. Our goal is to derive the rate of convergence and model selection properties of th... |
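The linear-programming equivalence mentioned in this context can be made concrete. A sketch that solves problem (2.4) by splitting the coefficients and residuals into positive and negative parts, assuming SciPy's `linprog` with the `highs` solver (the function name and variable layout are my own):

```python
import numpy as np
from scipy.optimize import linprog

def l1_qr_lp(X, y, u, lam, sigma_hat):
    """Solve min_beta Q_u(beta) + (lam*sqrt(u(1-u))/n) * sum_j sigma_j*|beta_j|
    as an LP in the nonnegative variables z = (beta+, beta-, r+, r-),
    subject to X beta+ - X beta- + r+ - r- = y."""
    n, p = X.shape
    pen = lam * np.sqrt(u * (1 - u)) / n * sigma_hat     # penalty weight on |beta_j|
    # Check loss contributes u/n per unit of positive residual,
    # (1-u)/n per unit of negative residual.
    c = np.concatenate([pen, pen, np.full(n, u / n), np.full(n, (1 - u) / n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[:p] - z[p:2 * p]                            # beta = beta+ - beta-
```

With λ = 0 and an intercept-only design this reduces to the sample u-quantile, which is a convenient sanity check; the solver returns a basic feasible solution, matching the definition of β̂(u) in the context above.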

333 | Probability in Banach Spaces: Isoperimetry and Processes, Ergebnisse der Mathematik und ihrer Grenzgebiete (3)
- Ledoux, Talagrand
- 1991
Citation Context: ...ast 1 − 3γ − 3p^(−A²), …(t) ≤ t · C_E · (1 + c_0) A f^(1/2) κ_0 √(s log(p ∨ [L f^(1/2) κ_0/t]) / n). In order to prove the lemma, we use a combination of chaining arguments and exponential inequalities for contractions [24]. Our use of the contraction principle is inspired by its fundamentally innovative use in [32]; however, the use of the contraction principle alone is not sufficient in our case. Indeed, first we need... |

314 | Changes in the U.S. Wage Structure 1963-1987: Application of Quantile Regression, Econometrica 62 - Buchinsky - 1994 |

283 | Sure independence screening for ultrahigh dimensional feature space - Fan, Lv - 2008 |

262 | Asymptotics for Lasso-type estimators.
- Knight, Fu
- 2000
Citation Context: ... The overall penalty level λ√(u(1 − u)) depends on each quantile index u, while λ will depend on the set U of quantile indices of interest. The ℓ1-penalized quantile regression has been considered in [18] under small (fixed) p asymptotics. It is important to note that the penalized quantile regression problem (2.4) is equivalent to a linear programming problem (see Appendix C) with a dual version that... |

248 | Lasso-type recovery of sparse representations for high-dimensional data
- Meinshausen, Yu
Citation Context: ...ch larger than the sample size n. However, the number of significant regressors for each conditional quantile of interest is at most s, which is smaller than the sample size, that is, s = o(n). HDSMs [7, 12, 26] have emerged to deal with many new applications arising in biometrics, signal processing, machine learning, econometrics, and other areas of data analysis where high-dimensional data sets have become... |

188 | The sparsity and bias of the Lasso selection in high-dimensional linear regression - Zhang, Huang |

171 | Sparsity oracle inequalities for the Lasso - Bunea, Tsybakov, et al. |

142 | Aggregation for Gaussian regression - Bunea, Tsybakov, et al. |

107 | The Gaussian Hare and the Laplacian Tortoise: Computability of squared-error versus absolute-error estimators
- Portnoy, Koenker
- 1997
Citation Context: ...se variable (cf. [21, 23]). It captures the heterogeneous impact of regressors on different parts of the distribution [8], exhibits robustness to outliers [19], has excellent computational properties [28], and has wide applicability [19]. The asymptotic theory for quantile regression has been developed under both a fixed number of regressors and an increasing number of regressors. The asymptotic theor... |

106 | Théorie Analytique des Probabilités
- Laplace
Citation Context: ...ue model, uniformly over U. 1. Introduction. Quantile regression is an important statistical method for analyzing the impact of regressors on the conditional distribution of a response variable (cf. [21, 23]). It captures the heterogeneous impact of regressors on different parts of the distribution [8], exhibits robustness to outliers [19], has excellent computational properties [28], and has wide applic... |

96 | Simultaneous analysis of Lasso and Dantzig selector
- Bickel, Ritov, et al.
- 2009
Citation Context: ...ch larger than the sample size n. However, the number of significant regressors for each conditional quantile of interest is at most s, which is smaller than the sample size, that is, s = o(n). HDSMs [7, 12, 26] have emerged to deal with many new applications arising in biometrics, signal processing, machine learning, econometrics, and other areas of data analysis where high-dimensional data sets have become... |

60 | Regression rank scores and regression quantiles
- Gutenbrunner, Jurečková
- 1992
Citation Context: ... theory for quantile regression has been developed under both a fixed number of regressors and an increasing number of regressors. The asymptotic theory under a fixed number of regressors is given in [13, 15, 17, 21, 27] and others. The asymptotic theory under an increasing number of regressors is given in [16] and [1, 4], covering the case where the number of regressors p is negligible relative to the sample size n... |

55 | Limiting distributions for L1 regression estimators under general conditions
- Knight
- 1998
Citation Context: ... theory for quantile regression has been developed under both a fixed number of regressors and an increasing number of regressors. The asymptotic theory under a fixed number of regressors is given in [13, 15, 17, 21, 27] and others. The asymptotic theory under an increasing number of regressors is given in [16] and [1, 4], covering the case where the number of regressors p is negligible relative to the sample size n... |

48 | High-dimensional generalized linear models and the lasso
- Geer
- 2008
Citation Context: ...ata sets have become widely available. A number of papers have begun to investigate estimation of HDSMs, focusing primarily on penalized mean regression, with the ℓ1-norm acting as a penalty function [7, 12, 22, 26, 32, 34]. References [7, 12, 22, 26, 34] demonstrated the fundamental result that ℓ1-penalized least squares estimators achieve the rate √(s/n)·√(log p), which is very close to the oracle rate √(s/n) achievable wh... |

45 | Sparsity in penalized empirical risk minimization.
- Koltchinskii
- 2009
Citation Context: ...ata sets have become widely available. A number of papers have begun to investigate estimation of HDSMs, focusing primarily on penalized mean regression, with the ℓ1-norm acting as a penalty function [7, 12, 22, 26, 32, 34]. References [7, 12, 22, 26, 34] demonstrated the fundamental result that ℓ1-penalized least squares estimators achieve the rate √(s/n)·√(log p), which is very close to the oracle rate √(s/n) achievable wh... |

32 | Extremal quantile regression - Chernozhukov - 2005 |

30 | Behavior of Regression Quantiles in Non-Stationary, Dependent Cases - Portnoy - 1991 |

26 | Taking advantage of sparsity in multi-task learning. arXiv preprint arXiv:0903.1468 - Lounici, Pontil, et al. - 2009 |

24 | Sparse recovery under matrix uncertainty. - Rosenbaum, Tsybakov - 2010 |

13 | Asymptotic Statistics. Cambridge Univ
- Vaart
- 1998
Citation Context: ... − δ, provided that N ∨ n ∨ θ_0 ≥ 3; the constant c is the same as in Lemma 16. [p. 122, A. BELLONI AND V. CHERNOZHUKOV] PROOF OF LEMMA 16. The strategy of the proof is similar to the proof of Lemma 19.34 in [31], page 286, given for the expectation of a supremum of a process; here we instead bound tail probabilities and also compute all constants explicitly. Step 1. There exists a sequence of nested partitio... |

12 | On the computational complexity of MCMC-based estimators in large samples
- Belloni, Chernozhukov
Citation Context: ...f regressors. The asymptotic theory under a fixed number of regressors is given in [13, 15, 17, 21, 27] and others. The asymptotic theory under an increasing number of regressors is given in [16] and [1, 4], covering the case where the number of regressors p is negligible relative to the sample size n [i.e., p = o(n)]. Received December 2009; revised April 2010. Supported by NSF Grant SES-0752266... |

12 | Aggregation and sparsity via ℓ1-penalized least squares - Bunea, Tsybakov, et al. - 2006 |

10 | Additive Models for Quantile Regression: Model Selection and Confidence Bandaids
- Koenker
- 2011
Citation Context: ... 1 − α is the confidence level in the sense that, as in [7], our (nonasymptotic) bounds on the estimation error will contract at the optimal rate with this probability. We refer the reader to Koenker [20] for an implementation of our choice of penalty level and practical suggestions concerning the confidence level. In particular, both here and in Koenker [20], the confidence level 1 − α = 0.9 gave goo... |
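The penalty level referred to in this context is chosen by simulation in the paper, exploiting the fact that the relevant score statistic is pivotal conditional on the design. The sketch below simulates a simplified version of that statistic and takes its (1 − α)-quantile; the function name, the constant `c`, and the exact normalization are my assumptions rather than the paper's final rule, which should be consulted (together with Koenker [20]) for an actual implementation:

```python
import numpy as np

def pivotal_penalty_level(X, quantiles, alpha=0.1, n_sim=500, c=2.0, seed=0):
    """Simulated (1 - alpha)-quantile of a pivotal statistic of the form
    n * max_{u, j} |E_n[(u - 1{U_i <= u}) x_ij / sigma_j]| / sqrt(u (1 - u)),
    with U_i iid Uniform(0, 1) independent of the design X."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    sigma = np.sqrt(np.mean(X ** 2, axis=0))            # sigma_j = sqrt(E_n[x_ij^2])
    stats = np.empty(n_sim)
    for s in range(n_sim):
        U = rng.uniform(size=n)                         # pivotal: no response needed
        vals = []
        for u in quantiles:
            score = ((u - (U <= u))[:, None] * X) / sigma   # n-by-p score matrix
            vals.append(np.abs(score.mean(axis=0)).max() / np.sqrt(u * (1 - u)))
        stats[s] = n * max(vals)
    return c * np.quantile(stats, 1 - alpha)
```

The statistic depends only on the design and on uniform draws, which is why the resulting λ can be computed before seeing the response; the confidence level 1 − α = 0.9 mentioned above corresponds to `alpha=0.1`.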

7 | Conditional Quantile Processes under Increasing Dimension
- Belloni, Chernozhukov
- 2010
Citation Context: ...f regressors. The asymptotic theory under a fixed number of regressors is given in [13, 15, 17, 21, 27] and others. The asymptotic theory under an increasing number of regressors is given in [16] and [1, 4], covering the case where the number of regressors p is negligible relative to the sample size n [i.e., p = o(n)]... |

7 | Post-ℓ1-penalized estimators in high-dimensional linear regression models. arXiv [math.ST]
- Belloni, Chernozhukov
- 2009
Citation Context: ...ather exceptional case of perfect model selection, in which case the post-penalized estimator is simply the oracle. Building on the current work these results have been extended to mean regression in [5]. Our results on the sparsity of ℓ1-QR and model selection also contribute to the analogous results for mean regression [26]. Also, our rate results for ℓ1-QR are different from, and hence complementa... |

7 | On parameters of increasing dimensions
- He, Shao
- 2000
Citation Context: ... number of regressors. The asymptotic theory under a fixed number of regressors is given in [13, 15, 17, 21, 27] and others. The asymptotic theory under an increasing number of regressors is given in [16] and [1, 4], covering the case where the number of regressors p is negligible relative to the sample size n [i.e., p = o(n)]... |

6 | Quantile Regression. Cambridge Univ
- Koenker
- 2005
Citation Context: ...s on the conditional distribution of a response variable (cf. [21, 23]). It captures the heterogeneous impact of regressors on different parts of the distribution [8], exhibits robustness to outliers [19], has excellent computational properties [28], and has wide applicability [19]. The asymptotic theory for quantile regression has been developed under both a fixed number of regressors and an increasi... |