Results 1–10 of 11
Tight conditions for consistent variable selection in high dimensional nonparametric regression
ASYMPTOTIC EQUIVALENCE AND ADAPTIVE ESTIMATION FOR ROBUST NONPARAMETRIC REGRESSION
2009
The asymptotic equivalence theory developed in the literature so far applies only to bounded loss functions. This limits the potential applications of the theory, because many commonly used loss functions in statistical inference are unbounded. In this paper we develop asymptotic equivalence results for robust nonparametric regression with unbounded loss functions. The results imply that all Gaussian nonparametric regression procedures can be robustified in a unified way. A key step in the equivalence argument is to bin the data and then take the median of each bin. The asymptotic equivalence results have significant practical implications. To illustrate the general principles of the equivalence argument, we consider two important nonparametric inference problems: robust estimation of the regression function and estimation of a quadratic functional. In both cases easily implementable procedures are constructed and shown to enjoy a high degree of robustness and adaptivity simultaneously. Other problems, such as the construction of confidence sets and nonparametric hypothesis testing, can be handled in a similar fashion.
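The median-of-bins device this abstract describes is easy to sketch. A rough illustration in Python (not code from the paper; the bin count, the test function, and the helper name `bin_medians` are arbitrary choices for the demo):

```python
import numpy as np

def bin_medians(y, num_bins):
    """Split the responses into consecutive bins and take the median of each.

    Medians are robust to heavy-tailed noise, so the binned medians behave
    approximately like Gaussian observations of the regression function,
    which is the idea behind the robustification step described above.
    """
    y = np.asarray(y, dtype=float)
    bins = np.array_split(y, num_bins)            # consecutive, nearly equal bins
    return np.array([np.median(b) for b in bins])

# Demo: noisy observations of f(x) = sin(2*pi*x) with heavy-tailed t(2) noise
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1000)
y = np.sin(2 * np.pi * x) + rng.standard_t(df=2, size=x.size)
m = bin_medians(y, num_bins=50)   # 50 pseudo-observations, one per bin
```

Any Gaussian nonparametric procedure can then be run on the binned medians `m` instead of the raw data.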
Minimax and Adaptive Inference in Nonparametric Function Estimation
Since Stein’s 1956 seminal paper, shrinkage has played a fundamental role in both parametric and nonparametric inference. This article discusses minimaxity and adaptive minimaxity in nonparametric function estimation. Three interrelated problems are considered: function estimation under global integrated squared error, estimation under pointwise squared error, and nonparametric confidence intervals. Shrinkage is pivotal in the development of both the minimax theory and the adaptation theory. While the three problems are closely connected and the minimax theories bear some similarities, the adaptation theories are strikingly different. For example, in sharp contrast to adaptive point estimation, in many common settings there do not exist nonparametric confidence intervals that adapt to the unknown smoothness of the underlying function. A concise account of these theories is given, and the connections as well as the differences among the problems are discussed and illustrated through examples.
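The shrinkage idea traced back to Stein's 1956 paper can be illustrated with the classical James–Stein estimator. A minimal sketch for a Gaussian mean vector with identity covariance (illustrative only, not code from the article; the positive-part variant is used):

```python
import numpy as np

def james_stein(y):
    """Positive-part James-Stein estimate of theta from y ~ N(theta, I_p).

    Shrinks the observation vector toward the origin; for p >= 3 it
    dominates the maximum-likelihood estimate theta_hat = y in total
    squared error.
    """
    y = np.asarray(y, dtype=float)
    p = y.size
    if p < 3:
        return y.copy()                    # no uniform improvement below p = 3
    shrink = max(0.0, 1.0 - (p - 2) / np.dot(y, y))
    return shrink * y

# Demo: shrinkage helps most when the true mean is near the origin
rng = np.random.default_rng(1)
theta = np.zeros(50)
y = theta + rng.standard_normal(50)
risk_mle = np.sum((y - theta) ** 2)              # squared error of y itself
risk_js = np.sum((james_stein(y) - theta) ** 2)  # squared error after shrinkage
```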
Quadratic functional estimation in inverse problems
2009
We consider in this paper a Gaussian sequence model of observations Y_i, i ≥ 1, having mean (or signal) θ_i and noise level σ_i growing polynomially like i^γ, γ > 0. This model describes a large panel of inverse problems. We estimate the quadratic functional ∑_{i≥1} θ_i² of the unknown signal when the signal belongs to ellipsoids of both finite smoothness (polynomial weights i^α, α > 0) and infinite smoothness (exponential weights e^{βi^r}, β > 0, 0 < r ≤ 2). We propose a Pinsker-type projection estimator in each case and study its quadratic risk. When the signal is sufficiently smoother than the difficulty of the inverse problem (α > γ + 1/4, or in the case of exponential weights), we obtain the parametric rate and the associated efficiency constant. Moreover, we give upper bounds on the second-order term in the risk and conjecture that they are asymptotically sharp minimax. When the signal is finitely smooth, with α ≤ γ + 1/4, we compute nonparametric upper bounds on the risk and again conjecture that the constant is asymptotically sharp.
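A bare-bones projection-type estimator for the quadratic functional in this sequence model can be sketched as follows (the truncation level N, the noise scale, and the test signal are arbitrary demo choices; this is not the authors' exact Pinsker-type construction):

```python
import numpy as np

def quadratic_functional_estimate(y, sigma, N):
    """Projection estimate of sum_i theta_i**2 in Y_i = theta_i + sigma_i*xi_i.

    Keeps the first N coefficients and removes the noise bias: since
    E[Y_i**2] = theta_i**2 + sigma_i**2, summing Y_i**2 - sigma_i**2 over
    i <= N is unbiased for the truncated functional sum_{i<=N} theta_i**2.
    """
    y = np.asarray(y, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return float(np.sum(y[:N] ** 2 - sigma[:N] ** 2))

# Demo: theta_i = 1/i, noise level growing like i**gamma (inverse problem)
rng = np.random.default_rng(2)
gamma = 0.5
i = np.arange(1, 2001)
theta = 1.0 / i
sigma = 0.005 * i ** gamma
y = theta + sigma * rng.standard_normal(i.size)
q_hat = quadratic_functional_estimate(y, sigma, N=200)
q_true = float(np.sum(theta ** 2))
```

The choice of N trades the truncation bias (the tail of the θ_i² series) against the variance accumulated from the growing noise levels σ_i.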
Tradeoffs between global and local risks in nonparametric function estimation
NONQUADRATIC ESTIMATORS OF A QUADRATIC FUNCTIONAL
Estimation of a quadratic functional over parameter spaces that are not quadratically convex is considered. It is shown, in contrast to the theory for quadratically convex parameter spaces, that optimal quadratic rules are often rate suboptimal. In such cases, minimax rate-optimal procedures are constructed based on local thresholding. These nonquadratic procedures are sometimes fully efficient even when optimal quadratic rules have slow rates of convergence. Moreover, it is shown that when estimating a quadratic functional, nonquadratic procedures may exhibit elbow phenomena different from those of quadratic procedures.
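The thresholding idea can be sketched in a few lines: bias-correct each squared observation, but keep a term only when the observation clears a threshold, so sparse signals are not swamped by noise accumulated over many near-zero coordinates. A hypothetical illustration, not the paper's construction (the generic threshold √(2 log p) and all constants are demo choices):

```python
import numpy as np

def thresholded_quadratic(y, sigma, lam):
    """Nonquadratic estimate of sum_i theta_i**2 via term-by-term thresholding.

    Each bias-corrected term y_i**2 - sigma**2 is kept only when |y_i| > lam,
    unlike a quadratic rule, which would sum a fixed quadratic form of y.
    """
    y = np.asarray(y, dtype=float)
    keep = np.abs(y) > lam
    return float(np.sum((y ** 2 - sigma ** 2) * keep))

# Demo: a sparse signal (10 spikes of height 8 among 10000 coordinates)
rng = np.random.default_rng(4)
p = 10000
theta = np.zeros(p)
theta[:10] = 8.0
y = theta + rng.standard_normal(p)
lam = np.sqrt(2 * np.log(p))          # standard universal threshold, ~4.29
est = thresholded_quadratic(y, 1.0, lam)   # true value is 10 * 64 = 640
```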
TIGHT CONDITIONS FOR CONSISTENCY OF VARIABLE SELECTION IN THE CONTEXT OF HIGH DIMENSIONALITY (submitted; arXiv:math.ST/1106.4293v2)
2012
We address the issue of variable selection in the regression model with very high ambient dimension, i.e., when the number of variables is very large. The main focus is on the situation where the number of relevant variables, called the intrinsic dimension and denoted by d*, is much smaller than the ambient dimension d. Without assuming any parametric form of the underlying regression function, we obtain tight conditions making it possible to consistently estimate the set of relevant variables. These conditions relate the intrinsic dimension to the ambient dimension and to the sample size. The procedure that is provably consistent under these tight conditions is based on comparing quadratic functionals of the empirical Fourier coefficients with appropriately chosen threshold values. The asymptotic analysis reveals the presence of two quite different regimes. The first regime is when d* is fixed. In this case the situation in nonparametric regression is the same as in linear regression: consistent variable selection is possible if and only if log d is small compared to the sample size n. The picture is different in the second regime, d* → ∞ as n → ∞, where we prove that consistent variable selection in the nonparametric setup is possible only if d* + log log d is small compared to log n. We apply these results to derive minimax separation rates for the problem of variable selection.
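The core device, comparing quadratic functionals of empirical Fourier coefficients with thresholds, can be caricatured in a few lines. A crude sketch, emphatically not the paper's procedure (only first-order cosine coefficients are used, the threshold calibration is ad hoc, and the function name is made up):

```python
import numpy as np

def select_variables(X, y, num_freq=3, threshold=None):
    """Toy variable selection via empirical Fourier coefficients.

    For each coordinate j, estimate first-order cosine coefficients
    c_{j,k} = E[y * sqrt(2) * cos(pi * k * X_j)] and declare variable j
    relevant when the quadratic functional sum_k c_{j,k}**2 exceeds a
    threshold. Irrelevant variables yield coefficients of order 1/sqrt(n),
    so their quadratic functional stays below a well-chosen threshold.
    """
    n, d = X.shape
    if threshold is None:
        threshold = 10.0 * np.log(d) / n     # ad hoc calibration for the demo
    selected = []
    for j in range(d):
        coefs = [np.mean(y * np.sqrt(2) * np.cos(np.pi * k * X[:, j]))
                 for k in range(1, num_freq + 1)]
        if np.sum(np.square(coefs)) > threshold:
            selected.append(j)
    return selected

# Demo: d = 50 candidate variables, only coordinates 4 and 17 are relevant
rng = np.random.default_rng(3)
n, d = 2000, 50
X = rng.uniform(0, 1, size=(n, d))
y = (np.cos(np.pi * X[:, 4]) + 0.5 * np.cos(np.pi * X[:, 17])
     + 0.1 * rng.standard_normal(n))
relevant = select_variables(X, y)
```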