Results 11–20 of 41
Asymptotic Optimality Of Regular Sequence Designs
 Ann. Statist
, 1995
Abstract

Cited by 6 (2 self)
We study linear estimators for the weighted integral of a stochastic process. The process may only be observed on a finite sampling design. The error is defined in the mean square sense, and the process is assumed to satisfy Sacks-Ylvisaker regularity conditions of order r ∈ ℕ₀. We show that sampling at the quantiles of a particular density already yields asymptotically optimal estimators. Hereby we extend results by Sacks and Ylvisaker for regularity r = 0 or 1, and we confirm a conjecture by Eubank, Smith, and Smith.

1. Introduction. Let X(t), t ∈ [0, 1], be a centered stochastic process which is at least continuous in quadratic mean. For a known function ρ ∈ L²([0, 1]) we want to estimate the weighted integral

    Int_ρ(X) = ∫₀¹ X(t) · ρ(t) dt.

We consider linear estimators Iₙ which are based on n observations of X. Hence

    Iₙ(X) = Σᵢ₌₁ⁿ X(tᵢ) · aᵢ

with sampling points 0 ≤ t₁ < ··· < tₙ ≤ 1 and coefficients aᵢ ∈ ℝ. The error of Iₙ is de...
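The estimator form in this abstract can be sketched in a few lines. The snippet below is a minimal illustration only: the equidistant midpoint design and simple quadrature coefficients are illustrative assumptions, not the asymptotically optimal quantile-based design the paper constructs.

```python
import numpy as np

# Sketch of the linear estimator I_n(X) = sum_i X(t_i) * a_i for the
# weighted integral Int_rho(X) = int_0^1 X(t) rho(t) dt.
# The design (equidistant midpoints) and coefficients (midpoint rule)
# are simple illustrative choices, not the paper's optimal design.

def linear_integral_estimator(X, rho, n):
    """Estimate the weighted integral of X from n point observations."""
    t = (np.arange(n) + 0.5) / n   # sampling points 0 < t_1 < ... < t_n < 1
    a = rho(t) / n                 # coefficients a_i (midpoint-rule weights)
    return np.sum(X(t) * a)        # I_n(X) = sum_i X(t_i) * a_i

# Deterministic sanity check with "path" X(t) = t and weight rho(t) = 1,
# whose true weighted integral is 1/2.
est = linear_integral_estimator(lambda t: t, lambda t: np.ones_like(t), 1000)
```

For a genuine stochastic process one would pass a sampled path as `X`; the mean-square error analysis in the abstract concerns exactly how fast such estimators converge as n grows.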
On oracle inequalities related to smoothing splines
 Math. Methods Statist
Abstract

Cited by 6 (0 self)
Smoothing splines provide very efficient algorithms for univariate regression estimation. When an unknown regression function is estimated with the help of smoothing splines, the principal problem is related to the statistical properties of data-driven methods for choosing the smoothing spline parameter. This paper focuses on unbiased risk estimation and generalized cross-validation, and for these techniques we derive so-called oracle inequalities controlling the performance of splines with a data-driven smoothing parameter. Key words: splines, mean square risk, oracle inequality, unbiased risk estimation, generalized cross-validation.
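As a hedged sketch of how generalized cross-validation selects a smoothing parameter: the snippet applies the standard GCV score, GCV(λ) = n·‖y − S_λ y‖² / tr(I − S_λ)², to a discrete Whittaker-type linear smoother. The smoother matrix and candidate grid are illustrative stand-ins for the full smoothing-spline setup of the paper.

```python
import numpy as np

# GCV sketch for a linear smoother y_hat = S_lam @ y.  Here S_lam is a
# Whittaker-type smoother (second-difference penalty), used as a simple
# stand-in for a smoothing spline; the GCV formula itself is standard.

def gcv_score(y, lam):
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)           # second-difference matrix
    S = np.linalg.inv(np.eye(n) + lam * D.T @ D)  # smoother matrix S_lam
    resid = y - S @ y
    return n * np.sum(resid**2) / np.trace(np.eye(n) - S)**2

def select_lambda(y, grid):
    """Pick the smoothing parameter minimizing GCV over a candidate grid."""
    return min(grid, key=lambda lam: gcv_score(y, lam))

# Illustrative use on noisy samples of a smooth signal.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50)
lam_hat = select_lambda(y, [1e-3, 1e-2, 1e-1, 1.0, 10.0])
```

The oracle inequalities discussed in the abstract quantify how close the risk of the spline with such a data-driven λ comes to the risk of the best λ in hindsight.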
Asymptotically Minimax Estimation of a Function With Jumps
, 1997
Abstract

Cited by 3 (0 self)
Asymptotically minimax nonparametric estimation of a regression function observed in white Gaussian noise over a bounded interval is considered, with respect to an L² loss function. The unknown function f is assumed to be m times differentiable except for an unknown, though finite, number of jumps, with piecewise mth derivative bounded in L² norm. An estimator is constructed attaining the same optimal risk bound, known as Pinsker's constant, as in the case of smooth functions (without jumps). Key words: Jump-point estimation; Nonparametric regression; Optimal constant; Tapered orthogonal series estimator.

1. Introduction. In the eighties, optimal rates of convergence in nonparametric regression estimation problems were thoroughly examined, following the book of Ibragimov and Hasminskii (1981), the groundbreaking papers due to Stone (1982) and Birgé (1983), and others. Later the interest shifted to finding not only the optimal rates but also the asymptotically optimal constants...
Smoothing parameter selection in two frameworks for penalized splines
 J. R. Statist. Soc. B
, 2013
Abstract

Cited by 2 (1 self)
There are two popular smoothing parameter selection methods for spline smoothing. First, smoothing parameters can be estimated by minimizing criteria that approximate the average mean squared error of the regression function estimator. Second, the maximum likelihood paradigm can be employed, under the assumption that the regression function is a realization of some stochastic process. In this article the asymptotic properties of both smoothing parameter estimators for penalized splines are studied and compared. A simulation study and a real data example illustrate the theoretical findings.
A Locally Adaptive Penalty for Estimation of Functions with Varying Roughness.
, 2008
Abstract

Cited by 2 (0 self)
We propose a new regularization method called LocoSpline for nonparametric function estimation. LocoSpline uses a penalty which is data-driven and locally adaptive. This allows for more flexible estimation of the function in regions of the domain where it has more curvature, without overfitting in regions that have little curvature. The methodology is also transferred to higher dimensions via the Smoothing Spline ANOVA framework. General conditions for the optimal MSE rate of convergence are given, and LocoSpline is shown to achieve this rate. In our simulation study, LocoSpline substantially outperforms the traditional smoothing spline and the locally adaptive kernel smoother.
Semiparametric Regression Pursuit
, 2010
Abstract

Cited by 1 (0 self)
Summary. The semiparametric partially linear model allows flexible modeling of covariate effects on the response variable in regression. It combines the flexibility of nonparametric regression with the parsimony of linear regression. The most important assumption in existing approaches to estimation in this model is that it is known a priori which covariates have a linear effect and which do not. However, in applied work this is rarely known in advance. We consider the problem of estimation in partially linear models without assuming a priori which covariates have linear effects. We propose a semiparametric model pursuit method for identifying the covariates with a linear effect. Our proposed method is a penalized regression approach using a group minimax concave penalty. Under suitable conditions we show that the proposed approach is model-pursuit consistent, meaning that it can correctly determine which covariates have a linear effect and which do not with high probability. The performance of the proposed method is evaluated using simulation studies, which support our theoretical results. A real data example is used to illustrate the application of the proposed method. Keywords: Group selection; Minimax concave penalty; Model-pursuit consistency; Penalized regression; Semiparametric models.
Computing the equivalent number of parameters of fixed-interval smoothers
Abstract
The problem of reconstructing an unknown signal from n noisy samples can be addressed by means of nonparametric estimation techniques such as Tikhonov regularization, Bayesian regression, and state-space fixed-interval smoothing. The practical use of these approaches calls for the tuning of a regularization parameter that controls the amount of smoothing they introduce. The leading tuning criteria, including Generalized Cross Validation and Maximum Likelihood, involve the repeated computation of the so-called equivalent number of parameters, a normalized measure of the flexibility of the nonparametric estimator. The paper develops new state-space formulas for the computation of the equivalent number of parameters in O(n) operations. The results are specialized to the case of uniform sampling, yielding closed-form expressions of the equivalent number of parameters for both linear splines and first-order deconvolution.
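For a linear smoother ŷ = S(γ)y, the equivalent number of parameters mentioned in this abstract is the trace of the smoother matrix. The sketch below shows only the naive dense-matrix definition for an assumed Whittaker-type smoother; the paper's contribution is computing this same trace in O(n) via state-space recursions, which this sketch does not attempt.

```python
import numpy as np

# Equivalent number of parameters (equivalent degrees of freedom) of a
# linear smoother y_hat = S(gamma) @ y, computed naively as trace(S).
# The Whittaker-type smoother with a second-difference penalty is an
# illustrative assumption; the paper derives O(n) state-space formulas
# for this quantity instead of the O(n^3) dense computation below.

def equivalent_number_of_parameters(n, gamma):
    D = np.diff(np.eye(n), n=2, axis=0)               # second-difference penalty
    S = np.linalg.inv(np.eye(n) + gamma * D.T @ D)    # smoother matrix S(gamma)
    return np.trace(S)                                # naive O(n^3) trace

# As gamma -> 0 the smoother approaches the identity (about n parameters);
# as gamma -> infinity it approaches a straight-line fit (about 2 parameters).
```

Tuning criteria such as Generalized Cross Validation evaluate this quantity repeatedly across candidate γ values, which is why an O(n) formula matters in practice.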