Results 1–10 of 11
Asymptotic equivalence of functional linear regression and a white noise inverse problem
Ann. Statist.
The Practice of Non Parametric Estimation by Solving Inverse Problems: The Example of Transformation Models
"... This paper considers a semiparametric version of the transformation model ϕ(Y) = β ′ X + U under exogeneity or instrumental variables assumptions (E(UX) = 0 or E(Uinstruments) = 0). This model is used as an example to illustrate the practice of the estimation by solving linear functional equati ..."
Abstract

Cited by 5 (1 self)
 Add to MetaCart
This paper considers a semiparametric version of the transformation model ϕ(Y) = β′X + U under exogeneity or instrumental variables assumptions (E(UX) = 0 or E(U · instruments) = 0). This model is used as an example to illustrate the practice of estimation by solving linear functional equations. The paper focuses especially on the data-driven selection of the regularization parameter and of the bandwidths. Simulation experiments illustrate the relevance of this approach.
Adaptive nonparametric instrumental regression by model selection
arXiv:1003.3128v1, Université Catholique de Louvain, 2010
"... We consider the problem of estimating the structural function in nonparametric instrumental regression, where in the presence of an instrument W a response Y is modeled in dependence of an endogenous explanatory variable Z. The proposed estimator is based on dimension reduction and additional thres ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
We consider the problem of estimating the structural function in nonparametric instrumental regression, where in the presence of an instrument W a response Y is modeled in dependence of an endogenous explanatory variable Z. The proposed estimator is based on dimension reduction and additional thresholding. The minimax optimal rate of convergence of the estimator is derived assuming that the structural function belongs to ellipsoids which are in a certain sense linked to the conditional expectation operator of Z given W. We illustrate these results by considering classical smoothness assumptions. However, the proposed estimator requires an optimal choice of a dimension parameter depending on certain characteristics of the unknown structural function and of the conditional expectation operator of Z given W, which are not known in practice. The main issue addressed in our work is a fully adaptive choice of this dimension parameter using a model selection approach, under the restriction that the conditional expectation operator of Z given W is smoothing in a certain sense. In this situation we develop a penalized minimum contrast estimator with a randomized penalty and collection of models. We show that this data-driven estimator can attain the lower risk bound up to a constant over a wide range of smoothness classes for the structural function.
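The dimension-reduction-plus-thresholding idea summarized in this abstract can be caricatured in a few lines. The sketch below is an illustrative Galerkin-type estimator, not the authors' procedure: the cosine basis, the fixed dimension m, and the threshold tau are assumptions made for the example, whereas the paper selects the dimension adaptively by model selection.

```python
import numpy as np

def cosine_basis(u, k):
    """k-th element of the cosine basis on [0, 1]."""
    if k == 0:
        return np.ones_like(u)
    return np.sqrt(2.0) * np.cos(np.pi * k * u)

def npiv_threshold(Y, Z, W, m=5, tau=0.05):
    """Galerkin-type estimate of the first m basis coefficients of the
    structural function, with hard thresholding of small coefficients."""
    n = len(Y)
    Ez = np.column_stack([cosine_basis(Z, k) for k in range(m)])  # e_j(Z)
    Fw = np.column_stack([cosine_basis(W, k) for k in range(m)])  # f_l(W)
    T_hat = Fw.T @ Ez / n    # empirical analogue of E[e_j(Z) f_l(W)]
    g_hat = Fw.T @ Y / n     # empirical analogue of E[Y f_l(W)]
    phi_hat = np.linalg.solve(T_hat, g_hat)  # invert at fixed dimension m
    phi_hat[np.abs(phi_hat) < tau] = 0.0     # additional thresholding
    return phi_hat
```

In the degenerate exogenous case W = Z with φ(z) = √2 cos(πz), the recovered coefficient vector is approximately (0, 1, 0, 0, 0).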
Minimax adaptive tests for the Functional Linear model
"... Abstract:. We introduce two novel procedures to test the nullity of the slope function in the functional linear model with real output. The test statistics combine multiple testing ideas and random projections of the input data through functional Principal Component Analysis. Interestingly, the proc ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
(Show Context)
We introduce two novel procedures to test the nullity of the slope function in the functional linear model with real output. The test statistics combine multiple testing ideas with random projections of the input data obtained through functional Principal Component Analysis. Interestingly, the procedures are completely data-driven and require no prior knowledge of the smoothness of the slope or of the covariate functions. The levels and the powers against local alternatives are assessed in a non-asymptotic setting. This allows us to prove that these procedures are minimax adaptive (up to an unavoidable log log n multiplicative term) to the unknown regularity of the slope. As a side result, the minimax separation distances of the slope are derived for a large range of regularity classes. A numerical study illustrates these theoretical results.
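As a rough illustration of combining random projections of FPCA scores with multiple testing, one might proceed as below. Everything here (the number of projections, the Monte Carlo calibration, the Bonferroni combination) is an assumption for the sketch; the paper's procedures use a non-asymptotic calibration that adapts to the unknown regularity.

```python
import numpy as np

def projection_pvalues(scores, Y, n_proj=5, n_mc=500, seed=1):
    """Correlation tests of Y against random projections of PC scores,
    with p-values calibrated by Gaussian Monte Carlo under the null."""
    rng = np.random.default_rng(seed)
    n, k = scores.shape
    Yc = (Y - Y.mean()) / Y.std()
    pvals = []
    for _ in range(n_proj):
        u = rng.standard_normal(k)           # random direction in PC space
        proj = scores @ u
        proj = (proj - proj.mean()) / proj.std()
        t_obs = abs(proj @ Yc) / np.sqrt(n)
        # Null reference: replace Y by independent Gaussian noise.
        t_null = np.abs(proj @ rng.standard_normal((n_mc, n)).T) / np.sqrt(n)
        pvals.append((1 + np.sum(t_null >= t_obs)) / (1 + n_mc))
    return pvals

def reject_null_slope(X, Y, level=0.05, k=5, **kw):
    """Bonferroni combination of the projection tests."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                   # first k empirical PC scores
    pvals = projection_pvalues(scores, Y, **kw)
    return min(pvals) < level / len(pvals)   # True = reject nullity of slope
```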
On rate optimal local estimation in functional linear model
2009
"... We consider the problem of estimating for a given representer h the value ℓh(β) of a linear functional of the slope parameter β in functional linear regression, where scalar responses Y1,..., Yn are modeled in dependence of random functions X1,..., Xn. The proposed estimators of ℓh(β) are based on d ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
We consider the problem of estimating, for a given representer h, the value ℓh(β) of a linear functional of the slope parameter β in functional linear regression, where scalar responses Y1,..., Yn are modeled in dependence of random functions X1,..., Xn. The proposed estimators of ℓh(β) are based on dimension reduction and additional thresholding. The minimax optimal rate of convergence of the estimator is derived assuming that the slope parameter and the representer belong to ellipsoids which are in a certain sense linked to the covariance operator associated with the regressor. We illustrate these results by considering Sobolev ellipsoids and finitely or infinitely smoothing covariance operators.
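A plug-in version of the estimator described here (expand in the empirical eigenbasis, estimate the coefficients of β by ratios of empirical moments, and discard directions whose estimated eigenvalue is small) can be sketched as follows. The discretized setting and the threshold alpha are illustrative assumptions, standing in for the paper's dimension reduction and thresholding.

```python
import numpy as np

def linear_functional_estimate(X, Y, h, alpha=1e-2):
    """Plug-in estimate of <h, beta> for discretized curves.
    X: (n, p) curves on a grid; Y: (n,) responses; h: (p,) representer."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n                       # empirical covariance operator
    lam, vecs = np.linalg.eigh(cov)
    order = np.argsort(lam)[::-1]
    lam, vecs = lam[order], vecs[:, order]
    scores = Xc @ vecs                        # <X_i, e_j> for each eigenvector
    b_num = scores.T @ (Y - Y.mean()) / n     # estimates E[Y <X, e_j>]
    keep = lam > alpha                        # drop unstable directions
    b_hat = np.where(keep, b_num / np.where(keep, lam, 1.0), 0.0)
    h_coef = vecs.T @ h                       # representer in the eigenbasis
    return float(h_coef @ b_hat)
```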
Adaptive Global Testing for Functional Linear Models
2013
"... This paper studies global testing of the slope function in functional linear regression models. A major challenge in functional global testing is to choose the dimension of projection when approximating the functional regression model by a finite dimensional multivariate linear regression model. We ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
This paper studies global testing of the slope function in functional linear regression models. A major challenge in functional global testing is choosing the dimension of the projection when approximating the functional regression model by a finite-dimensional multivariate linear regression model. We develop a new method that simultaneously tests the slope vectors in a sequence of functional principal components regression models. The sequence of models being tested is determined by the sample size and is an integral part of the testing procedure. Our theoretical analysis shows that the proposed method is uniformly powerful over a class of smooth alternatives when the signal-to-noise ratio exceeds the detection boundary. The methods and results reflect the deep connection between the functional linear regression model and the Gaussian sequence model. We also present an extensive simulation study and a real data example to illustrate the finite sample performance of our method.
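The simultaneous-testing idea, fitting a sequence of principal components regression models whose dimensions are driven by the sample size and rejecting when some model explains too much variance, can be caricatured as below. The dyadic grid and the permutation calibration are stand-ins chosen for the sketch; the paper derives its own critical values and detection boundary.

```python
import numpy as np

def max_pcr_stat(scores_full, Y, grid):
    """Maximum, over a grid of dimensions m, of the explained sum of
    squares when regressing Y on the first m PC scores."""
    Yc = Y - Y.mean()
    stats_m = []
    for m in grid:
        S = scores_full[:, :m]
        coef, *_ = np.linalg.lstsq(S, Yc, rcond=None)
        fit = S @ coef
        stats_m.append(fit @ fit)    # explained sum of squares at dimension m
    return max(stats_m)

def global_pcr_test(X, Y, n_perm=200, level=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(Y)
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / n
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]
    scores = Xc @ eigvecs
    # Sample-size-driven dyadic grid of dimensions (an illustrative choice).
    grid = [2 ** k for k in range(int(np.log2(np.sqrt(n))) + 1)]
    T_obs = max_pcr_stat(scores, Y, grid)
    # Calibrate the maximum statistic by permuting the responses.
    T_perm = [max_pcr_stat(scores, rng.permutation(Y), grid)
              for _ in range(n_perm)]
    return T_obs > np.quantile(T_perm, 1 - level)   # True = reject null slope
```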
Nonasymptotic Adaptive Prediction in Functional Linear Models
2013
"... Functional linear regression has recently attracted considerable interest. Many works focus on asymptotic inference. In this paper we consider in a non asymptotic framework a simple estimation procedure based on functional Principal Regression. It revolves in the minimization of a least square contr ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
(Show Context)
Functional linear regression has recently attracted considerable interest. Many works focus on asymptotic inference. In this paper we consider, in a non-asymptotic framework, a simple estimation procedure based on functional Principal Components Regression. It relies on minimizing a least-squares contrast coupled with a classical projection onto the space spanned by the first m empirical eigenvectors of the covariance operator of the functional sample. The novelty of our approach is to select the crucial dimension m automatically, by minimizing a penalized least-squares contrast. Our method is based on model selection tools. Yet, since such methods usually consist in projecting onto known nonrandom spaces, we need to adapt them to an empirical eigenbasis made of data-dependent, hence random, vectors. The resulting estimator is fully adaptive and is shown to satisfy an oracle inequality for the risk associated with the prediction error and to attain optimal minimax rates of convergence over a certain class of ellipsoids. Our model selection strategy is finally compared numerically with cross-validation.
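A minimal sketch of the procedure described in this abstract, assuming discretized curves and a penalty of the simple form κ·m·σ²/n with a crude variance proxy (the paper's penalty and its calibration for the random eigenbasis are more refined):

```python
import numpy as np

def fpcr_penalized(X, Y, m_max=10, kappa=2.0):
    """Functional PCR with penalized least-squares choice of the dimension m.
    X: (n, p) discretized curves; Y: (n,) scalar responses."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean()
    # Empirical covariance operator and its (data-dependent) eigenbasis.
    cov = Xc.T @ Xc / n
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]
    sigma2 = np.var(Yc)          # crude noise proxy used in the penalty
    best_m, best_crit = 1, np.inf
    for m in range(1, m_max + 1):
        scores = Xc @ eigvecs[:, :m]      # projection on first m eigenvectors
        coef, *_ = np.linalg.lstsq(scores, Yc, rcond=None)
        resid = Yc - scores @ coef
        crit = np.mean(resid ** 2) + kappa * m * sigma2 / n  # penalized contrast
        if crit < best_crit:
            best_m, best_crit = m, crit
    return best_m
```

The selected dimension best_m is then used for the final prediction rule, playing the role of the automatically chosen m in the abstract.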