Results 1-10 of 43
Kernel dimension reduction in regression, 2006
Cited by 51 (17 self)
Context: "Acknowledgements. The authors thank the editor and anonymous referees for their helpful comments. The authors also thank Dr. Yoichi Nishiyama for his helpful comments on the uniform convergence of empirical processes. We would like to acknowledge support from JSPS KAKENHI 15700241, ..."
Contour regression: A general approach to dimension reduction, Ann. Statist., 2005
Rodeo: Sparse, greedy nonparametric regression, Ann. Statist., 2008
Cited by 26 (2 self)
Abstract: We present a greedy method for simultaneously performing local bandwidth selection and variable selection in nonparametric regression. The method starts with a local linear estimator with large bandwidths, and incrementally decreases the bandwidth of variables for which the gradient of the estimator with respect to bandwidth is large. The method, called rodeo (regularization of derivative expectation operator), conducts a sequence of hypothesis tests to threshold derivatives, and is easy to implement. Under certain assumptions on the regression function and sampling density, it is shown that the rodeo applied to local linear smoothing avoids the curse of dimensionality, achieving near-optimal minimax rates of convergence in the number of relevant variables, as if these variables were isolated in advance.
A constructive approach to the estimation of dimension reduction directions, 2007
Cited by 23 (0 self)
Abstract: In this paper, we propose two new methods to estimate the dimension-reduction directions of the central subspace (CS) by constructing a regression model such that the directions are all captured in the regression mean. Compared with the inverse regression estimation methods (e.g. Li, 1991, 1992; Cook and Weisberg, 1991), the new methods require no strong assumptions on the design of covariates or the functional relation between the regressors and the response variable, and have better performance than the inverse regression estimation methods for finite samples. Compared with the direct regression estimation methods (e.g. Härdle and Stoker, 1989; Hristache, Juditski, Polzehl and Spokoiny, 2001; Xia, Tong, Li and Zhu, 2002), which can only estimate the directions of the CS in the regression mean, the new methods can detect the directions of the CS exhaustively. Consistency of the estimators and convergence of the corresponding algorithms are proved.
Structural adaptation via Lp-norm oracle inequalities, arXiv:math.ST/0704.2492, 2007
Cited by 20 (7 self)
Abstract: In this paper we study the problem of adaptive estimation of a multivariate function satisfying some structural assumption. We propose a novel estimation procedure that adapts simultaneously to the unknown structure and smoothness of the underlying function. The problem of structural adaptation is stated as the problem of selection from a given collection of estimators. We develop a general selection rule and establish for it global oracle inequalities under arbitrary Lp-losses. These results are applied to adaptive estimation in the additive multi-index model.
Fourier Methods for Estimating the Central Subspace and the Central Mean Subspace in Regression
Cited by 13 (1 self)
Abstract: In high-dimensional regression, it is important to estimate the central and central mean subspaces, to which the projections of the predictors preserve sufficient information about the response and the mean response, respectively. Using the Fourier transform, we have derived the candidate matrices whose column spaces recover the central and central mean subspaces exhaustively. Under the normality assumption on the predictors, explicit estimates of the central and central mean subspaces are derived. Bootstrap procedures are used for determining dimensionality and choosing tuning parameters. Simulation results and an application to a real data set are reported. Our methods demonstrate competitive performance compared to SIR, SAVE and other existing methods. The approach proposed in the paper provides a novel view of sufficient dimension reduction and may lead to more powerful tools in the future.
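For context, sliced inverse regression (SIR, Li 1991), one of the baselines this abstract compares against, can be sketched in a few lines; the slice count and function names here are our choices, not anything from the paper:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced inverse regression: whiten X, average it within slices of
    the sorted response, and take the top principal directions of the
    slice means, mapped back to the original predictor scale."""
    n, d = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # Whitening transform: Z = (X - mu) @ cov^{-1/2}
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ inv_sqrt
    # Slice by the order of y and accumulate the slice-mean covariance
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((d, d))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Top eigenvectors of M, mapped back through the whitening
    w, v = np.linalg.eigh(M)
    dirs = inv_sqrt @ v[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)
```

With a single-index response this recovers the index direction up to sign, which is the behavior the Fourier methods above are benchmarked against.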
Endogeneity in Semiparametric Binary Random Coefficient Models, discussion paper, 2008
Cited by 12 (0 self)
Abstract: In this paper we consider endogenous regressors in the binary choice model under a weak median exclusion restriction, but without further specification of the distribution of the unobserved random components. As a particularly relevant example of a structural model for which no semiparametric estimator has as of yet been analyzed, we consider the binary random coefficients model with endogenous regressors. However, many of the arguments we make hold more generally in all endogenous binary choice models with heteroscedasticity. We focus on the estimation of a centrality parameter β, because even in random coefficient models it is usually an average effect, and not the entire distribution of coefficients, that is of interest. We use a control function IV assumption to identify a centrality parameter that has the interpretation of a local average structural effect of the regressor on the latent variable, and establish identification based on the mean ratio of derivatives of two functions of the instruments. We propose an estimator based on sample counterparts and discuss its large sample behavior. In particular, we show √n-consistency and derive the asymptotic distribution. In the same framework, we propose tests for heteroscedasticity, overidentification and endogeneity. We analyze the small sample performance through a simulation study. An application of the model to IO demand data concludes the paper.
A new algorithm for estimating the effective dimension-reduction subspace, Journal of Machine Learning Research, 2008
Cited by 12 (7 self)
Abstract: The statistical problem of estimating the effective dimension-reduction (EDR) subspace in the multi-index regression model with deterministic design and additive noise is considered. A new procedure for recovering the directions of the EDR subspace is proposed. Many methods for estimating the EDR subspace perform principal component analysis on a family of vectors, say β̂_1, ..., β̂_L, nearly lying in the EDR subspace. This is in particular the case for the structure-adaptive approach proposed by Hristache et al. (2001a). In the present work, we propose to estimate the projector onto the EDR subspace by the solution to the optimization problem

    minimize  max_{ℓ=1,...,L}  β̂_ℓᵀ (I − A) β̂_ℓ   subject to  A ∈ A_{m*},

where A_{m*} is the set of all symmetric matrices with eigenvalues in [0, 1] and trace less than or equal to m*, with m* being the true structural dimension. Under mild assumptions, √n-consistency of the proposed procedure is proved (up to a logarithmic factor) in the case when the structural dimension is not larger than 4. Moreover, the stochastic error of the estimator of the projector onto the EDR subspace is shown to depend on L logarithmically. This enables us to use a large number of vectors β̂_ℓ for estimating the EDR subspace. The empirical behavior of the algorithm is studied through numerical simulations.
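The stated problem is convex in A (a pointwise maximum of linear functions over a spectrahedron), so a minimal projected-subgradient sketch can illustrate it. This is our construction, not the authors' algorithm; the step size and iteration count are arbitrary choices:

```python
import numpy as np

def project_feasible(A, m_star):
    """Project a symmetric matrix onto {eigenvalues in [0,1], trace <= m*}
    by shifting and clipping its eigenvalues (bisection on the shift tau)."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    clipped = np.clip(w, 0.0, 1.0)
    if clipped.sum() <= m_star:
        return V @ np.diag(clipped) @ V.T
    lo, hi = 0.0, w.max()
    for _ in range(60):
        tau = (lo + hi) / 2
        if np.clip(w - tau, 0.0, 1.0).sum() > m_star:
            lo = tau
        else:
            hi = tau
    return V @ np.diag(np.clip(w - tau, 0.0, 1.0)) @ V.T

def estimate_projector(betas, m_star, steps=400, lr=0.05):
    """Minimize max_l b_l'(I - A) b_l by projected subgradient descent:
    the subgradient at the worst-residual index l is -b_l b_l'."""
    L, d = betas.shape
    A = np.zeros((d, d))
    for _ in range(steps):
        resid = np.einsum('li,ij,lj->l', betas, np.eye(d) - A, betas)
        worst = np.argmax(resid)
        A = project_feasible(A + lr * np.outer(betas[worst], betas[worst]), m_star)
    return A
```

When the β̂_ℓ span a 2-dimensional subspace of R³ and m* = 2, the iterates converge to the orthogonal projector onto that subspace, which is the estimator the abstract describes.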
Rodeo: Sparse nonparametric regression in high dimensions, in Advances in Neural Information Processing Systems (NIPS), 2005
Cited by 11 (3 self)
Abstract: We present a method for simultaneously performing bandwidth selection and variable selection in nonparametric regression. The method starts with a local linear estimator with large bandwidths, and incrementally decreases the bandwidth in directions where the gradient of the estimator with respect to bandwidth is large. When the unknown function satisfies a sparsity condition, the approach avoids the curse of dimensionality. The method, called rodeo (regularization of derivative expectation operator), conducts a sequence of hypothesis tests, and is easy to implement. A modified version that replaces testing with soft thresholding may be viewed as solving a sequence of lasso problems. When applied in one dimension, the rodeo yields a method for choosing the locally optimal bandwidth.
Bayesian Estimation in Single-Index Models
Cited by 8 (0 self)
Abstract: Single-index models offer a flexible semiparametric regression framework for high-dimensional predictors. Bayesian methods have never been proposed for such models. We develop a Bayesian approach incorporating some frequentist methods: B-splines approximate the link function, the prior on the index vector is Fisher-von Mises, and regularization with generalized cross-validation is adopted to avoid overfitting the link function. A random-walk Metropolis algorithm is used to sample from the posterior. Simulation results indicate that our procedure provides some improvement over the best frequentist method available. Two data examples are included.
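A stripped-down version of such a sampler can be sketched as follows. As labeled in the comments, it substitutes a polynomial least-squares fit for the B-spline link, a flat prior on the sphere for the Fisher-von Mises prior, and omits the generalized cross-validation regularization, so it illustrates only the random-walk Metropolis step on the index vector:

```python
import numpy as np

def profile_loglik(theta, X, y, degree=3, sigma2=0.1):
    """Profile log-likelihood of the index theta: fit the link g by
    polynomial least squares (a crude stand-in for the paper's
    B-splines) and score the residual sum of squares."""
    t = X @ theta
    coef = np.polyfit(t, y, degree)
    resid = y - np.polyval(coef, t)
    return -0.5 * np.sum(resid ** 2) / sigma2

def rw_metropolis_index(X, y, n_iter=1500, step=0.1, seed=0):
    """Random-walk Metropolis on the unit sphere for the index vector,
    with a flat prior on the sphere (the paper uses Fisher-von Mises)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = np.ones(d) / np.sqrt(d)
    ll = profile_loglik(theta, X, y)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(d)
        prop /= np.linalg.norm(prop)          # renormalize to the sphere
        ll_prop = profile_loglik(prop, X, y)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        draws.append(theta.copy())
    return np.array(draws)
```

The index is identified only up to sign here, since the fitted link can absorb a reflection; any comparison against a true direction should therefore be made in absolute value.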