Results 1–10 of 16
Root-n Consistent and Optimal Density Estimators for Moving Average Processes
Abstract

Cited by 21 (17 self)
The marginal density of a first-order moving average process can be written as a convolution of two innovation densities. Saavedra and Cao (2000) propose to estimate the marginal density by plugging in kernel density estimators for the innovation densities, based on estimated innovations. They obtain that for an appropriate choice of bandwidth the variance of their estimator decreases at the rate 1/n. Their estimator can be interpreted as a specific U-statistic. We suggest a slightly simplified U-statistic as estimator of the marginal density, prove that it is asymptotically normal at the same rate, and describe the asymptotic variance explicitly. We show that the estimator is asymptotically efficient if no structural assumptions are made on the innovation density. For innovation densities known to have mean zero or to be symmetric, we describe improvements of our estimator which are again asymptotically efficient.
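The U-statistic construction the abstract describes can be sketched as follows: average a kernel over all pairwise sums eps_i + theta * eps_j with i != j, mirroring the convolution of the densities of eps and theta * eps in an MA(1) model X_t = eps_t + theta * eps_{t-1}. This is a minimal illustration with our own names, a Gaussian kernel, and true innovations standing in for estimated ones; it is not the paper's exact estimator.

```python
import numpy as np

def ma1_marginal_density(x, innovations, theta, bandwidth):
    """U-statistic estimate of the MA(1) marginal density at x.

    For X_t = eps_t + theta * eps_{t-1}, the marginal density is the
    convolution of the densities of eps and theta * eps; we average a
    Gaussian kernel over the pairwise sums eps_i + theta * eps_j, i != j.
    `innovations` stands in for (estimated) residuals.
    """
    eps = np.asarray(innovations, float)
    n = len(eps)
    pairs = eps[:, None] + theta * eps[None, :]   # all eps_i + theta * eps_j
    off_diag = pairs[~np.eye(n, dtype=bool)]      # drop the i == j terms
    z = (x - off_diag) / bandwidth
    return float(np.mean(np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)) / bandwidth)
```

Because every kernel term integrates to one, the resulting curve is a genuine density estimate; the pairwise averaging over n(n-1) terms is what drives the variance down to the 1/n rate discussed above.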
Estimating Linear Functionals of the Error Distribution in Nonparametric Regression
Abstract

Cited by 17 (9 self)
This paper addresses estimation of linear functionals of the error distribution in nonparametric regression models. It derives an i.i.d. representation for the empirical estimator based on residuals, using undersmoothed estimators for the regression curve. Asymptotic efficiency of the estimator is proved. Estimation of the error variance is discussed in detail.
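The error-variance case mentioned at the end can be sketched concretely: fit the regression curve with a kernel smoother, form residuals, center them, and take their mean square. This is only an illustration under our own assumptions (Nadaraya-Watson fit, Gaussian kernel, a bandwidth we pick by hand); the paper treats general linear functionals and undersmoothing rigorously.

```python
import numpy as np

def error_variance(x, y, bandwidth):
    """Estimate Var(eps) in y = m(x) + eps from residuals of a
    Nadaraya-Watson kernel regression fit.

    Illustrative sketch: center the residuals (the empirical estimator
    of the error distribution has mean zero) and take their mean square.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    z = (x[:, None] - x[None, :]) / bandwidth
    w = np.exp(-0.5 * z**2)            # Gaussian kernel weights
    m_hat = (w @ y) / w.sum(axis=1)    # fitted regression curve at each x_i
    resid = y - m_hat
    resid -= resid.mean()              # center the residuals
    return float(np.mean(resid**2))
```

A small bandwidth (undersmoothing, as in the paper) keeps the curve-estimation bias from leaking into the residuals, at the cost of a slightly noisier fit.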
Estimating the Innovation Distribution in Nonlinear Autoregressive Models
 Department of Mathematics, University of Siegen. http://www.math.uni-siegen.de/statistik/wefelmeyer.html
Abstract

Cited by 17 (17 self)
The usual estimator for the expectation of a function under the innovation distribution of a nonlinear autoregressive model is the empirical estimator based on estimated innovations. It can be improved by exploiting that the innovation distribution has mean zero. We show that the resulting estimator is efficient if the innovations are estimated with an efficient estimator for the autoregression parameter. Efficiency of this estimator is necessary except when the expectation of the function can be estimated adaptively. Analogous results hold for heteroscedastic models.
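One standard way to exploit the mean-zero constraint, sketched below, is a control-variate correction: subtract from the empirical mean of h over the residuals the estimated regression coefficient of h(eps) on eps times the residual mean. This is our illustrative stand-in, not necessarily the paper's exact construction.

```python
import numpy as np

def improved_mean(h_values, residuals):
    """Estimate E[h(eps)] exploiting the constraint E[eps] = 0.

    Subtracts c * mean(residuals), with c the least-squares regression
    coefficient of h(eps) on eps: a control-variate correction that
    uses the known zero mean of the innovation distribution.
    """
    h = np.asarray(h_values, float)
    e = np.asarray(residuals, float)
    c = np.mean((h - h.mean()) * (e - e.mean())) / np.var(e)
    return float(h.mean() - c * e.mean())
```

The correction removes the component of the sampling error of the empirical mean that is explained by the (known to be zero) innovation mean, which is where the variance reduction comes from.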
Estimating invariant laws of linear processes by U-statistics
Abstract

Cited by 11 (10 self)
Suppose we observe an invertible linear process with independent mean zero innovations, and with coefficients depending on a finite-dimensional...
Improved estimators for constrained Markov chain models
Abstract

Cited by 7 (7 self)
Suppose we observe an ergodic Markov chain and know that the stationary law of one or two successive observations fulfills a linear constraint. We show how to improve given estimators exploiting this knowledge, and prove that the best of these estimators is efficient.
Asymptotics for nonparametric regression
 Sankhyā A
, 1993
"... the error distribution function in ..."
EFFICIENT PREDICTION FOR LINEAR AND NONLINEAR AUTOREGRESSIVE MODELS
, 2006
Abstract

Cited by 5 (4 self)
Conditional expectations given past observations in stationary time series are usually estimated directly by kernel estimators, or by plugging in kernel estimators for transition densities. We show that, for linear and nonlinear autoregressive models driven by independent innovations, appropriate smoothed and weighted von Mises statistics of residuals estimate conditional expectations at better parametric rates and are asymptotically efficient. The proof is based on a uniform stochastic expansion for smoothed and weighted von Mises processes of residuals. We consider, in particular, estimation of conditional distribution functions and of conditional quantile functions.
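The residual-based idea can be sketched in its simplest, unsmoothed form for an AR(1) model X_t = theta * X_{t-1} + eps_t: estimate theta by least squares, form residuals, and average h over theta_hat * x + residuals. This is an illustrative von Mises statistic of residuals; the paper's efficient estimators additionally smooth and weight the residuals.

```python
import numpy as np

def predict_conditional(h, x, series):
    """Estimate E[h(X_{t+1}) | X_t = x] for an AR(1) model
    X_t = theta * X_{t-1} + eps_t.

    Averages h over theta_hat * x + residuals: an unsmoothed
    residual-based von Mises statistic (illustrative sketch).
    """
    s = np.asarray(series, float)
    theta_hat = np.dot(s[1:], s[:-1]) / np.dot(s[:-1], s[:-1])  # least squares
    resid = s[1:] - theta_hat * s[:-1]
    return float(np.mean(h(theta_hat * x + resid)))
```

Because the average runs over all n residuals rather than only observations with X_t near x, the estimator attains a parametric rather than a nonparametric rate, which is the point the abstract makes.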
Improved density estimators for invertible linear processes
 Communications in Statistics, Theory & Methods
, 2009
Abstract

Cited by 4 (2 self)
Key Words: Convolution estimator; plug-in estimator; local U-statistic; empirical likelihood for dependent data; empirical likelihood with infinitely many constraints; infinite-order moving average process; infinite-order autoregressive process.
The stationary density of a centered invertible linear process can be represented as a convolution of innovation-based densities, and it can be estimated at the parametric rate by plugging residual-based kernel estimators into the convolution representation. We have shown elsewhere that a functional central limit theorem holds both in the space of continuous functions vanishing at infinity, and in weighted L1-spaces. Here we show that we can improve the plug-in estimator considerably, exploiting the information that the innovations are centered, and replacing the kernel estimators by weighted versions, using the empirical likelihood approach.
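The empirical-likelihood weighting step for the single constraint "innovations are centered" can be sketched as follows: maximize sum(log w_i) subject to sum(w_i) = 1 and sum(w_i * eps_i) = 0, which gives w_i = 1 / (n * (1 + lam * eps_i)) with lam solved numerically. This is only the weighting step under our own setup (Newton's method, one constraint); the paper plugs such weights into the kernel estimators.

```python
import numpy as np

def el_weights(residuals, tol=1e-10):
    """Empirical-likelihood weights for the mean-zero constraint.

    Maximizes sum(log w_i) subject to sum(w_i) = 1 and
    sum(w_i * eps_i) = 0, giving w_i = 1 / (n * (1 + lam * eps_i));
    the multiplier lam is found by Newton's method.
    """
    e = np.asarray(residuals, float)
    n = len(e)
    lam = 0.0
    for _ in range(100):
        denom = 1.0 + lam * e
        g = np.sum(e / denom)                    # constraint equation, want g = 0
        if abs(g) < tol:
            break
        lam -= g / (-np.sum((e / denom) ** 2))   # Newton step, g'(lam) < 0
    return 1.0 / (n * (1.0 + lam * e))
```

At the solution the weights automatically sum to one, so replacing the uniform weights 1/n by w_i in a residual-based kernel estimator keeps it a density while enforcing the centering constraint.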
Estimators for Models with Constraints Involving Unknown Parameters
Abstract

Cited by 3 (3 self)
Suppose we have independent observations from a distribution which we know to fulfill a finite-dimensional linear constraint involving an unknown finite-dimensional parameter. We construct efficient estimators for finite-dimensional functionals of the distribution. The estimators are obtained by first constructing an efficient estimator for the functional when the parameter is known, and then replacing the parameter by an efficient estimator. We consider in particular estimation of expectations.
Improved estimators for constrained Markov chain models
, 2001
Abstract
Suppose we observe an ergodic Markov chain and know that the stationary law of one or two successive observations fulfills a linear constraint. We show how to improve the given estimators exploiting this knowledge, and prove that the best of these estimators is efficient. © 2001 Elsevier Science B.V. All rights reserved. MSC: primary 62M05; secondary 62G05; 62G20