Results 1–10 of 11
Root n Consistent and Optimal Density Estimators for Moving Average Processes
Abstract

Cited by 21 (17 self)
The marginal density of a first-order moving average process can be written as the convolution of two innovation densities. Saavedra and Cao (2000) propose to estimate the marginal density by plugging in kernel density estimators for the innovation densities, based on estimated innovations. They obtain that for an appropriate choice of bandwidth the variance of their estimator decreases at the rate 1/n. Their estimator can be interpreted as a specific U-statistic. We suggest a slightly simplified U-statistic as estimator of the marginal density, prove that it is asymptotically normal at the same rate, and describe the asymptotic variance explicitly. We show that the estimator is asymptotically efficient if no structural assumptions are made on the innovation density. For innovation densities known to have mean zero or to be symmetric, we describe improvements of our estimator which are again asymptotically efficient.
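The idea behind the U-statistic estimator can be sketched in a few lines. For an MA(1) process X_t = e_t + theta * e_{t-1}, the marginal density is the convolution of the densities of e_t and theta * e_{t-1}; averaging a kernel over all pairs of estimated innovations estimates that convolution directly. The sketch below is our own illustration (function names, the moment estimate of theta, and the recursive residual recovery are our simplifications, not the paper's construction):

```python
import numpy as np

def ma1_marginal_density(x_grid, X, bandwidth=0.4):
    """U-statistic-type plug-in estimate of the marginal density of an
    MA(1) process X_t = e_t + theta * e_{t-1} (illustrative sketch)."""
    n = len(X)
    # Moment estimate of theta from the lag-1 autocorrelation
    # rho = theta / (1 + theta^2); take the invertible root (|theta| < 1).
    rho = np.corrcoef(X[:-1], X[1:])[0, 1]
    rho = np.clip(rho, -0.49, 0.49)
    theta = (1 - np.sqrt(1 - 4 * rho**2)) / (2 * rho) if rho != 0 else 0.0
    # Recover innovations recursively: e_t = X_t - theta * e_{t-1}
    eps = np.zeros(n)
    eps[0] = X[0]
    for t in range(1, n):
        eps[t] = X[t] - theta * eps[t - 1]
    # Gaussian kernel, normalized to integrate to one
    def kern(u):
        return np.exp(-0.5 * (u / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    # U-statistic: average the kernel at x - e_i - theta * e_j over pairs i != j
    dens = np.empty(len(x_grid))
    for k, x in enumerate(x_grid):
        vals = kern(x - eps[:, None] - theta * eps[None, :])
        np.fill_diagonal(vals, 0.0)
        dens[k] = vals.sum() / (n * (n - 1))
    return dens
```

Because each pair (i, j) contributes, the estimator averages roughly n² kernel terms rather than n, which is the source of the 1/n variance rate discussed in the abstract.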
Improved estimators for constrained Markov chain models
Abstract

Cited by 7 (7 self)
Suppose we observe an ergodic Markov chain and know that the stationary law of one or two successive observations fulfills a linear constraint. We show how to improve given estimators exploiting this knowledge, and prove that the best of these estimators is efficient.
Efficient Estimation in Invertible Linear Processes
Abstract

Cited by 7 (7 self)
An invertible causal linear process is a process which has infinite order moving average and autoregressive representations. We assume that the coefficients in these representations depend on a Euclidean parameter, while the corresponding innovations have an unknown centered distribution with some moment restrictions. We discuss efficient estimation of differentiable functionals in such a semiparametric model. For this we first obtain a suitable semiparametric version of local asymptotic normality and then use Hajek's convolution theorem to characterize efficient estimators. Then we apply this result to construct efficient estimators of the Euclidean parameter and of linear functionals of the innovation distribution.
EFFICIENT PREDICTION FOR LINEAR AND NONLINEAR AUTOREGRESSIVE MODELS
, 2006
Abstract

Cited by 5 (4 self)
Conditional expectations given past observations in stationary time series are usually estimated directly by kernel estimators, or by plugging in kernel estimators for transition densities. We show that, for linear and nonlinear autoregressive models driven by independent innovations, appropriate smoothed and weighted von Mises statistics of residuals estimate conditional expectations at better parametric rates and are asymptotically efficient. The proof is based on a uniform stochastic expansion for smoothed and weighted von Mises processes of residuals. We consider, in particular, estimation of conditional distribution functions and of conditional quantile functions.
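The contrast the abstract draws can be made concrete. A kernel estimator smooths q(X_{t+1}) in X_t directly; the residual-based approach instead estimates the model, then averages q over the empirical innovation distribution shifted to the conditioning point. The following is a deliberately bare sketch for a linear AR(1) model (all names are ours, and it omits the smoothing and weighting that the paper's efficient estimator requires):

```python
import numpy as np

def residual_conditional_mean(q, x, X):
    """Residual-based estimate of E[q(X_{t+1}) | X_t = x] for an AR(1)
    model X_t = theta * X_{t-1} + e_t with i.i.d. innovations.
    Illustrative sketch only; not the paper's smoothed/weighted statistic."""
    # Least-squares estimate of the autoregression parameter
    theta = np.dot(X[:-1], X[1:]) / np.dot(X[:-1], X[:-1])
    # Estimated innovations (residuals)
    resid = X[1:] - theta * X[:-1]
    # Under the model, X_{t+1} given X_t = x has the law of theta*x + e,
    # so average q over the empirical distribution of the residuals.
    return float(np.mean(q(theta * x + resid)))
```

Taking q to be an indicator, q(z) = 1{z <= y}, yields an estimate of the conditional distribution function mentioned in the abstract; inverting it in y gives conditional quantiles.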
Some Developments in Semiparametric Statistics
Abstract
In this paper we describe the historical development of some parts of semiparametric statistics. The emphasis is on efficient estimation. We understand "semiparametric model" in the general sense of a model that is neither parametric nor nonparametric. We restrict attention to models with independent and identically distributed observations and to time series.
Improved estimators for constrained Markov chain models
, 2001
Abstract
Suppose we observe an ergodic Markov chain and know that the stationary law of one or two successive observations fulfills a linear constraint. We show how to improve the given estimators exploiting this knowledge, and prove that the best of these estimators is efficient. © 2001 Elsevier Science B.V. All rights reserved. MSC: primary 62M05; secondary 62G05; 62G20
Efficient prediction for linear and nonlinear autoregressive models
Abstract
Conditional expectations given past observations in stationary time series are usually estimated directly by kernel estimators, or by plugging in kernel estimators for transition densities. We show that for linear and nonlinear autoregressive models driven by independent innovations, appropriate smoothed and weighted von Mises statistics of residuals estimate conditional expectations at better, parametric rates and are asymptotically efficient. The proof is based on a uniform stochastic expansion for smoothed and weighted von Mises processes of residuals. We consider in particular estimation of conditional distribution functions and of conditional quantile functions.
CHAPTER 1 EFFICIENT ESTIMATORS FOR TIME SERIES
Abstract
We illustrate several recent results on efficient estimation for semiparametric time series models with a simple class of models: first-order nonlinear autoregression with independent innovations. We consider in particular estimation of the autoregression parameter, the innovation distribution, conditional expectations, the stationary distribution, the stationary density, and higher-order transition densities.
Universität Bremen
Abstract
We illustrate several recent results on efficient estimation for semiparametric time series models with two types of AR(1) models: those with independent and centered innovations, and those with general and conditionally centered innovations. We consider in particular estimation of the autoregression parameter, the stationary distribution, the innovation distribution, and the stationary density.