Results 1–10 of 19
On the Estimation of the Marginal Density of a Moving Average Process
Abstract

Cited by 22 (0 self)
The authors present a new convolution-type kernel estimator of the marginal density of an MA(1) process with general error distribution. They prove the √n-consistency of the nonparametric estimator and give asymptotic expressions for the mean square and the integrated mean square error of an unobservable version of the estimator. An extension to MA(q) processes is presented in the case of the mean integrated square error. Finally, a simulation study shows the good practical behaviour of the estimator and the strong connection between the estimator and its unobservable version in terms of the choice of the bandwidth.
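The convolution idea behind this line of work can be sketched numerically. The following is a hypothetical illustration, not the paper's exact estimator: for X_t = ε_t + θε_{t-1}, the marginal density is the convolution of the densities of ε_t and θε_{t-1}, and plugging a kernel estimator into that convolution yields an average over all pairs of innovations. All names and parameter values here are our own choices, and the innovations are assumed observable to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch (not the paper's exact estimator): the marginal
# density of X_t = eps_t + theta * eps_{t-1} is the convolution of the
# densities of eps_t and theta * eps_{t-1}.  Plugging a kernel density
# estimator into this convolution yields the pair average
#   f_hat(x) = (1 / n^2) * sum_i sum_j K_b(x - eps_i - theta * eps_j).
theta, n, b = 0.5, 2000, 0.3
eps = rng.standard_normal(n)  # innovations, assumed observable here

def f_hat(x, eps, theta, b):
    # Gaussian kernel K_b(u) = phi(u / b) / b averaged over all pairs
    u = x - (eps[:, None] + theta * eps[None, :])
    return np.mean(np.exp(-0.5 * (u / b) ** 2) / (b * np.sqrt(2 * np.pi)))

# For N(0, 1) innovations the true marginal is N(0, 1 + theta^2),
# whose density at 0 is 1 / sqrt(2 * pi * 1.25), about 0.357.
estimate = f_hat(0.0, eps, theta, b)
```

Averaging the kernel over all n² pairs, rather than over n observations, is what drives the faster-than-nonparametric rate discussed in these papers.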
Root n Consistent and Optimal Density Estimators for Moving Average Processes
Abstract

Cited by 21 (17 self)
The marginal density of a first-order moving average process can be written as a convolution of two innovation densities. Saavedra and Cao (2000) propose to estimate the marginal density by plugging in kernel density estimators for the innovation densities, based on estimated innovations. They show that for an appropriate choice of bandwidth the variance of their estimator decreases at the rate 1/n. Their estimator can be interpreted as a specific U-statistic. We suggest a slightly simplified U-statistic as estimator of the marginal density, prove that it is asymptotically normal at the same rate, and describe the asymptotic variance explicitly. We show that the estimator is asymptotically efficient if no structural assumptions are made on the innovation density. For innovation densities known to have mean zero or to be symmetric, we describe improvements of our estimator which are again asymptotically efficient.
Root n Consistent Density Estimators for Sums of Independent Random Variables
Abstract

Cited by 20 (10 self)
The density of a sum of independent random variables can be estimated by the convolution of kernel estimators for the marginal densities. We show under mild conditions that the resulting estimator is root-n consistent and converges in distribution in the spaces C0(R) and L1 to a centered Gaussian process.
Uniformly root-n consistent density estimators for weakly dependent invertible linear processes
 Ann. Statist
, 2007
Abstract

Cited by 12 (9 self)
Convergence rates of kernel density estimators for stationary time series are well studied. For invertible linear processes, we construct a new density estimator that converges, in the supremum norm, at the better, parametric, rate n^{-1/2}. Our estimator is a convolution of two different residual-based kernel estimators. We obtain in particular convergence rates for such residual-based kernel estimators; these results are of independent interest.
Estimating invariant laws of linear processes by U-statistics
Abstract

Cited by 11 (10 self)
Suppose we observe an invertible linear process with independent mean-zero innovations, and with coefficients depending on a finite-dimensional...
Root-n consistency in weighted L1-spaces for density estimators of invertible linear processes
, 2008
Abstract

Cited by 8 (8 self)
The stationary density of an invertible linear process can be estimated at the parametric rate by a convolution of residual-based kernel estimators. We have shown elsewhere that the convergence is uniform and that a functional central limit theorem holds in the space of continuous functions vanishing at infinity. Here we show that analogous results hold in weighted L1-spaces. We do not require smoothness of the innovation density. AMS 2000 subject classification: Primary 62G07, 62G20, 62M05, 62M10. Key words and phrases: kernel estimator, plug-in estimator, tightness criteria, functional
Improved density estimators for invertible linear processes
 Communications in Statistics, Theory & Methods
, 2009
Abstract

Cited by 4 (2 self)
Key words: Convolution estimator; plug-in estimator; local U-statistic; empirical likelihood for dependent data; empirical likelihood with infinitely many constraints; infinite-order moving average process; infinite-order autoregressive process. The stationary density of a centered invertible linear process can be represented as a convolution of innovation-based densities, and it can be estimated at the parametric rate by plugging residual-based kernel estimators into the convolution representation. We have shown elsewhere that a functional central limit theorem holds both in the space of continuous functions vanishing at infinity and in weighted L1-spaces. Here we show that we can improve the plug-in estimator considerably, exploiting the information that the innovations are centered and replacing the kernel estimators by weighted versions, using the empirical likelihood approach.
Non-Standard Behavior of Density Estimators for Sums of Squared Observations
 Statist. Decisions
, 2009
Abstract

Cited by 4 (4 self)
Densities of functions of two or more independent random variables can be estimated by local U-statistics. 1. The density estimator of a sum of squares of independent observations typically slows down by a logarithmic factor. For exponents greater than two, the estimator behaves like a classical density estimator. 2. The density estimator of a product of two independent observations typically has the root-n rate pointwise, but not in Lp-norms. An application is given to semi-Markov processes and estimation of an interarrival density that depends multiplicatively on the jump size. 3. The stationary density of a nonlinear or nonparametric autoregressive time series driven by independent innovations can be estimated by a local U-statistic (now based on dependent observations and involving additional parameters), but the root-n rate can fail if the derivative of the autoregression function vanishes at some point.
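The local U-statistic idea for functions of two independent observations can be sketched for the product case mentioned in point 2. This is a hedged toy illustration under assumptions of our own (standard normal observations, a Gaussian kernel), not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hedged sketch of a local U-statistic for the density of a product of
# two independent observations (toy illustration; names are ours):
#   f_hat(z) = (1 / (n * (n - 1))) * sum_{i != j} K_b(z - X_i * X_j)
n, b = 2000, 0.1
X = rng.standard_normal(n)

def f_hat(z, X, b):
    prod = X[:, None] * X[None, :]
    np.fill_diagonal(prod, np.nan)  # drop the i == j diagonal terms
    u = (z - prod) / b
    return np.nanmean(np.exp(-0.5 * u ** 2) / (b * np.sqrt(2 * np.pi)))

# The product of two independent N(0, 1) variables has density
# K_0(|z|) / pi (a modified Bessel function); at z = 1 this is ~0.134.
estimate = f_hat(1.0, X, b)
```

Away from z = 0, where the product density has a logarithmic singularity, the pairwise averaging gives the pointwise root-n behaviour described in the abstract.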
Uniform convergence of convolution estimators for the response density in nonparametric regression.
 Bernoulli
, 2013
Abstract

Cited by 2 (2 self)
We consider a nonparametric regression model Y = r(X) + ε with a random covariate X that is independent of the error ε. Then the density of the response Y is a convolution of the densities of ε and r(X). It can therefore be estimated by a convolution of kernel estimators for these two densities, or more generally by a local von Mises statistic. If the regression function has a nowhere vanishing derivative, then the convolution estimator converges at a parametric rate. We show that the convergence holds uniformly, and that the corresponding process obeys a functional central limit theorem in the space C0(R) of continuous functions vanishing at infinity, endowed with the sup-norm. The estimator is not efficient. We construct an additive correction that makes it efficient.
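The regression version of the convolution estimator can be sketched as follows. This is our own toy setup, not the paper's: we take r(x) = x (so r' never vanishes), estimate r by a Nadaraya-Watson smoother, form residuals, and average a kernel over all pairs, which is a local von Mises statistic in the sense described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hedged sketch (toy setup of our own): with Y = r(X) + eps and X
# independent of eps, f_Y is the convolution of the density of r(X)
# with the error density.  Estimate r by Nadaraya-Watson, form
# residuals, and average a kernel over all pairs:
#   f_hat(y) = (1 / n^2) * sum_i sum_j K_b(y - r_hat(X_i) - eps_hat_j)
n, h, b = 1000, 0.3, 0.3
X = rng.uniform(-2.0, 2.0, n)
eps = 0.5 * rng.standard_normal(n)
Y = X + eps  # true r(x) = x, so the derivative never vanishes

def nw(x):
    # Nadaraya-Watson regression estimate at x with bandwidth h
    w = np.exp(-0.5 * ((x - X) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

r_hat = np.array([nw(x) for x in X])
res = Y - r_hat  # estimated errors eps_hat

def f_hat(y):
    u = y - (r_hat[:, None] + res[None, :])
    return np.mean(np.exp(-0.5 * (u / b) ** 2) / (b * np.sqrt(2 * np.pi)))

# Here Y is Uniform(-2, 2) plus N(0, 0.25) noise, so f_Y(0) is ~0.25.
estimate = f_hat(0.0)
```

The double sum again runs over n² pairs of fitted values and residuals, which is what yields the parametric rate when r' is bounded away from zero.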
Plug-in estimators for higher-order transition densities in autoregression
Abstract

Cited by 1 (1 self)
In this paper we obtain root-n consistency and functional central limit theorems in weighted L1-spaces for plug-in estimators of the two-step transition density in the classical stationary linear autoregressive model of order one, assuming essentially only that the innovation density has bounded variation. We also show that plugging in a properly weighted residual-based kernel estimator for the unknown innovation density improves on plugging in an unweighted residual-based kernel estimator. These weights are chosen to exploit the fact that the innovations have mean zero. If an efficient estimator for the autoregression parameter is used, then the weighted plug-in estimator for the two-step transition density is efficient. Our approach generalizes to invertible linear processes.