Results 1–10 of 10
Root-n Consistent and Optimal Density Estimators for Moving Average Processes
Cited by 21 (17 self)
Abstract
The marginal density of a first-order moving average process can be written as the convolution of two innovation densities. Saavedra and Cao (2000) propose to estimate the marginal density by plugging in kernel density estimators for the innovation densities, based on estimated innovations. They obtain that for an appropriate choice of bandwidth the variance of their estimator decreases at the rate 1/n. Their estimator can be interpreted as a specific U-statistic. We suggest a slightly simplified U-statistic as estimator of the marginal density, prove that it is asymptotically normal at the same rate, and describe the asymptotic variance explicitly. We show that the estimator is asymptotically efficient if no structural assumptions are made on the innovation density. For innovation densities known to have mean zero or to be symmetric, we describe improvements of our estimator which are again asymptotically efficient.
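The U-statistic form of the density estimator can be sketched in Python. This is an illustrative sketch only: it assumes the MA(1) coefficient theta is known and uses the true innovations (the paper works with estimated innovations), with a Gaussian kernel and an arbitrary bandwidth.

```python
import numpy as np

def ma1_marginal_density_ustat(x, innovations, theta, bandwidth):
    """U-statistic estimator of the MA(1) marginal density at x.

    The marginal of X_t = eps_t + theta * eps_{t-1} is the convolution of
    the laws of eps and theta * eps; the estimator averages a kernel over
    all ordered pairs (i, j), i != j, of innovations.  Kernel and bandwidth
    are illustrative assumptions, not the paper's specific choices.
    """
    e = np.asarray(innovations, dtype=float)
    n = len(e)
    s = e[:, None] + theta * e[None, :]            # n x n grid of e_i + theta * e_j
    np.fill_diagonal(s, np.nan)                    # exclude the i == j terms
    u = (x - s) / bandwidth
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi) # Gaussian kernel
    return float(np.nansum(k)) / (bandwidth * n * (n - 1))

rng = np.random.default_rng(0)
eps = rng.standard_normal(500)
# With theta = 0.5 and N(0,1) innovations the true marginal is N(0, 1.25),
# whose density at 0 is about 0.357; the estimate should land nearby.
est = ma1_marginal_density_ustat(0.0, eps, theta=0.5, bandwidth=0.4)
```

Averaging over pairs rather than convolving two separate kernel estimates is what makes the 1/n variance rate plausible: each pair sum e_i + theta * e_j is itself a draw from the marginal law.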
Estimating Linear Functionals of the Error Distribution in Nonparametric Regression
Cited by 17 (9 self)
Abstract
This paper addresses estimation of linear functionals of the error distribution in nonparametric regression models. It derives an i.i.d. representation for the empirical estimator based on residuals, using undersmoothed estimators for the regression curve. Asymptotic efficiency of the estimator is proved. Estimation of the error variance is discussed in detail.
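As a rough illustration of the residual-based empirical estimator, the following Python sketch fits a simple Nadaraya–Watson curve (standing in here for the paper's undersmoothed regression estimator), forms residuals, and plugs them into the empirical estimator of the error variance E[eps^2]. All function names, the kernel, and the bandwidth are assumptions for illustration.

```python
import numpy as np

def nadaraya_watson(x, y, bandwidth):
    """Kernel regression estimate of E[Y|X] evaluated at the sample points
    (Gaussian kernel; a simple, deliberately undersmoothed curve fit)."""
    u = (x[:, None] - x[None, :]) / bandwidth
    w = np.exp(-0.5 * u**2)
    return w @ y / w.sum(axis=1)

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(-1.0, 1.0, n)
eps = rng.normal(0.0, 0.5, n)                  # true error variance: 0.25
y = np.sin(np.pi * x) + eps

residuals = y - nadaraya_watson(x, y, bandwidth=0.1)
var_hat = np.mean(residuals**2)                # empirical estimator of E[eps^2]
```

The point of the i.i.d. representation in the paper is that, with undersmoothing, var_hat behaves asymptotically like the infeasible average of the true squared errors.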
High Moment Partial Sum Processes of Residuals in GARCH Models and Their Applications
2006
Cited by 14 (0 self)
Abstract
In this paper we construct high moment partial sum processes based on residuals of a GARCH model when the mean is known to be 0. We consider partial sums of kth powers of residuals, CUSUM processes and self-normalized partial sum processes. The kth power partial sum process converges to a Brownian process plus a correction term, where the correction term depends on the kth moment µk of the innovation sequence. If µk = 0, then the correction term is 0 and, thus, the kth power partial sum process converges weakly to the same Gaussian process as does the kth power partial sum of the i.i.d. innovation sequence. In particular, since µ1 = 0, this holds for the first moment partial sum process, but fails for the second moment partial sum process. We also consider the CUSUM and the self-normalized processes, that is, processes standardized by the residual sample variance. These behave as if the residuals were asymptotically i.i.d. We also study the joint distribution of the kth and (k + 1)st self-normalized partial sum processes. Applications to change-point problems and goodness-of-fit are considered, in particular, CUSUM statistics for testing GARCH model structure change and the Jarque–Bera omnibus statistic for testing normality of the unobservable innovation distribution of a GARCH model. The use of residuals for constructing a kernel density estimator of the innovation distribution is discussed.
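The self-normalized CUSUM of residuals can be sketched as follows. For simplicity the sketch uses i.i.d. draws in place of fitted GARCH residuals; the message of the paper is that residual-based CUSUMs behave asymptotically like this i.i.d. case. Under the null the statistic follows the Kolmogorov (sup of a Brownian bridge) law, whose 5% critical value is about 1.358.

```python
import numpy as np

def cusum_statistic(residuals):
    """Self-normalized CUSUM: sup_k |S_k - (k/n) S_n| / (s_n * sqrt(n)),
    where S_k is the partial sum of residuals and s_n their sample
    standard deviation.  Large values suggest a structural change."""
    e = np.asarray(residuals, dtype=float)
    n = len(e)
    partial = np.cumsum(e)
    bridge = partial - np.arange(1, n + 1) / n * partial[-1]
    return float(np.abs(bridge).max() / (e.std() * np.sqrt(n)))

rng = np.random.default_rng(2)
stable = rng.standard_normal(600)                          # no change point
shifted = np.concatenate([rng.standard_normal(300),
                          rng.standard_normal(300) + 1.0])  # mean shift at midpoint

stat_stable = cusum_statistic(stable)    # typically small under the null
stat_shifted = cusum_statistic(shifted)  # blows up under a mid-sample shift
```

Self-normalizing by the residual sample standard deviation is what removes the nuisance scale, so the same critical values apply whatever the innovation variance.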
Estimating invariant laws of linear processes by U-statistics
Cited by 11 (10 self)
Abstract
Suppose we observe an invertible linear process with independent mean-zero innovations, and with coefficients depending on a finite-dimensional...
Asymptotic theory for ARCH-SM models: LAN and residual empirical processes
Statist. Sinica, 2005
Cited by 8 (3 self)
Abstract
In this paper we study two asymptotic topics: local asymptotic normality (LAN) and the residual empirical process in a class of ARCH(∞)-SM (stochastic mean) models, which covers finite-order ARCH and GARCH models. First, we establish the LAN for the ARCH(∞)-SM model and, based on it, construct an asymptotically optimal test when the parameter vector contains a nuisance parameter. We also discuss asymptotically efficient estimators for unknown parameters in two cases: (i) the innovation density is known, and (ii) it is unknown. For the residual empirical process, we investigate its asymptotic behavior in ARCH(q)-SM models. We show that, unlike in the usual autoregressive model, the limiting distribution in this case depends on the estimator of the regression parameter as well as on those of the ARCH parameters.
Efficient Estimation in Invertible Linear Processes
Cited by 7 (7 self)
Abstract
An invertible causal linear process is a process which has infinite-order moving average and autoregressive representations. We assume that the coefficients in these representations depend on a Euclidean parameter, while the corresponding innovations have an unknown centered distribution with some moment restrictions. We discuss efficient estimation of differentiable functionals in such a semiparametric model. For this we first obtain a suitable semiparametric version of local asymptotic normality and then use Hájek's convolution theorem to characterize efficient estimators. We then apply this result to construct efficient estimators of the Euclidean parameter and of linear functionals of the innovation distribution.
Some Developments in Semiparametric Statistics
Abstract
In this paper we describe the historical development of some parts of semiparametric statistics. The emphasis is on efficient estimation. We understand a semiparametric model in the general sense of a model that is neither parametric nor nonparametric. We restrict attention to models with independent and identically distributed observations and to time series.
Universität Bremen
Abstract
Suppose we have independent observations from a distribution which we know to fulfill a finite-dimensional linear constraint involving an unknown finite-dimensional parameter. We construct efficient estimators for finite-dimensional functionals of the distribution. The estimators are obtained by first constructing an efficient estimator for the functional when the parameter is known, and then replacing the parameter by an efficient estimator. We consider in particular estimation of expectations. AMS 2000 subject classifications: Primary 62G05, 62G20. Key words and phrases: Plug-in estimator, estimating equation, method of moments.