Results 1–10 of 23
Quasi-Likelihood Models and Optimal Inference
 Ann. Statist.
Abstract

Cited by 16 (5 self)
Consider an ergodic Markov chain on the real line, with parametric models for the conditional mean and variance of the transition distribution. Such a setting is an instance of a quasi-likelihood model. The customary estimator for the parameter is the maximum quasi-likelihood estimator. It is not efficient, but as good as the best estimator that ignores the parametric model for the conditional variance. We construct two efficient estimators. One is a convex combination of solutions of two estimating equations, the other a weighted nonlinear one-step least squares estimator, with weights involving predictors for the third and fourth centered conditional moments of the transition distribution. Additional restrictions on the model can lead to further improvement. We illustrate this with an autoregressive model whose error variance is related to the autoregression parameter.
1 Introduction. According to Wedderburn (1974), a quasi-likelihood model is defined by a relation between mean and v...
Reversible Markov chains and optimality of symmetrized empirical estimators
 Bernoulli
, 1998
Abstract

Cited by 11 (8 self)
Suppose we want to estimate the expectation of a function of two arguments under the stationary distribution of two successive observations of a reversible Markov chain. Then the usual empirical estimator can be improved by symmetrizing. We show that the symmetrized estimator is efficient. We point out applications to discretely observed continuous-time processes. The proof is based on a result for general Markov chain models which can be used to characterize efficient estimators in any model defined by restrictions on the stationary distribution of a single or two successive observations.
1 Introduction. Suppose we observe X_0, ..., X_n from an ergodic Markov chain with unknown transition distribution Q(x, dy) and invariant distribution π(dx). We want to estimate the expectation of a function f(x, y) under the joint stationary distribution (π ⊗ Q)(dx, dy) = π(dx) Q(x, dy) of two successive observations. Greenwood and Wefelmeyer (1995) show that the empirical estimator E_n f = ...
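The symmetrization idea in this abstract is simple enough to sketch directly: under reversibility the stationary law of a pair (X_i, X_{i+1}) is symmetric, so evaluating f on both orderings of each pair and averaging gives another unbiased estimator. A minimal sketch, with function names chosen here for illustration:

```python
import numpy as np

def empirical_estimator(x, f):
    """Usual empirical estimator of E f(X_0, X_1) along a chain path x."""
    return float(np.mean([f(a, b) for a, b in zip(x[:-1], x[1:])]))

def symmetrized_estimator(x, f):
    """For a reversible chain the pair law is symmetric, so averaging f
    over both orderings of each successive pair is also unbiased (and,
    per the abstract, efficient)."""
    return float(np.mean([(f(a, b) + f(b, a)) / 2
                          for a, b in zip(x[:-1], x[1:])]))
```

For a symmetric f the two estimators coincide; for an antisymmetric f such as f(a, b) = a - b, the symmetrized estimator is identically zero, which is the known value of the expectation under reversibility.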
Estimating invariant laws of linear processes by U-statistics
Abstract

Cited by 11 (10 self)
Suppose we observe an invertible linear process with independent mean zero innovations, and with coefficients depending on a finite-dimensional...
The information in the marginal law of a Markov chain
, 1998
Abstract

Cited by 11 (9 self)
If we have a parametric model for the invariant distribution of a Markov chain but cannot or do not want to use any information about the transition distribution (except, perhaps, that the chain is reversible), what, then, is the best use we can make of the observations? It is not optimal to proceed as if the observations were i.i.d. We determine a lower bound for the asymptotic variance of estimators and construct efficient estimators. The results apply in particular to discretely observed diffusions.
1 Introduction. Suppose we have a parametric diffusion model and observe one path at a large number of discrete time points. Then the observations form a Markov chain. The transition distribution is often difficult to calculate, but the invariant distribution is usually tractable. Recent references on estimation in such models are Bibby and Sørensen (1995), Pedersen (1995), Kessler and Sørensen (1995), Aït-Sahalia (1996a, 1996b, 1997) and Elerian, Chib and Shephard (1998). If we canno...
Adaptive estimators for parameters of the autoregression function of a Markov chain
Abstract

Cited by 11 (8 self)
Suppose we observe an ergodic Markov chain on the real line, with a parametric model for the autoregression function, i.e. the conditional mean of the transition distribution. If one specifies, in addition, a parametric model for the conditional variance, one can define a simple estimator for the parameter, the maximum quasi-likelihood estimator. It is robust against misspecification of the conditional variance, but not efficient. We construct an estimator which is adaptive in the sense that it is efficient if the conditional variance is misspecified, and asymptotically as good as the maximum quasi-likelihood estimator if the conditional variance is correctly specified. The adaptive estimator is a weighted nonlinear least squares estimator, with weights given by predictors for the conditional variance.
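The weighted least squares idea can be made concrete in the simplest case. A minimal sketch, assuming a linear autoregression m(x, θ) = θx (so the weighted problem has a closed form) and a user-supplied variance predictor; both the AR(1) form and the interface are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def weighted_ls_ar1(x, var_hat):
    """Weighted least squares for the autoregression m(x, theta) = theta * x.

    var_hat(x) is an assumed predictor of the conditional variance at x;
    observations get inverse-variance weights, as in the abstract's
    weighted nonlinear least squares estimator (here linear, so closed form).
    """
    x0 = np.asarray(x[:-1], dtype=float)  # predictors X_i
    x1 = np.asarray(x[1:], dtype=float)   # responses  X_{i+1}
    w = 1.0 / np.array([var_hat(v) for v in x0])  # inverse-variance weights
    return float(np.sum(w * x0 * x1) / np.sum(w * x0 * x0))
```

With constant weights this reduces to ordinary least squares; a variance predictor that tracks the true conditional variance is what yields the adaptivity claimed in the abstract.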
Improved estimators for constrained Markov chain models
Abstract

Cited by 7 (7 self)
Suppose we observe an ergodic Markov chain and know that the stationary law of one or two successive observations fulfils a linear constraint. We show how to improve given estimators exploiting this knowledge, and prove that the best of these estimators is efficient.
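One generic way to exploit a known linear constraint, say E g = 0 for a known function g of one or two successive observations, is a control-variate correction: subtract the estimated regression of f on g from the empirical mean of f. This is a standard device sketched here for illustration, not the paper's specific construction:

```python
import numpy as np

def constraint_improved_mean(f_vals, g_vals):
    """Improve the empirical mean of f using a function g known to satisfy
    E g = 0 (the linear constraint). Control-variate correction:
    subtract beta * mean(g), with beta the estimated slope of f on g."""
    f_vals = np.asarray(f_vals, dtype=float)
    g_vals = np.asarray(g_vals, dtype=float)
    # matching ddof so beta is the sample regression coefficient
    beta = np.cov(f_vals, g_vals)[0, 1] / np.var(g_vals, ddof=1)
    return float(f_vals.mean() - beta * g_vals.mean())
```

When f and g are highly correlated the correction removes most of the variance of the empirical mean; when they are uncorrelated it changes nothing asymptotically.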
Outperforming the Gibbs sampler empirical estimator for nearest neighbor random fields
, 1996
Abstract

Cited by 6 (3 self)
Given a Markov chain sampling scheme, does the standard empirical estimator make best use of the data? We show that this is not so and construct better estimators. We restrict attention to nearest neighbor random fields and to Gibbs samplers with deterministic sweep, but our approach applies to any sampler that uses reversible variable-at-a-time updating with deterministic sweep. The structure of the transition distribution of the sampler is exploited to construct further empirical estimators that are combined with the standard empirical estimator to reduce asymptotic variance. The extra computational cost is negligible. When the random field is spatially homogeneous, symmetrizations of our estimator lead to further variance reduction. The performance of the estimators is evaluated in a simulation study of the Ising model.
Maximum likelihood estimator and Kullback-Leibler information in misspecified Markov chain models
 Teor. Veroyatnost. i Primenen.
Abstract

Cited by 6 (4 self)
Suppose we have specified a parametric model for the transition distribution of a Markov chain, but that the true transition distribution does not belong to the model. Then the maximum likelihood estimator estimates the parameter which maximizes the Kullback-Leibler information between the true transition distribution and the model. We prove that the maximum likelihood estimator is asymptotically efficient in a nonparametric sense if the true transition distribution is unknown.
1 Introduction. Suppose we observe X_0, ..., X_n from an ergodic Markov chain on an arbitrary state space. We have specified a parametric model Q_ϑ(x, dy) for the transition distribution, and an initial distribution. Consider the following two situations:
1. We believe, erroneously, that the model is correct, and use the maximum likelihood estimator for estimating the parameter.
2. We know that the model is incorrect, and want to fit a transition distribution from the model to the true transition ...
Estimating Joint Distributions Of Markov Chains
 IN PREPARATION
, 1998
Abstract

Cited by 5 (3 self)
Suppose we observe a stationary Markov chain with unknown transition distribution. The empirical estimator for the expectation of a function of two successive observations is known to be efficient. For reversible Markov chains, an appropriate symmetrization is efficient. For functions of more than two arguments, these estimators cease to be efficient. We determine the influence function of efficient estimators of expectations of functions of several observations, both for completely unknown and for reversible Markov chains. We construct simple efficient estimators in both cases.
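The baseline this abstract starts from, the empirical estimator for a function of k successive observations, averages f over all overlapping length-k windows of the path. A minimal sketch of that baseline (which the paper shows is inefficient for k > 2), with the interface chosen here for illustration:

```python
import numpy as np

def empirical_multistep(x, f, k):
    """Empirical estimator of E f(X_0, ..., X_{k-1}) from overlapping
    length-k windows of the observed path x."""
    return float(np.mean([f(*x[i:i + k]) for i in range(len(x) - k + 1)]))
```

For k = 2 this is the efficient pairwise empirical estimator of the earlier abstracts; for larger k the paper's corrected estimators have strictly smaller asymptotic variance.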
Empirical estimators for semi-Markov processes
 Math. Meth. Statist
, 1996
Abstract

Cited by 5 (3 self)
A semi-Markov process stays in state x for a time s and then jumps to state y according to a transition distribution Q(x, dy, ds). A statistical model is described by a family of such transition distributions. We give conditions for a nonparametric version of local asymptotic normality as the observation time tends to infinity. Then we introduce 'empirical' estimators for linear functionals of the distribution π(dx) Q(x, dy, ds), with π denoting the invariant distribution of the embedded Markov chain, and characterize the empirical estimators which are efficient for a given model. We discuss efficiency of several classical estimators, in particular the jump frequency, the proportion of visits to a given set, the proportion of time spent in a set, and an estimator for Q(x, {y} × [0, t]) suggested by Moore and Pyke (1968) for countable state space.
1 Introduction. A semi-Markov process Y = (Y_t)_{t≥0} is a process on the time interval [0, ∞), with values in some arbitrary state spac...
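The classical estimators named in this abstract are easy to write down from the embedded data (visited states and sojourn times). A minimal sketch, with names and interface chosen here for illustration:

```python
def semi_markov_empiricals(states, sojourns, in_set):
    """Classical empirical estimators from a semi-Markov path:
    jump frequency, proportion of visits to a set, proportion of time in it.

    states[i] is the i-th visited state, sojourns[i] the time spent there;
    in_set(s) says whether state s belongs to the set of interest.
    """
    total_time = sum(sojourns)          # observation time
    n = len(states)                     # number of visits (jumps)
    jump_frequency = n / total_time
    visit_prop = sum(1 for s in states if in_set(s)) / n
    time_prop = sum(t for s, t in zip(states, sojourns)
                    if in_set(s)) / total_time
    return jump_frequency, visit_prop, time_prop
```

Note that the proportion of visits and the proportion of time spent generally differ: a state visited rarely but held for long sojourns dominates the time proportion, not the visit proportion.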