Testing Distributional Assumptions: A GMM Approach. Université de Montréal
, 2005
Abstract

Cited by 22 (1 self)
In this paper, we consider testing distributional assumptions. Special cases that we consider are members of Pearson’s family such as the normal, Student, gamma, beta and uniform distributions. The test statistics we consider are based on a set of moment conditions. This set coincides with the first moment conditions derived by Hansen and Scheinkman (1995) when one considers a continuous time model. By testing moment conditions, we treat in detail the parameter uncertainty problem that arises when the variable of interest is not observed but depends on estimators of unknown parameters. In particular, we derive moment tests that are robust against parameter uncertainty. We also consider the case where the variable of interest is serially correlated with unknown dependence, adopting a HAC approach for this purpose. This paper extends Bontemps and Meddahi (2005), who considered this approach for the normal case. Finite sample properties of our tests when the variable of interest is Student-distributed are derived through a comprehensive Monte Carlo study. An empirical application to a Student-GARCH model is presented. Keywords: Pearson’s distributions; Hansen-Scheinkman moment conditions; parameter uncertainty; serial correlation; HAC.
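For the normal case, the moment conditions in question reduce to zero expectations of Hermite polynomials. A minimal Python sketch of such a moment statistic (the polynomial orders are our illustrative choice, and the parameter-uncertainty and HAC corrections the paper develops are ignored here):

```python
import numpy as np

def hermite_moment_stat(x, orders=(3, 4)):
    """Moment-based normality statistic: under N(0,1), E[H_k(Z)] = 0 for the
    probabilists' Hermite polynomials, with Var(H_k(Z)) = k!.  The parameter
    uncertainty introduced by standardizing is ignored in this sketch."""
    z = (x - x.mean()) / x.std(ddof=1)
    n = len(z)
    h_prev, h = np.ones(n), z.copy()          # H_0 = 1, H_1 = z
    stat, fact = 0.0, 1.0
    for k in range(1, max(orders) + 1):
        fact *= k                             # fact = k!
        if k in orders:
            stat += n * h.mean() ** 2 / fact
        h_prev, h = h, z * h - k * h_prev     # H_{k+1} = z*H_k - k*H_{k-1}
    return stat                               # roughly chi^2(len(orders)) under H0

rng = np.random.default_rng(0)
s_norm = hermite_moment_stat(rng.standard_normal(2000))   # normal sample
s_heavy = hermite_moment_stat(rng.standard_t(3, 2000))    # heavy-tailed sample
print(s_norm, s_heavy)
```

With orders (3, 4) this is close in spirit to a skewness/kurtosis test; the heavy-tailed sample should yield a much larger statistic than the normal one.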
Nonlinear Dynamics of Daily Cash Prices
, 1992
Abstract

Cited by 20 (1 self)
Daily cash price changes are not normally distributed. Their empirical distributions have fat tails and most are skewed. In addition, they are not independent. Among the diffusion-jump, extended generalized autoregressive conditional heteroskedasticity (GARCH), and deterministic chaos processes, a GARCH process with residuals following a Student distribution is the most likely. Our GARCH model reduces leptokurtosis, removes nonlinear dependence, and provides a considerable improvement over the i.i.d. normal model. The GARCH process is not well calibrated because it cannot explain all the observed non-normality, but it does yield asymptotically valid hypothesis tests. Key words: conditional heteroskedasticity, deterministic chaos, diffusion-jump, leptokurtosis, market anomalies, skewness. Daily price data have been used to study lags between cash and futures prices (Oellermann and Farris; Brorsen, Bailey, and Richardson), analyze spatial price discovery (Spriggs, Kaylen, and Bessler), estimate optimal hedge ratios (Brown), and test for market anomalies such as seasonal variability (Anderson) or day-of-the-week effects (Chiang and Tapley). The majority of these studies assume price changes are normally distributed. In addition, they either assume price changes or error terms are independent, or they resort to nonparametric methods with typically lower power than parametric methods. The i.i.d. normal assumption is simple and convenient. However, research on commodities’ daily spot price changes shows these changes have leptokurtic distributions rather than a normal distribution. A leptokurtic distribution has more observations around the mean and in the tails than does a normal distribution. Leptokurtosis also appears in stock prices (Fama) and futures prices (Hudson, Leuthold, and Sarassoro; Hall, Brorsen, and Irwin; Gordon).
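The leptokurtosis the abstract describes is easy to reproduce with a small simulation. A Python sketch of a GARCH(1,1) process with Student-t innovations (the parameter values are illustrative, not estimates from the paper's data):

```python
import numpy as np

def simulate_garch_t(n, omega=0.05, alpha=0.10, beta=0.85, nu=5, seed=0):
    """Simulate a GARCH(1,1) process with Student-t innovations.  Parameter
    values are illustrative, not estimates from the paper's data."""
    rng = np.random.default_rng(seed)
    z = rng.standard_t(nu, n) * np.sqrt((nu - 2) / nu)   # unit-variance t shocks
    h = np.empty(n)
    r = np.empty(n)
    h[0] = omega / (1.0 - alpha - beta)                  # unconditional variance
    for t in range(n):
        if t > 0:
            h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        r[t] = np.sqrt(h[t]) * z[t]
    return r

r = simulate_garch_t(20000)
excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
print(excess_kurtosis)   # well above 0: fat tails, as in daily cash prices
```

Both the heavy-tailed innovations and the volatility clustering contribute to the excess kurtosis of the simulated returns.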
Misspecification Tests Based on Quantile Residuals, HECER Discussion Paper No. 124
, 2006
Cited by 1 (1 self)
An empirical likelihood ratio test for normality
, 2004
Abstract

Cited by 1 (1 self)
An empirical likelihood ratio (ELR) test for normality is derived in this paper. The sampling properties of the ELR test and of four other commonly used tests are provided and analyzed using Monte Carlo simulation. Power comparisons against a wide range of alternative distributions show that the ELR test is the most powerful of these tests in certain situations.
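The empirical likelihood machinery behind such a test can be illustrated on a single moment condition. A minimal Python sketch for the simplest case, H0: E[X] = 0 (the paper's ELR normality test combines several such constraints, which this sketch does not attempt):

```python
import numpy as np

def el_ratio_mean_zero(x, max_iter=100, tol=1e-10):
    """Empirical likelihood ratio statistic for H0: E[X] = 0.
    A one-moment illustration of the EL machinery; the implied multinomial
    weights are w_i = 1 / (n * (1 + lam * x_i))."""
    lam = 0.0
    for _ in range(max_iter):
        d = 1.0 + lam * x
        g = np.sum(x / d)                 # first-order condition in the multiplier
        gp = -np.sum((x / d) ** 2)        # its derivative
        step = g / gp
        lam -= step                       # Newton update
        if abs(step) < tol:
            break
    return 2.0 * np.sum(np.log1p(lam * x))   # approximately chi^2(1) under H0

rng = np.random.default_rng(1)
elr = el_ratio_mean_zero(rng.standard_normal(500))
print(elr)
```

Under the null the statistic behaves approximately like a chi-squared variable with one degree of freedom, so values far into the tail indicate a violated moment condition.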
Fisher Information Test of Normality
, 1998
Abstract

Cited by 1 (0 self)
An extremal property of normal distributions is that they have the smallest Fisher information for location among all distributions with the same variance. A new test of normality proposed by Terrell (1995) exploits this property by finding the maximum likelihood density constrained to have the Fisher information expected under normality, based on the sample variance. The test statistic is then constructed as a ratio of the resulting likelihood against that of normality. Since the asymptotic distribution of this test statistic is not available, the critical values for n = 3 to 200 have been obtained by simulation and smoothed using polynomials. An extensive power study shows that the test has superior power against distributions that are symmetric and leptokurtic (long-tailed). Another advantage of the test over existing ones is the direct depiction of any deviation from normality in the form of a density estimate. This is evident when the test is applied to several real data sets. Testing for normality in residuals is also investigated. Various approaches to dealing with residuals that may be heteroscedastic and correlated suffer from a loss of power. The approach with the fewest undesirable features is to use the Ordinary Least ...
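The extremal property can be checked numerically. A short Python sketch comparing the location Fisher information of two unit-variance densities (the Laplace comparison is our illustrative choice, not from the paper):

```python
import numpy as np

# Numerical check of the extremal property: among densities with variance 1,
# the normal minimizes the Fisher information for location,
# I(f) = integral of (f'(y))^2 / f(y) dy, attaining the bound 1/sigma^2 = 1.
grid = np.linspace(-15.0, 15.0, 200001)
dx = grid[1] - grid[0]

def fisher_info(f_vals, fp_vals):
    return np.sum(fp_vals ** 2 / f_vals) * dx

# standard normal: I = 1/sigma^2 = 1
f_n = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
I_normal = fisher_info(f_n, -grid * f_n)

# Laplace rescaled to unit variance (scale b = 1/sqrt(2)): I = 1/b^2 = 2
b = 1 / np.sqrt(2)
f_l = np.exp(-np.abs(grid) / b) / (2 * b)
I_laplace = fisher_info(f_l, -np.sign(grid) * f_l / b)

print(I_normal, I_laplace)   # the normal attains the smaller value
```

The numerical values should land close to the analytic ones (1 for the normal, 2 for the unit-variance Laplace), confirming that the normal attains the lower bound.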
Regression assumptions in clinical psychology research practice: a systematic review of common misconceptions
Abstract
Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. The selected journals were representative based on impact factor. Findings indicate that normality of the variables themselves, rather than of the residuals, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper appeals for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking.
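The residuals-versus-variables point is easy to demonstrate. A Python sketch with simulated data (the Shapiro-Wilk test here is our choice of normality check, not one prescribed by the review):

```python
import numpy as np
from scipy import stats

# Illustration (simulated data): the normality assumption concerns the
# residuals, so testing the raw outcome variable for normality misleads
# whenever the predictors are themselves non-normal.
rng = np.random.default_rng(2)
x = rng.exponential(1.0, 400)                    # skewed predictor
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, 400)    # errors ARE normal

_, p_raw = stats.shapiro(y)                      # y inherits the skewness of x

X = np.column_stack([np.ones_like(x), x])        # OLS fit
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
_, p_resid = stats.shapiro(y - X @ beta)         # the residuals, correctly tested

print(p_raw, p_resid)
```

Here a test on the raw outcome rejects normality decisively even though the regression errors are exactly normal; the residual-based test does not share that spurious rejection.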
Information-Theoretic Deconvolution Approximation of Treatment Effect Distribution
Abstract
This study proposes an information-theoretic deconvolution method to approximate the entire distribution of individual treatment effects. This method uses higher-order information implied by the standard average treatment effect estimator to approximate the underlying distribution of individual treatment effects using the method of maximum entropy density. With minimal assumptions, this method is able to approximate the treatment effect distribution even if it is entirely random or dependent on unobservable covariates. The asymptotic properties of the proposed estimator are discussed. This estimator minimizes the Kullback-Leibler distance between the underlying density and the approximations. Monte Carlo simulations and experiments with real data demonstrate the efficacy and flexibility of the proposed deconvolution estimator. The method is applied to data from the U.S. Job Training Partnership Act (JTPA) program to estimate the distribution of program impacts on individual earnings.
Tests for Symmetry of Regression Errors
, 2003
Abstract
In this essay we discuss how to test for symmetry of unobservable errors. Two easily computable test statistics based on the estimated regression residuals are developed. Alternatively, we consider various nonparametric procedures that have been proposed in the literature to test for symmetry of observations; they can be used in this framework by replacing the unknown errors with well-behaved residuals. Monte Carlo experiments show that, for the cases investigated, the tests under consideration have reasonably good size and power performance in small samples when critical values are obtained using a bootstrap procedure. We also establish the consistency of the bootstrap.
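A residual-based symmetry test with bootstrap critical values can be sketched as follows. Both the statistic (a standardized mean-median gap) and the sign-flip resampling scheme are illustrative simplifications of this general approach, not the paper's exact procedure:

```python
import numpy as np

def symmetry_pvalue(resid, n_boot=999, seed=0):
    """Bootstrap symmetry test on residuals.  The statistic is the standardized
    mean-median gap; the null is mimicked by resampling sign-flipped centered
    residuals.  Both choices are illustrative simplifications."""
    rng = np.random.default_rng(seed)
    n = len(resid)
    stat = lambda a: np.sqrt(n) * (a.mean() - np.median(a)) / a.std()
    t_obs = abs(stat(resid))
    r = resid - resid.mean()                      # center before symmetrizing
    count = 0
    for _ in range(n_boot):
        # sign flips force the resampled distribution to be symmetric
        rb = rng.choice(r, n, replace=True) * rng.choice([-1.0, 1.0], n)
        if abs(stat(rb)) >= t_obs:
            count += 1
    return (count + 1) / (n_boot + 1)             # bootstrap p-value

rng = np.random.default_rng(3)
p_symmetric = symmetry_pvalue(rng.standard_normal(300))   # symmetric errors
p_skewed = symmetry_pvalue(rng.exponential(1.0, 300))     # skewed errors
print(p_symmetric, p_skewed)
```

Skewed residuals should produce a small bootstrap p-value, while symmetric ones should not.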
Information-Theoretic Distribution Test with Application to Normality
Abstract
We derive general distribution tests based on the method of Maximum Entropy density. The proposed tests are derived by maximizing the differential entropy subject to moment constraints. By exploiting the equivalence between the Maximum Entropy and Maximum Likelihood estimates of the general exponential family, we can use the conventional Likelihood Ratio, Wald and Lagrange Multiplier testing principles in the maximum entropy framework. In particular, we use the Lagrange Multiplier method to derive tests for normality and their asymptotic properties. Monte Carlo evidence suggests that the proposed tests have desirable small sample properties and often outperform commonly used tests such as the Jarque-Bera test and the Kolmogorov-Smirnov (Lilliefors) test for normality. We show that the proposed tests can be extended to tests based on regression residuals and non-i.i.d. data in a straightforward manner. We apply the proposed tests to the residuals from a stochastic production frontier model and reject the normality hypothesis.
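The maximum entropy / maximum likelihood equivalence the abstract relies on can be verified in the simplest case, where fixing the first two moments recovers the normal. A Python sketch (the paper's tests add third- and fourth-order terms to this exponential family; all numbers here are simulated for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# The maximum entropy density with the first two moments fixed is the normal;
# equivalently, ML estimation in the exponential family
# p(y) proportional to exp(l1*y + l2*y^2) must recover
# l1 = mu/sigma^2 and l2 = -1/(2*sigma^2).
rng = np.random.default_rng(5)
x = rng.normal(2.0, 1.5, 4000)
mu, var = x.mean(), x.var()

grid = np.linspace(mu - 10 * np.sqrt(var), mu + 10 * np.sqrt(var), 4001)
dx = grid[1] - grid[0]

def avg_negloglik(lam):
    expo = lam[0] * grid + lam[1] * grid ** 2
    m = expo.max()
    log_z = np.log(np.sum(np.exp(expo - m)) * dx) + m   # stable log normalizer
    return -(lam[0] * mu + lam[1] * (x ** 2).mean()) + log_z

res = minimize(avg_negloglik, x0=[0.0, -0.1], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 2000})
print(res.x, [mu / var, -0.5 / var])   # fitted vs. closed-form multipliers
```

Because ML in an exponential family matches the sample moments exactly, the fitted multipliers should agree with the closed-form normal values up to optimizer and grid tolerance.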