Results 1 - 10 of 1,187
Comparing Predictive Accuracy
- Journal of Business and Economic Statistics, 13, 253-265, 1995
"... We propose and evaluate explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts. In contrast to previously developed tests, a wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic, and need not even be symmetri ..."
Abstract
-
Cited by 1346 (23 self)
- Add to MetaCart
We propose and evaluate explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts. In contrast to previously developed tests, a wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic, and need not even be symmetric), and forecast errors can be non-Gaussian, nonzero mean, serially correlated, and contemporaneously correlated. Asymptotic and exact finite sample tests are proposed, evaluated, and illustrated.
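As a concrete illustration of the kind of test proposed here, the following is a minimal Python sketch of the asymptotic version under squared-error loss (the function name and the rectangular-window variance estimate are my assumptions; any well-behaved loss differential could be substituted):

```python
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """Asymptotic test of equal forecast accuracy (illustrative helper).

    e1, e2: forecast-error arrays from the two competing forecasts.
    h: forecast horizon; autocovariances up to lag h-1 enter the
    long-run variance of the loss differential.
    """
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential (squared-error loss assumed)
    n = len(d)
    d_bar = d.mean()
    # Long-run variance via a rectangular window over lags 0..h-1.
    gammas = [((d[k:] - d_bar) * (d[: n - k] - d_bar)).sum() / n for k in range(h)]
    lrv = gammas[0] + 2.0 * sum(gammas[1:])
    dm = d_bar / np.sqrt(lrv / n)
    pval = 2.0 * stats.norm.sf(abs(dm))             # statistic is N(0, 1) under the null
    return dm, pval
```

Under the null of equal accuracy the statistic is asymptotically standard normal, which is why the loss function only needs to produce a well-behaved differential series rather than be quadratic or symmetric.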
Estimating standard errors in finance panel data sets: comparing approaches
- Review of Financial Studies, 2009
"... Abstract In both corporate finance and asset pricing empirical work, researchers are often confronted with panel data. In these data sets, the residuals may be correlated across firms and across time, and OLS standard errors can be biased. Historically, the two literatures have used different solut ..."
Abstract
-
Cited by 890 (7 self)
- Add to MetaCart
(Show Context)
In both corporate finance and asset pricing empirical work, researchers are often confronted with panel data. In these data sets, the residuals may be correlated across firms and across time, and OLS standard errors can be biased. Historically, the two literatures have used different solutions to this problem: corporate finance has relied on clustered standard errors, while asset pricing has used the Fama-MacBeth procedure to estimate standard errors. This paper examines the different methods used in the literature and explains when the different methods yield the same (and correct) standard errors and when they diverge. The intent is to provide intuition as to why the different approaches sometimes give different answers and to give researchers guidance on their use.
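To make the contrast concrete, here is a minimal Python sketch of the Fama-MacBeth procedure the abstract refers to (function and variable names are illustrative; the clustered-standard-error alternative would instead adjust the variance of a single pooled OLS fit):

```python
import numpy as np

def fama_macbeth(y, X, t_index):
    """Fama-MacBeth two-pass estimator (illustrative sketch).

    y: (N,) outcomes; X: (N, k) regressors; t_index: (N,) period labels.
    Runs a cross-sectional OLS each period, then reports the mean
    coefficient vector and the standard error of that mean computed
    from the time series of per-period estimates.
    """
    periods = np.unique(t_index)
    betas = []
    for t in periods:
        m = t_index == t
        b, *_ = np.linalg.lstsq(X[m], y[m], rcond=None)  # per-period OLS
        betas.append(b)
    betas = np.asarray(betas)                            # shape (T, k)
    beta_hat = betas.mean(axis=0)
    # Valid when residuals are correlated within a period (across firms)
    # but independent across periods -- the classic asset-pricing setting.
    se = betas.std(axis=0, ddof=1) / np.sqrt(len(periods))
    return beta_hat, se
```

The divergence the paper analyzes follows from this structure: the time series of per-period estimates absorbs cross-firm correlation within each date, but not persistent firm effects across dates.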
Bayesian Analysis of Stochastic Volatility Models
- 1994
"... this article is to develop new methods for inference and prediction in a simple class of stochastic volatility models in which logarithm of conditional volatility follows an autoregressive (AR) times series model. Unlike the autoregressive conditional heteroscedasticity (ARCH) and gener- alized ARCH ..."
Abstract
-
Cited by 601 (26 self)
- Add to MetaCart
The aim of this article is to develop new methods for inference and prediction in a simple class of stochastic volatility models in which the logarithm of conditional volatility follows an autoregressive (AR) time series model. Unlike the autoregressive conditional heteroscedasticity (ARCH) and generalized ARCH (GARCH) models [see Bollerslev, Chou, and Kroner (1992) for a survey of ARCH modeling], both the mean and log-volatility equations have separate error terms. The ease of evaluating the ARCH likelihood function and the ability of the ARCH specification to accommodate the time-varying volatility found in many economic time series have fostered an explosion in the use of ARCH models. The likelihood function for stochastic volatility models, by contrast, is difficult to evaluate, and hence these models have had limited empirical application.
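In standard notation for this model class (the symbols here are mine, chosen to match the description above), the two separate error terms appear as:

```latex
\begin{align*}
  y_t &= e^{h_t/2}\,\varepsilon_t, & \varepsilon_t &\sim \text{iid } N(0,1),\\
  h_t &= \mu + \phi\,(h_{t-1}-\mu) + \eta_t, & \eta_t &\sim \text{iid } N(0,\sigma_\eta^2).
\end{align*}
```

Because the log-volatility h_t is latent, evaluating the likelihood requires integrating over its entire path, which is exactly the difficulty noted above; in ARCH/GARCH, conditional volatility is a deterministic function of past observations, so the likelihood can be evaluated directly.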
A Simple Panel Unit Root Test in the Presence of Cross Section Dependence
- Journal of Applied Econometrics, 2006
"... A number of panel unit root tests that allow for cross section dependence have been proposed in the literature that use orthogonalization type procedures to asymptotically eliminate the cross dependence of the series before standard panel unit root tests are applied to the transformed series. In thi ..."
Abstract
-
Cited by 372 (16 self)
- Add to MetaCart
A number of panel unit root tests that allow for cross-section dependence have been proposed in the literature; these use orthogonalization-type procedures to asymptotically eliminate the cross dependence of the series before standard panel unit root tests are applied to the transformed series. In this paper we propose a simple alternative in which the standard ADF regressions are augmented with the cross-section averages of lagged levels and first differences of the individual series. New asymptotic results are obtained both for the individual cross-sectionally augmented ADF (CADF) statistics and for their simple averages. It is shown that the individual CADF statistics are asymptotically similar and do not depend on the factor loadings. The limit distribution of the average CADF statistic is shown to exist and its critical values are tabulated. Small-sample properties of the proposed test are investigated by Monte Carlo experiments. The proposed test is applied to a panel of 17 OECD real exchange rate series as well as to log real earnings of households in the PSID data.
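Spelled out, the augmented regression described above takes the following form in the simplest case with no additional lag terms (notation is mine):

```latex
% CADF regression for cross-section unit i, with \bar{y}_t = n^{-1}\sum_i y_{it}:
\Delta y_{it} = a_i + b_i\, y_{i,t-1} + c_i\, \bar{y}_{t-1} + d_i\, \Delta\bar{y}_t + e_{it}
```

The CADF statistic for unit i is the t-ratio on b_i, and the averaged test statistic is the simple mean of these t-ratios across units.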
Linear Regression Limit Theory for Nonstationary Panel Data
- Econometrica, 1999
"... This paper develops a regression limit theory for nonstationary panel data with large numbers of cross section Ž n. and time series Ž T. observations. The limit theory allows for both sequential limits, wherein T� � followed by n��, and joint limits where T, n�� simultaneously; and the relationship ..."
Abstract
-
Cited by 312 (22 self)
- Add to MetaCart
This paper develops a regression limit theory for nonstationary panel data with large numbers of cross-section (n) and time-series (T) observations. The limit theory allows for both sequential limits, wherein T → ∞ followed by n → ∞, and joint limits, where T, n → ∞ simultaneously; the relationship between these multidimensional limits is explored. The panel structures considered allow for no time-series cointegration, heterogeneous cointegration, homogeneous cointegration, and near-homogeneous cointegration. The paper explores the existence of long-run average relations between integrated panel vectors when there is no individual time-series cointegration and when there is heterogeneous cointegration. These relations are parameterized in terms of the matrix regression coefficient of the long-run average covariance matrix. In the case of homogeneous and near-homogeneous cointegrating panels, a panel fully modified regression estimator is developed and studied. The limit theory enables us to test hypotheses about the long-run average parameters both within and between subgroups of the full population.
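The two limit concepts mentioned above can be written compactly (standard notation, added here for clarity):

```latex
% Sequential limit: T grows first, then n,
\lim_{n \to \infty} \Bigl( \lim_{T \to \infty} X_{n,T} \Bigr),
% versus the joint limit, where both indices diverge together,
% possibly under a rate restriction such as n/T \to 0:
\lim_{(T,\,n) \to \infty} X_{n,T}.
```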
On the Detection and Estimation of Long Memory in Stochastic Volatility
- 1995
"... Recent studies have suggested that stock markets' volatility has a type of long-range dependence that is not appropriately described by the usual Generalized Autoregressive Conditional Heteroskedastic (GARCH) and Exponential GARCH (EGARCH) models. In this paper, different models for describing ..."
Abstract
-
Cited by 214 (6 self)
- Add to MetaCart
Recent studies have suggested that stock market volatility exhibits a type of long-range dependence that is not appropriately described by the usual Generalized Autoregressive Conditional Heteroskedastic (GARCH) and Exponential GARCH (EGARCH) models. In this paper, different models for describing this long-range dependence are examined, and the properties of a Long-Memory Stochastic Volatility (LMSV) model, constructed by incorporating an Autoregressive Fractionally Integrated Moving Average (ARFIMA) process in a stochastic volatility scheme, are discussed. Strongly consistent estimators for the parameters of this LMSV model are obtained by maximizing the spectral likelihood. The distribution of the estimators is analyzed by means of a Monte Carlo study. The LMSV model is applied to daily stock market returns, providing an improved description of the volatility behavior. In order to assess the empirical relevance of this approach, tests for long-memory volatility are described and applied to an e...
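A compact way to write the LMSV construction described above, in the simplest ARFIMA(0, d, 0) case (notation assumed; L is the lag operator):

```latex
y_t = \sigma\, e^{h_t/2}\, \varepsilon_t, \qquad (1-L)^d\, h_t = \eta_t, \qquad 0 < d < \tfrac{1}{2},
```

where ε_t and η_t are independent noise sequences, and 0 < d < 1/2 yields stationary long memory in the log-volatility.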
Inattentive consumers
- Journal of Monetary Economics, 2006
"... This paper studies the consumption decisions of agents who face costs of acquiring, absorbing and processing information. These consumers rationally choose to only sporadically update their information and re-compute their optimal consumption plans. In between updating dates, they remain inattentive ..."
Abstract
-
Cited by 187 (13 self)
- Add to MetaCart
This paper studies the consumption decisions of agents who face costs of acquiring, absorbing, and processing information. These consumers rationally choose to update their information and re-compute their optimal consumption plans only sporadically. In between updating dates, they remain inattentive. This behavior implies that news disperses slowly throughout the population, so events have a gradual and delayed effect on aggregate consumption. The model predicts that aggregate consumption adjusts slowly to shocks and is able to explain the excess sensitivity and excess smoothness puzzles. In addition, individual consumption is sensitive to ordinary and unexpected past news, but not to extraordinary or predictable events. The model further predicts that some people rationally choose not to plan, live hand-to-mouth, and save less, while other people sporadically update their plans; the longer these plans last, the more they save. Evidence using U.S. aggregate and microeconomic data generally supports these predictions.
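As a toy illustration of the diffusion mechanism (not the paper's model: the updating interval, population size, and staggering scheme here are all assumptions), staggered re-planning dates make a one-off shock show up in aggregate consumption only gradually:

```python
import numpy as np

rng = np.random.default_rng(0)

D, T, N = 12, 24, 10_000                     # updating interval, horizon, consumers (assumed)
first_update = rng.integers(0, D, size=N)    # staggered re-planning dates

# A one-off shock arrives at t = 0; a consumer's plan reflects it only
# after that consumer's next updating date.
share_adjusted = [(t >= first_update).mean() for t in range(T)]
print([round(s, 2) for s in share_adjusted])  # ramps from ~1/D up to 1 over D periods
```

The aggregate response ramps up over D periods rather than jumping at the shock date, which is the slow-adjustment prediction described above.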