Results 1 - 10 of 422
A Long-Memory Property of Stock Market Returns and a New Model
- Journal of Empirical Finance, 1993
"... A ‘long memory ’ property of stock market returns is investigated in this paper. It is found that not only there is substantially more correlation between absolute returns than returns them-selves, but the power transformation of the absolute return lrfl ” also has quite high autocorrel-ation for lo ..."
Abstract
-
Cited by 631 (18 self)
- Add to MetaCart
A ‘long memory’ property of stock market returns is investigated in this paper. It is found that not only is there substantially more correlation between absolute returns than between returns themselves, but the power transformation of the absolute return, |r_t|^d, also has quite high autocorrelation for long lags. It is possible to characterize |r_t|^d as ‘long memory’, and this property is strongest when d is around 1. This result appears to argue against ARCH-type specifications based upon squared returns. But our Monte Carlo study shows that both ARCH-type models based on squared returns and those based on absolute returns can produce this property. A new general class of models is proposed which allows the power δ of the heteroskedasticity equation to be estimated from the data.
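As a rough illustration of the paper's central diagnostic, the sketch below computes sample autocorrelations of |r_t|^d for several powers d. A simulated GARCH(1,1) series stands in for actual stock returns, so the parameters, seed, and helper functions are illustrative assumptions, not the paper's data or code.

```python
# Sample autocorrelation of |r_t|^d for several powers d (the
# "Taylor effect" diagnostic). Returns are simulated from a
# GARCH(1,1); all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_garch(n, omega=0.05, alpha=0.10, beta=0.85):
    """Simulate r_t = sigma_t * z_t with GARCH(1,1) variance dynamics."""
    r = np.empty(n)
    var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * r[t] ** 2 + beta * var
    return r

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

r = simulate_garch(10_000)
for d in (0.25, 0.5, 1.0, 1.5, 2.0):
    acf = [autocorr(np.abs(r) ** d, lag) for lag in (1, 5, 20, 100)]
    print(f"d = {d:4.2f}   acf at lags 1, 5, 20, 100:", np.round(acf, 3))
```

On data like these the autocorrelations of the powered absolute returns stay positive out to long lags, echoing the property the abstract describes.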
Stochastic volatility: likelihood inference and comparison with ARCH models
- Review of Economic Studies, 1998
"... In this paper, Markov chain Monte Carlo sampling methods are exploited to provide a unified, practical likelihood-based framework for the analysis of stochastic volatility models. A highly effective method is developed that samples all the unobserved volatilities at once using an approximating offse ..."
Abstract
-
Cited by 592 (40 self)
- Add to MetaCart
In this paper, Markov chain Monte Carlo sampling methods are exploited to provide a unified, practical likelihood-based framework for the analysis of stochastic volatility models. A highly effective method is developed that samples all the unobserved volatilities at once using an approximating offset mixture model, followed by an importance reweighting procedure. This approach is compared with several alternative methods using real data. The paper also develops simulation-based methods for filtering, likelihood evaluation and model failure diagnostics. The issue of model choice using non-nested likelihood ratios and Bayes factors is also investigated. These methods are used to compare the fit of stochastic volatility and GARCH models. All the procedures are illustrated in detail.
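To make the key transformation concrete: with r_t = exp(h_t/2) z_t, taking logs of the squared return gives log r_t^2 = h_t + log z_t^2, a linear state-space form whose non-Gaussian error the paper approximates with an offset mixture of normals. The sketch below verifies the known moments of that error on a simulated series; the AR(1) volatility parameters are illustrative assumptions.

```python
# Linearize a basic stochastic volatility model and check that the
# observation error log z_t^2 has the log-chi-squared(1) moments
# (mean digamma(1/2) + log 2, variance pi^2 / 2). Parameters are
# illustrative.
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(1)
n, mu, phi, sigma_eta = 5000, -1.0, 0.95, 0.2

h = np.empty(n)  # latent log-volatility, stationary AR(1)
h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal()
for t in range(1, n):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()

r = np.exp(h / 2) * rng.standard_normal(n)  # SV returns
y_star = np.log(r**2)                       # linearized observation
eps = y_star - h                            # equals log z_t^2

print("sample mean/variance of log z^2:", eps.mean(), eps.var())
print("theoretical values:             ", digamma(0.5) + np.log(2), polygamma(1, 0.5))
```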
Answering the Skeptics: Yes, Standard Volatility Models Do Provide Accurate Forecasts
"... Volatility permeates modern financial theories and decision making processes. As such, accurate measures and good forecasts of future volatility are critical for the implementation and evaluation of asset and derivative pricing theories as well as trading and hedging strategies. In response to this, ..."
Abstract
-
Cited by 561 (45 self)
- Add to MetaCart
Volatility permeates modern financial theories and decision making processes. As such, accurate measures and good forecasts of future volatility are critical for the implementation and evaluation of asset and derivative pricing theories as well as trading and hedging strategies. In response to this, a voluminous literature has emerged for modeling the temporal dependencies in financial market volatility at the daily and lower frequencies using ARCH and stochastic volatility type models. Most of these studies find highly significant in-sample parameter estimates and pronounced intertemporal volatility persistence. Meanwhile, when judged by standard forecast evaluation criteria, based on the squared or absolute returns over daily or longer forecast horizons, standard volatility models provide seemingly poor forecasts. The present paper demonstrates that, contrary to this contention, in empirically realistic situations the models actually produce strikingly accurate interdaily forecasts f...
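A small Monte Carlo in the spirit of this argument appears below: returns are simulated from a correctly specified GARCH(1,1), and two volatility proxies are regressed on the model's conditional variance. The number of intraday intervals (m = 78) and the GARCH parameters are illustrative assumptions.

```python
# Even exact conditional-variance forecasts look poor against squared
# daily returns, because r_t^2 is a noisy volatility proxy; realized
# volatility built from intraday returns reveals their accuracy.
import numpy as np

rng = np.random.default_rng(2)
n_days, m = 5000, 78
omega, alpha, beta = 0.05, 0.10, 0.85

sig2 = np.empty(n_days)  # true conditional variance (the "forecast")
r2 = np.empty(n_days)    # squared daily return
rv = np.empty(n_days)    # realized volatility from m intraday returns
var = omega / (1 - alpha - beta)
for t in range(n_days):
    sig2[t] = var
    intraday = np.sqrt(var / m) * rng.standard_normal(m)
    r = intraday.sum()
    r2[t], rv[t] = r**2, np.sum(intraday**2)
    var = omega + alpha * r**2 + beta * var

def r_squared(proxy, forecast):
    """R^2 from regressing a volatility proxy on the forecast."""
    return np.corrcoef(proxy, forecast)[0, 1] ** 2

print("R^2 against squared returns:    ", round(r_squared(r2, sig2), 3))
print("R^2 against realized volatility:", round(r_squared(rv, sig2), 3))
```

The first R^2 is low even though the forecasts are exact by construction, while the second is far higher, which is the contrast the abstract turns on.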
Modeling and Forecasting Realized Volatility
2002
"... this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized volatilities are clearly right-skewed, the distributions of the logarithms of realized volatilities are a ..."
Abstract
-
Cited by 549 (50 self)
- Add to MetaCart
this paper is built. First, although raw returns are clearly leptokurtic, returns standardized by realized volatilities are approximately Gaussian. Second, although the distributions of realized volatilities are clearly right-skewed, the distributions of the logarithms of realized volatilities are approximately Gaussian. Third, the long-run dynamics of realized logarithmic volatilities are well approximated by a fractionally-integrated long-memory process. Motivated by the three ABDL empirical regularities, we proceed to estimate and evaluate a multivariate model for the logarithmic realized volatilities: a fractionally-integrated Gaussian vector autoregression (VAR). Importantly, our approach explicitly permits measurement errors in the realized volatilities. Comparing the resulting volatility forecasts to those obtained from currently popular daily volatility models and more complicated high-frequency models, we find that our simple Gaussian VAR generally produces superior forecasts. Furthermore, we show that, given the theoretically motivated and empirically plausible assumption of normally distributed returns conditional on the realized volatilities, the resulting lognormal-normal mixture forecast distribution provides conditionally well-calibrated density forecasts of returns, from which we obtain accurate estimates of conditional return quantiles. In the remainder of this paper, we proceed as follows. We begin in section 2 by formally developing the relevant quadratic variation theory within a standard frictionless arbitrage-free multivariate pricing environment. In section 3 we discuss the practical construction of realized volatilities from high-frequency foreign exchange returns. Next, in section 4 we summarize the salient distributional features of r...
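The first two regularities are easy to reproduce on simulated data. In the sketch below a toy lognormal stochastic-volatility model (AR(1) log variance, 288 intraday intervals per day) stands in for the ABDL dataset; all parameters are illustrative assumptions.

```python
# Returns standardized by realized volatility are near-Gaussian, and
# log realized volatility is far less skewed than the level. Data are
# simulated from a toy lognormal SV model; parameters illustrative.
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(3)
n_days, m = 2000, 288

h = np.empty(n_days)  # daily log-variance, AR(1)
h[0] = -1.0
for t in range(1, n_days):
    h[t] = -1.0 + 0.98 * (h[t - 1] + 1.0) + 0.15 * rng.standard_normal()

intraday = np.exp(h[:, None] / 2) / np.sqrt(m) * rng.standard_normal((n_days, m))
ret = intraday.sum(axis=1)          # daily returns
rv = np.sum(intraday**2, axis=1)    # daily realized variance

print("excess kurtosis, raw returns:         ", round(kurtosis(ret), 2))
print("excess kurtosis, standardized returns:", round(kurtosis(ret / np.sqrt(rv)), 2))
print("skewness, realized volatility:        ", round(skew(np.sqrt(rv)), 2))
print("skewness, log realized volatility:    ", round(skew(np.log(rv) / 2), 2))
```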
Image denoising using a scale mixture of Gaussians in the wavelet domain
- IEEE Transactions on Image Processing, 2003
"... We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vecto ..."
Abstract
-
Cited by 513 (17 self)
- Add to MetaCart
(Show Context)
We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
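A scalar sketch of this estimator, under stated assumptions, appears below: a coefficient x = sqrt(z)·u with Gaussian u and a hidden positive multiplier z, observed in additive Gaussian noise. The discretized lognormal-style prior on z and the variances are illustrative; the paper applies the same construction to vectors of neighboring wavelet coefficients.

```python
# Bayes least squares under a Gaussian scale mixture prior, scalar
# case: E[x|y] is a posterior-weighted average of Wiener estimates
# over a grid of hidden-multiplier values. Prior and variances are
# illustrative assumptions.
import numpy as np

def bls_estimate(y, z_grid, p_z, var_u=1.0, var_w=0.1):
    """E[x|y] = sum_z p(z|y) * E[x|y,z], with E[x|y,z] a Wiener shrinkage of y."""
    var_x = z_grid * var_u                    # variance of x given z
    var_y = var_x + var_w                     # variance of y given z
    lik = np.exp(-0.5 * y[:, None] ** 2 / var_y) / np.sqrt(var_y)
    post = lik * p_z
    post /= post.sum(axis=1, keepdims=True)   # p(z|y) on the grid
    wiener = var_x / var_y                    # E[x|y,z] = wiener * y
    return (post * wiener).sum(axis=1) * y

rng = np.random.default_rng(4)
z_grid = np.exp(np.linspace(-4.0, 4.0, 33))   # discretized multiplier values
p_z = np.exp(-0.5 * np.log(z_grid) ** 2)
p_z /= p_z.sum()                              # lognormal-style prior weights

z = rng.choice(z_grid, size=5000, p=p_z)
x = np.sqrt(z) * rng.standard_normal(5000)    # GSM coefficient
y = x + np.sqrt(0.1) * rng.standard_normal(5000)  # noisy observation

x_hat = bls_estimate(y, z_grid, p_z)
print("MSE of raw observation:", round(np.mean((y - x) ** 2), 4))
print("MSE of BLS estimate:   ", round(np.mean((x_hat - x) ** 2), 4))
```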
Evaluating Interval Forecasts
- International Economic Review, 1997
"... This paper is intended to address the deficiency by clearly defining what is meant by a "good" interval forecast, and describing how to test if a given interval forecast deserves the label "good". One of the motivations of Engle's (1982) classic paper was to form dynamic int ..."
Abstract
-
Cited by 364 (11 self)
- Add to MetaCart
This paper is intended to address the deficiency by clearly defining what is meant by a "good" interval forecast, and describing how to test if a given interval forecast deserves the label "good". One of the motivations of Engle's (1982) classic paper was to form dynamic interval forecasts around point predictions. The insight was that the intervals should be narrow in tranquil times and wide in volatile times, so that the occurrences of observations outside the interval forecast would be spread out over the sample and not come in clusters. An interval forecast that fails to account for higher-order dynamics may be correct on average (have correct unconditional coverage), but in any given period it will have incorrect conditional coverage characterized by clustered outliers. These concepts will be defined precisely below, and tests for correct conditional coverage are suggested. Chatfield (1993) emphasizes that model misspecification is a much more important source of poor interval forecasting than is simple estimation error. Thus, our testing criterion and the tests of this criterion are model free. In this regard, the approach taken here is similar to the one taken by Diebold and Mariano (1995). This paper can also be seen as establishing a formal framework for the ideas suggested in Granger, White and Kamstra (1989). Recently, financial market participants have shown increasing interest in interval forecasts as measures of uncertainty. Thus, we apply our methods to the interval forecasts provided by J.P. Morgan (1995). Furthermore, the so-called "Value-at-Risk" measures suggested for risk measurement correspond to tail forecasts, i.e., one-sided interval forecasts of portfolio returns. Lopez (1996) evaluates these types of forecasts applying the procedures develo...
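The hit-sequence logic sketched above translates directly into likelihood-ratio statistics: one for unconditional coverage, one for independence of the hits, and their sum for conditional coverage. The code below, written in the spirit of those tests, is an illustrative sketch on a simulated hit series, not the paper's implementation.

```python
# LR tests for interval-forecast evaluation: hits (observations
# falling outside the interval) should occur at the nominal frequency
# and be serially independent. Simulated hit series is illustrative.
import numpy as np
from scipy.special import xlogy  # handles 0 * log(0) = 0
from scipy.stats import chi2

def lr_unconditional(hits, p):
    """LR test that the hit rate equals the nominal miss rate p."""
    n1 = hits.sum(); n0 = hits.size - n1
    pi = n1 / hits.size
    ll0 = xlogy(n0, 1 - p) + xlogy(n1, p)
    ll1 = xlogy(n0, 1 - pi) + xlogy(n1, pi)
    return 2 * (ll1 - ll0)

def lr_independence(hits):
    """LR test of first-order serial independence of the hit sequence."""
    prev, cur = hits[:-1], hits[1:]
    n = np.array([[np.sum((prev == i) & (cur == j)) for j in (0, 1)]
                  for i in (0, 1)], dtype=float)
    p01, p11 = n[0, 1] / n[0].sum(), n[1, 1] / n[1].sum()
    pi = (n[0, 1] + n[1, 1]) / n.sum()
    ll1 = (xlogy(n[0, 0], 1 - p01) + xlogy(n[0, 1], p01)
           + xlogy(n[1, 0], 1 - p11) + xlogy(n[1, 1], p11))
    ll0 = xlogy(n[:, 0].sum(), 1 - pi) + xlogy(n[:, 1].sum(), pi)
    return 2 * (ll1 - ll0)

rng = np.random.default_rng(5)
hits = (rng.random(1000) < 0.05).astype(int)  # hits from a well-calibrated 95% interval
lr_cc = lr_unconditional(hits, 0.05) + lr_independence(hits)
print("LR_cc =", round(lr_cc, 3), "  p-value =", round(float(chi2.sf(lr_cc, df=2)), 3))
```

A clustered hit sequence with the right average hit rate passes the first test but fails the second, which is exactly the distinction between unconditional and conditional coverage drawn above.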
Econometric analysis of realized volatility and its use in estimating stochastic volatility models
2002
The distribution of realized exchange rate volatility
- Journal of the American Statistical Association, 2001
"... Using high-frequency data on deutschemark and yen returns against the dollar, we construct model-free estimates of daily exchange rate volatility and correlation that cover an entire decade. Our estimates, termed realized volatilities and correlations, are not only model-free, but also approximatel ..."
Abstract
-
Cited by 333 (29 self)
- Add to MetaCart
Using high-frequency data on deutschemark and yen returns against the dollar, we construct model-free estimates of daily exchange rate volatility and correlation that cover an entire decade. Our estimates, termed realized volatilities and correlations, are not only model-free, but also approximately free of measurement error under general conditions, which we discuss in detail. Hence, for practical purposes, we may treat the exchange rate volatilities and correlations as observed rather than latent. We do so, and we characterize their joint distribution, both unconditionally and conditionally. Noteworthy results include a simple normality-inducing volatility transformation, high contemporaneous correlation across volatilities, high correlation between correlation and volatilities, pronounced and persistent dynamics in volatilities and correlations, evidence of long-memory dynamics in volatilities and correlations, and remarkably precise scaling laws under temporal aggregation.
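A minimal sketch of this model-free construction: daily realized variances and correlations built by summing products of high-frequency returns. The bivariate Gaussian intraday model with a shared volatility factor is an illustrative stand-in for the deutschemark and yen series.

```python
# Daily realized covariance matrices from simulated intraday returns
# for two series; a shared volatility factor loosely mimics the
# co-movement of volatilities noted in the abstract. Illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_days, m, rho = 500, 288, 0.6
chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

rvar = np.empty((n_days, 2))   # daily realized variances
rcorr = np.empty(n_days)       # daily realized correlations
for t in range(n_days):
    common = rng.standard_normal()  # volatility factor shared by both series
    vol = np.exp(0.4 * common + 0.1 * rng.standard_normal(2) - 1.0) / np.sqrt(m)
    x = rng.standard_normal((m, 2)) @ chol.T * vol  # correlated intraday returns
    cov = x.T @ x                                   # realized covariance matrix
    rvar[t] = np.diag(cov)
    rcorr[t] = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

print("mean realized correlation:        ", round(rcorr.mean(), 3))
print("correlation of log realized vols: ", round(np.corrcoef(np.log(rvar.T))[0, 1], 3))
```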