Results 1 - 10 of 364
Strictly Proper Scoring Rules, Prediction, and Estimation
, 2007
Cited by 373 (28 self)
Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we prove a rigorous version of the Savage representation. Examples of scoring rules for probabilistic forecasts in the form of predictive densities include the logarithmic, spherical, pseudospherical, and quadratic scores. The continuous ranked probability score applies to probabilistic forecasts that take the form of predictive cumulative distribution functions. It generalizes the absolute error and forms a special case of a new and very general type of score, the energy score. Like many other scoring rules, the energy score admits a kernel representation in terms of negative definite functions, with links to inequalities of Hoeffding type, in both univariate and multivariate settings. Proper scoring rules for quantile and interval forecasts are also discussed. We relate proper scoring rules to Bayes factors and to cross-validation, and propose a novel form of cross-validation known as random-fold cross-validation.
A case study on probabilistic weather forecasts in the North American Pacific Northwest illustrates the importance of propriety. We note optimum score approaches to point and quantile estimation.
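As an illustration of the propriety the abstract defines, here is a minimal Python sketch (not from the paper; the distributions and score definitions are the standard textbook forms for categorical forecasts). Reporting the true distribution earns the highest expected score under both the logarithmic and the quadratic rule:

```python
import math

def log_score(p, outcome):
    # Logarithmic score: log of the probability assigned to the realized outcome.
    return math.log(p[outcome])

def quadratic_score(p, outcome):
    # Quadratic (Brier-type) score: 2 * p_y - sum_k p_k^2.
    return 2 * p[outcome] - sum(q * q for q in p)

def expected_score(score, p, true_dist):
    # Expected score of forecast p when outcomes follow true_dist.
    return sum(true_dist[k] * score(p, k) for k in range(len(p)))

true_dist = [0.6, 0.3, 0.1]          # invented three-category example
hedged = [0.4, 0.4, 0.2]             # a dishonest, hedged forecast

honest = expected_score(log_score, true_dist, true_dist)
dishonest = expected_score(log_score, hedged, true_dist)
assert honest > dishonest            # propriety: honesty maximizes expected score
```

The same comparison with `quadratic_score` in place of `log_score` gives the same ordering, which is what makes both rules proper.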
Modelling asymmetric exchange rate dependence
- International Economic Review
Cited by 243 (6 self)
We test for asymmetry in a model of the dependence between the Deutsche mark and the yen, in the sense that a different degree of correlation is exhibited during joint appreciations against the U.S. dollar versus during joint depreciations. We consider an extension of the theory of copulas to allow for conditioning variables, and employ it to construct flexible models of the conditional dependence structure of these exchange rates. We find evidence that the mark–dollar and yen–dollar exchange rates are more correlated when they are depreciating against the dollar than when they are appreciating.
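The asymmetry the abstract tests for can be eyeballed with exceedance correlations: correlation computed only over joint "down" observations versus joint "up" ones. The sketch below is a hedged illustration on invented synthetic data (a strong common shock on the downside only), not the paper's copula-based test:

```python
import math, random, statistics

def corr(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def exceedance_corr(xs, ys, lower):
    # Correlation restricted to observations where both series fall on the
    # same side of their medians (joint depreciation vs joint appreciation).
    mx, my = statistics.median(xs), statistics.median(ys)
    if lower:
        sel = [(x, y) for x, y in zip(xs, ys) if x < mx and y < my]
    else:
        sel = [(x, y) for x, y in zip(xs, ys) if x > mx and y > my]
    return corr([x for x, _ in sel], [y for _, y in sel])

# Invented "returns": a common shock drives joint falls; rises are idiosyncratic.
random.seed(0)
xs, ys = [], []
for _ in range(4000):
    if random.random() < 0.5:
        z = abs(random.gauss(0, 1))
        xs.append(-z + 0.2 * random.gauss(0, 1))
        ys.append(-z + 0.2 * random.gauss(0, 1))
    else:
        xs.append(abs(random.gauss(0, 1)))
        ys.append(abs(random.gauss(0, 1)))

lower = exceedance_corr(xs, ys, lower=True)
upper = exceedance_corr(xs, ys, lower=False)
assert lower - upper > 0.3   # dependence is stronger on the downside
```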
Roughing It Up: Including Jump Components in the Measurement, Modeling and Forecasting of Return Volatility
- Review of Economics and Statistics, forthcoming
, 2006
Cited by 166 (11 self)
A rapidly growing literature has documented important improvements in financial return volatility measurement and forecasting via use of realized variation measures constructed from high-frequency returns coupled with simple modeling procedures. Building on recent theoretical results in Barndorff-Nielsen and Shephard (2004a, 2005) for related bi-power variation measures, the present paper provides a practical and robust framework for non-parametrically measuring the jump component in asset return volatility. In an application to the DM/$ exchange rate, the S&P500 market index, and the 30-year U.S. Treasury bond yield, we find that jumps are both highly prevalent and distinctly less persistent than the continuous sample path variation process. Moreover, many jumps appear directly associated with specific macroeconomic news announcements. Separating jump from non-jump movements in a simple but sophisticated volatility forecasting model, we find that almost all of the predictability in daily, weekly, and monthly return volatilities comes from the non-jump component. Our results thus set the stage for a number of interesting future econometric developments and important financial applications by separately modeling, forecasting, and pricing the continuous and jump components of the total return variation process.
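The jump-separation idea rests on a simple contrast: realized variance picks up both diffusion and jumps, while bi-power variation is robust to jumps, so their (truncated) difference estimates the jump contribution. A minimal sketch of that contrast, with invented intraday returns:

```python
import math

def realized_variance(returns):
    # Sum of squared intraday returns: estimates diffusion + jump variation.
    return sum(r * r for r in returns)

def bipower_variation(returns):
    # mu1^{-2} * sum |r_i||r_{i-1}|, mu1 = sqrt(2/pi): robust to jumps.
    mu1 = math.sqrt(2 / math.pi)
    return (1 / mu1 ** 2) * sum(abs(a) * abs(b)
                                for a, b in zip(returns[1:], returns[:-1]))

def jump_component(returns):
    # Truncate at zero so the estimated jump variation is never negative.
    return max(realized_variance(returns) - bipower_variation(returns), 0.0)

smooth = [0.01] * 50                            # no jump: component is zero
jumpy = [0.01] * 25 + [0.1] + [0.01] * 24       # one large intraday jump

assert jump_component(smooth) == 0.0
assert jump_component(jumpy) > 0.0
```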
Multivariate Density Forecast Evaluation and Calibration
- in Financial Risk Management: High-Frequency Returns on Foreign Exchange, Review of Economics and Statistics
, 1999
Cited by 132 (15 self)
We provide a framework for evaluating and improving multivariate density forecasts. Among other things, the multivariate framework lets us evaluate the adequacy of density forecasts involving cross-variable interactions, such as time-varying conditional correlations. We also provide conditions under which a technique of density forecast “calibration” can be used to improve deficient density forecasts. Finally, motivated by recent advances in financial risk management, we provide a detailed application to multivariate high-frequency exchange rate density forecasts.
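A standard way to evaluate a multivariate density forecast is to factor it into a marginal and conditionals (a Rosenblatt transform) and check that the resulting probability integral transforms are uniform. The sketch below does this for an assumed bivariate Gaussian forecast with standard margins; the data-generating setup and parameter values are invented for illustration:

```python
import math, random

def norm_cdf(x, mu=0.0, sigma=1.0):
    # Normal CDF via the error function.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def rosenblatt_pits(pairs, rho):
    # Factor the bivariate normal forecast as F(x), then F(y | x):
    # with standard margins, y | x ~ N(rho * x, 1 - rho^2).
    out = []
    for x, y in pairs:
        u1 = norm_cdf(x)
        u2 = norm_cdf(y, mu=rho * x, sigma=math.sqrt(1 - rho ** 2))
        out.append((u1, u2))
    return out

# Simulate data from the same bivariate normal the forecast specifies.
random.seed(4)
rho = 0.6
pairs = []
for _ in range(2000):
    x = random.gauss(0, 1)
    y = rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    pairs.append((x, y))

pits = rosenblatt_pits(pairs, rho)
m1 = sum(u for u, _ in pits) / len(pits)
m2 = sum(v for _, v in pits) / len(pits)
# A correctly specified forecast yields (approximately) uniform PITs.
assert abs(m1 - 0.5) < 0.05 and abs(m2 - 0.5) < 0.05
```

Misspecifying `rho` in `rosenblatt_pits` would push the second PIT coordinate away from uniformity, which is exactly the cross-variable inadequacy the framework is meant to detect.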
Probabilistic forecasts, calibration and sharpness
- Journal of the Royal Statistical Society Series B
, 2007
Cited by 116 (23 self)
Summary. Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration. Calibration refers to the statistical consistency between the distributional forecasts and the observations and is a joint property of the predictions and the events that materialize. Sharpness refers to the concentration of the predictive distributions and is a property of the forecasts only. A simple theoretical framework allows us to distinguish between probabilistic calibration, exceedance calibration and marginal calibration. We propose and study tools for checking calibration and sharpness, among them the probability integral transform histogram, marginal calibration plots, the sharpness diagram and proper scoring rules. The diagnostic approach is illustrated by an assessment and ranking of probabilistic forecasts of wind speed at the Stateline wind energy centre in the US Pacific Northwest. In combination with cross-validation or in the time series context, our proposal provides very general, nonparametric alternatives to the use of information criteria for model diagnostics and model selection.
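The probability integral transform histogram mentioned in the abstract is easy to sketch: a calibrated forecaster produces uniform PITs (variance 1/12), while an overdispersed forecaster produces a hump-shaped PIT distribution with smaller variance. This is a hedged toy illustration with invented Gaussian forecasters, not the paper's wind-speed study:

```python
import math, random, statistics

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

random.seed(1)
obs = [random.gauss(0, 1) for _ in range(3000)]   # "observations" ~ N(0, 1)

# Calibrated forecaster issues the true N(0,1); an overdispersed rival issues N(0,4).
pit_cal = [norm_cdf(y, 0.0, 1.0) for y in obs]
pit_over = [norm_cdf(y, 0.0, 2.0) for y in obs]

# Uniform PITs have variance 1/12; a hump-shaped PIT histogram has less.
assert abs(statistics.pvariance(pit_cal) - 1 / 12) < 0.01
assert statistics.pvariance(pit_over) < statistics.pvariance(pit_cal) - 0.02
```

Both forecasters could be made probabilistically calibrated, and then sharpness (the concentration of the predictive distribution) would break the tie, which is the paradigm the paper proposes.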
Analysis of High Dimensional Multivariate Stochastic Volatility Models
, 2004
Cited by 100 (13 self)
This paper is concerned with the Bayesian estimation and comparison of flexible, high dimensional multivariate time series models with time varying correlations. The model proposed and considered here combines features of the classical factor model with that of the heavy tailed univariate stochastic volatility model. A unified analysis of the model, and its special cases, is developed that encompasses estimation, filtering and model choice. The centerpieces of the estimation algorithm (which relies on MCMC methods) are (1) a reduced blocking scheme for sampling the free elements of the loading matrix and the factors and (2) a special method for sampling the parameters of the univariate SV process. The resulting algorithm is scalable in terms of series and factors and simulation-efficient. Methods for estimating the log-likelihood function and the filtered values of the time-varying volatilities and correlations are also provided. The performance and effectiveness of the inferential methods are extensively tested using simulated data where models up to 50 dimensions and 688 parameters are fitted and studied. The performance of our model, in relation to multivariate GARCH models, is also evaluated using a real data set of weekly returns on a set of 10 international stock indices. We consider the performance along two dimensions: the ability to correctly estimate the conditional covariance matrix of future returns and the unconditional and conditional coverage of the 5% and 1% Value-at-Risk (VaR) measures of four pre-defined portfolios.
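The covariance structure of a factor stochastic volatility model has the generic form Σ_t = B diag(exp(h_t)) B' + diag(ψ), where B is the loading matrix, h_t the factor log-volatilities, and ψ the idiosyncratic variances. A minimal sketch of assembling that matrix (dimensions and values invented; this is the generic factor-SV form, not the paper's specific parameterization):

```python
import math

def factor_sv_cov(B, h, psi):
    # Sigma_t = B diag(exp(h_t)) B' + diag(psi):
    # k latent factors with log-volatilities h drive p observed series.
    p, k = len(B), len(B[0])
    S = [[sum(B[i][f] * math.exp(h[f]) * B[j][f] for f in range(k))
          for j in range(p)] for i in range(p)]
    for i in range(p):
        S[i][i] += psi[i]          # add idiosyncratic variance on the diagonal
    return S

B = [[1.0, 0.0], [0.8, 0.3], [0.5, 0.9]]   # 3 series, 2 factors (invented)
h = [-0.2, 0.4]                            # factor log-volatility states
psi = [0.1, 0.1, 0.1]

S = factor_sv_cov(B, h, psi)
assert all(abs(S[i][j] - S[j][i]) < 1e-12 for i in range(3) for j in range(3))
assert all(S[i][i] > 0 for i in range(3))
```

Time variation in correlations comes solely from h_t moving, which is why the parameter count stays manageable even at 50 dimensions.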
How accurate are value-at-risk models at commercial banks?
- Journal of Finance
, 2002
Cited by 84 (1 self)
In recent years, the trading accounts at large commercial banks have grown substantially and become progressively more diverse and complex. We provide descriptive statistics on the trading revenues from such activities and on the associated Value-at-Risk forecasts internally estimated by banks. For a sample of large bank holding companies, we evaluate the performance of banks’ trading risk models by examining the statistical accuracy of the VaR forecasts. Although a substantial literature has examined the statistical and economic meaning of Value-at-Risk models, this article is the first to provide a detailed analysis of the performance of models actually in use.
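The statistical accuracy of VaR forecasts is commonly checked with an unconditional coverage test: a 1% VaR should be exceeded on about 1% of days, and the Kupiec likelihood ratio statistic quantifies how far the observed exception count strays from that. A minimal sketch of the standard test (the numbers are invented; the paper's own backtesting details may differ):

```python
import math

def xlogy(x, y):
    # Convention 0 * log(0) = 0, as usual in likelihoods.
    return x * math.log(y) if x > 0 else 0.0

def kupiec_lr(x, n, p):
    # Unconditional coverage LR: Binomial(n, p) null vs Binomial(n, x/n).
    # Asymptotically chi-squared with 1 degree of freedom under the null.
    phat = x / n
    log_null = xlogy(n - x, 1 - p) + xlogy(x, p)
    log_alt = xlogy(n - x, 1 - phat) + xlogy(x, phat)
    return -2.0 * (log_null - log_alt)

# 1% VaR over 1000 days: 10 exceptions is exactly on target, 25 is not.
assert abs(kupiec_lr(10, 1000, 0.01)) < 1e-9
assert kupiec_lr(25, 1000, 0.01) > 3.84   # exceeds the 5% chi-squared cutoff
```

A conservative model that produces too few exceptions also inflates the statistic, so the test penalizes both over- and under-coverage.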
Pooling of forecasts
- Econometrics Journal
, 2004
Cited by 76 (11 self)
We consider forecasting using a combination, when no model coincides with a non-constant data generation process (DGP). Practical experience suggests that combining forecasts adds value, and can even dominate the best individual device. We show why this can occur when forecasting models are differentially mis-specified, and is likely to occur when the DGP is subject to location shifts. Moreover, averaging may then dominate over estimated weights in the combination. Finally, it cannot be proved that only non-encompassed devices should be retained in the combination. Empirical and Monte Carlo illustrations confirm the analysis. Journal of Economic Literature classification: C32.
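The core mechanism is easy to reproduce: when two forecasters are differentially mis-specified (here, opposite location biases, an invented toy case rather than the paper's setup), their equal-weight average cancels the biases and beats both:

```python
import random, statistics

def mse(f, y):
    return statistics.mean([(a - b) ** 2 for a, b in zip(f, y)])

random.seed(2)
y = [random.gauss(0, 1) for _ in range(1000)]            # target series

f1 = [t + 1.0 + 0.5 * random.gauss(0, 1) for t in y]     # biased upward
f2 = [t - 1.0 + 0.5 * random.gauss(0, 1) for t in y]     # biased downward
pooled = [(a + b) / 2 for a, b in zip(f1, f2)]           # equal-weight pool

# Opposite biases cancel in the average, so the pool dominates both devices.
assert mse(pooled, y) < mse(f1, y)
assert mse(pooled, y) < mse(f2, y)
```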
Evaluating Density Forecasts of Inflation: The Survey of Professional Forecasters
- in Cointegration, Causality, and Forecasting: A Festschrift in Honour of Clive
, 1999
Cited by 76 (15 self)
Since 1968, the Survey of Professional Forecasters has asked respondents to provide a complete probability distribution of expected future inflation. We evaluate the adequacy of those density forecasts using the framework of Diebold, Gunther and Tay (1998). The analysis reveals several interesting features of the density forecasts in relation to realized inflation, including several deficiencies of the forecasts. The probability of a large negative inflation shock is generally overestimated, and in more recent years the probability of a large shock of either sign is overestimated. Inflation surprises are serially correlated, although agents eventually adapt. Expectations of low inflation are associated with reduced uncertainty. The results suggest several promising directions for future research.
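Serially correlated surprises, one of the deficiencies noted above, show up as autocorrelated PIT values: if the forecast ignores persistence in the data, consecutive PITs cluster. A hedged sketch with an invented AR(1) inflation process and a static (unconditional) density forecast, not the survey data itself:

```python
import math, random, statistics

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def autocorr1(z):
    # Lag-1 sample autocorrelation.
    m = statistics.mean(z)
    num = sum((a - m) * (b - m) for a, b in zip(z[1:], z[:-1]))
    den = sum((a - m) ** 2 for a in z)
    return num / den

# Persistent "inflation": AR(1) with coefficient 0.8.
random.seed(3)
y, prev = [], 0.0
for _ in range(3000):
    prev = 0.8 * prev + random.gauss(0, 0.6)
    y.append(prev)

# A static forecaster issues the correct *marginal* density every period,
# so its PITs are uniform but inherit the data's serial dependence.
marg_sd = 0.6 / math.sqrt(1 - 0.8 ** 2)
pit = [norm_cdf(v, 0.0, marg_sd) for v in y]

assert autocorr1(pit) > 0.3   # the missed dynamics leave a clear footprint
```

This is the Diebold-Gunther-Tay style diagnostic in miniature: uniformity checks calibration of the levels, while PIT autocorrelation flags dynamics the forecaster failed to condition on.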