Results 1–10 of 352
Comparing Predictive Accuracy
 JOURNAL OF BUSINESS AND ECONOMIC STATISTICS, 13, 253–265, 1995
Abstract

Cited by 1309 (26 self)
We propose and evaluate explicit tests of the null hypothesis of no difference in the accuracy of two competing forecasts. In contrast to previously developed tests, a wide variety of accuracy measures can be used (in particular, the loss function need not be quadratic, and need not even be symmetric), and forecast errors can be non-Gaussian, nonzero-mean, serially correlated, and contemporaneously correlated. Asymptotic and exact finite-sample tests are proposed, evaluated, and illustrated.
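The test statistic described above is simple enough to sketch in a few lines. The following is a minimal illustration (not the authors' code): the mean loss differential is studentized by a variance estimate that, for an h-step forecast, includes the first h-1 autocovariances of the differential.

```python
import numpy as np

def dm_test(e1, e2, loss=np.abs, h=1):
    """Diebold-Mariano-style test of equal forecast accuracy (a minimal sketch).

    e1, e2 : forecast-error series from the two competing forecasts.
    loss   : loss function applied to the errors (need not be quadratic).
    h      : forecast horizon; the loss differential is assumed to be at
             most MA(h-1), so h-1 autocovariances enter the variance.
    """
    d = loss(np.asarray(e1)) - loss(np.asarray(e2))   # loss differential
    n = len(d)
    dbar = d.mean()
    # variance of dbar: gamma_0 plus twice the first h-1 autocovariances
    gamma = [np.sum((d[k:] - dbar) * (d[:n - k] - dbar)) / n for k in range(h)]
    var_dbar = (gamma[0] + 2 * sum(gamma[1:])) / n
    return dbar / np.sqrt(var_dbar)   # asymptotically N(0, 1) under the null
```

With simulated errors where one forecast is clearly noisier, the statistic is large and positive, rejecting equal accuracy.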
Empirical exchange rate models of the Seventies: do they fit out of sample?
 JOURNAL OF INTERNATIONAL ECONOMICS, 1983
Abstract

Cited by 831 (12 self)
This study compares the out-of-sample forecasting accuracy of various structural and time series exchange rate models. We find that a random walk model performs as well as any estimated model at one- to twelve-month horizons for the dollar/pound, dollar/mark, dollar/yen and trade-weighted dollar exchange rates. The candidate structural models include the flexible-price (Frenkel-Bilson) and sticky-price (Dornbusch-Frankel) monetary models, and a sticky-price model which incorporates the current account (Hooper-Morton). The structural models perform poorly despite the fact that we base their forecasts on actual realized values of future explanatory variables.
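The out-of-sample horse race described above can be illustrated with a toy rolling-forecast comparison. Everything here is a hypothetical stand-in for the paper's models and data: a simulated random-walk log exchange rate, and an AR(1)-in-differences model playing the role of an estimated competitor.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated log exchange rate: a pure random walk (hypothetical data).
s = np.cumsum(rng.normal(0, 0.03, 300))

# Rolling one-step-ahead forecasts over the last 100 observations.
rw_err, ar_err = [], []
for t in range(200, 300):
    rw_fcst = s[t - 1]                      # random walk: no-change forecast
    ds = np.diff(s[:t])                     # estimate an AR(1) in differences
    phi = np.dot(ds[1:], ds[:-1]) / np.dot(ds[:-1], ds[:-1])
    ar_fcst = s[t - 1] + phi * ds[-1]       # the estimated competitor
    rw_err.append(s[t] - rw_fcst)
    ar_err.append(s[t] - ar_fcst)

def rmse(e):
    return np.sqrt(np.mean(np.square(e)))

print(rmse(rw_err), rmse(ar_err))
```

When the data really follow a random walk, the estimated model's extra parameter only adds noise, so the no-change forecast is typically at least as accurate, mirroring the paper's finding.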
Evaluating the Accuracy of Sampling-Based Approaches to the Calculation of Posterior Moments
 IN BAYESIAN STATISTICS, 1992
Abstract

Cited by 583 (14 self)
Data augmentation and Gibbs sampling are two closely related, sampling-based approaches to the calculation of posterior moments. The fact that each produces a sample whose constituents are neither independent nor identically distributed complicates the assessment of convergence and numerical accuracy of the approximations to the expected value of functions of interest under the posterior. In this paper, methods from spectral analysis are used to evaluate numerical accuracy formally and construct diagnostics for convergence. These methods are illustrated in the normal linear model with informative priors, and in the Tobit-censored regression model.
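A minimal sketch of the spectral approach (not the paper's code): the numerical standard error of a posterior mean estimated from a correlated chain comes from a lag-window estimate of the spectral density at frequency zero, and a z-score compares early and late segments of the chain. The Bartlett window, the square-root lag truncation, and the 10%/50% segment fractions below are illustrative assumptions.

```python
import numpy as np

def spectral_nse(x, max_lag=None):
    """Numerical standard error of the mean of a correlated draw sequence,
    via a Bartlett lag-window estimate of the spectral density at zero."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_lag is None:
        max_lag = int(n ** 0.5)      # illustrative truncation rule
    xc = x - x.mean()
    s0 = np.dot(xc, xc) / n          # autocovariance at lag 0
    for k in range(1, max_lag + 1):
        w = 1 - k / (max_lag + 1)    # Bartlett weight
        s0 += 2 * w * np.dot(xc[k:], xc[:-k]) / n
    return np.sqrt(s0 / n)

def geweke_z(x, first=0.1, last=0.5):
    """Convergence diagnostic: z-score comparing the mean of the first 10%
    of the chain with the mean of the last 50%."""
    a = x[: int(first * len(x))]
    b = x[int((1 - last) * len(x)):]
    return (a.mean() - b.mean()) / np.hypot(spectral_nse(a), spectral_nse(b))
```

On a stationary chain the z-score behaves like a standard normal draw; a large absolute value flags a chain whose early and late segments disagree.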
Using simulation methods for Bayesian econometric models: Inference, development and communication
 Econometric Reviews, 1999
Abstract

Cited by 356 (19 self)
This paper surveys the fundamental principles of subjective Bayesian inference in econometrics and the implementation of those principles using posterior simulation methods. The emphasis is on the combination of models and the development of predictive distributions. Moving beyond conditioning on a fixed number of completely specified models, the paper introduces subjective Bayesian tools for formal comparison of these models with as yet incompletely specified models. The paper then shows how posterior simulators can facilitate communication between investigators (for example, econometricians) on the one hand and remote clients (for example, decision makers) on the other, enabling clients to vary the prior distributions and functions of interest employed by investigators. A theme of the paper is the practicality of subjective Bayesian methods. To this end, the paper describes publicly available software for Bayesian inference, model development, and communication, and provides illustrations using two simple econometric models.
Linear Regression Limit Theory for Nonstationary Panel Data
 ECONOMETRICA, 1999
Abstract

Cited by 308 (22 self)
This paper develops a regression limit theory for nonstationary panel data with large numbers of cross-section (n) and time-series (T) observations. The limit theory allows for both sequential limits, wherein T → ∞ is followed by n → ∞, and joint limits, where T, n → ∞ simultaneously; and the relationship between these multidimensional limits is explored. The panel structures considered allow for no time series cointegration, heterogeneous cointegration, homogeneous cointegration, and near-homogeneous cointegration. The paper explores the existence of long-run average relations between integrated panel vectors when there is no individual time series cointegration and when there is heterogeneous cointegration. These relations are parameterized in terms of the matrix regression coefficient of the long-run average covariance matrix. In the case of homogeneous and near-homogeneous cointegrating panels, a panel fully modified regression estimator is developed and studied. The limit theory enables us to test hypotheses about the long-run average parameters both within and between subgroups of the full population.
On the Detection and Estimation of Long Memory in Stochastic Volatility
 1995
Abstract

Cited by 207 (6 self)
Recent studies have suggested that stock markets' volatility has a type of long-range dependence that is not appropriately described by the usual Generalized Autoregressive Conditional Heteroskedastic (GARCH) and Exponential GARCH (EGARCH) models. In this paper, different models for describing this long-range dependence are examined and the properties of a Long-Memory Stochastic Volatility (LMSV) model, constructed by incorporating an Autoregressive Fractionally Integrated Moving Average (ARFIMA) process in a stochastic volatility scheme, are discussed. Strongly consistent estimators for the parameters of this LMSV model are obtained by maximizing the spectral likelihood. The distribution of the estimators is analyzed by means of a Monte Carlo study. The LMSV is applied to daily stock market returns providing an improved description of the volatility behavior. In order to assess the empirical relevance of this approach, tests for long-memory volatility are described and applied to an e...
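The ARFIMA-in-volatility construction can be sketched by simulating the long-memory process from its truncated MA(∞) representation and exponentiating it into a volatility path. The parameter values below (d = 0.4, the scaling of the log-volatility) are illustrative choices, not the paper's.

```python
import numpy as np

def arfima_0d0(n, d, sigma=1.0, seed=0):
    """Simulate an ARFIMA(0, d, 0) long-memory series via its MA(inf)
    representation truncated at n lags (a minimal sketch)."""
    rng = np.random.default_rng(seed)
    # MA weights psi_k = Gamma(k+d) / (Gamma(d) Gamma(k+1)), via the recursion
    # psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.normal(0, sigma, n)
    return np.convolve(eps, psi)[:n]

# LMSV-style returns: r_t = exp(h_t / 2) * z_t with long-memory log-volatility
h = 0.5 * arfima_0d0(2000, d=0.4)
z = np.random.default_rng(1).standard_normal(2000)
r = np.exp(h / 2) * z
```

The returns r are serially uncorrelated, but their volatility inherits the slowly decaying autocorrelation of the ARFIMA process, which is the feature the paper's tests are designed to detect.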
Perspectives on system identification
 In Plenary talk at the Proceedings of the 17th IFAC World Congress, Seoul, South Korea, 2008
Abstract

Cited by 160 (3 self)
System identification is the art and science of building mathematical models of dynamic systems from observed input-output data. It can be seen as the interface between the real world of applications and the mathematical world of control theory and model abstractions. As such, it is a ubiquitous necessity for successful applications. System identification is a very large topic, with different techniques that depend on the character of the models to be estimated: linear, nonlinear, hybrid, nonparametric etc. At the same time, the area can be characterized by a small number of leading principles, e.g. to look for sustainable descriptions by proper decisions in the triangle of model complexity, information contents in the data, and effective validation. The area has many facets and there are many approaches and methods. A tutorial or a survey in a few pages is not quite possible. Instead, this presentation aims at giving an overview of the “science” side, i.e. basic principles and results, and at pointing to open problem areas in the practical, “art” side of how to approach and solve a real problem.
Bayesian Treatment of the Independent Student-t Linear Model
 JOURNAL OF APPLIED ECONOMETRICS, 1993
Abstract

Cited by 128 (2 self)
This article takes up methods for Bayesian inference in a linear model in which the disturbances are independent and have identical Student-t distributions. It exploits the equivalence of the Student-t distribution and an appropriate scale mixture of normals, and uses a Gibbs sampler to perform the computations. The new method is applied to some well-known macroeconomic time series. It is found that posterior odds ratios favor the independent Student-t linear model over the normal linear model, and that the posterior odds ratio in favor of difference stationarity over trend stationarity is often substantially less in the favored Student-t models.
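The scale-mixture device can be sketched for the simplest case: an i.i.d. Student-t location model with known degrees of freedom, unit scale, and a flat prior on the location, which is a deliberate simplification of the paper's full linear model. Each latent scale λ_i and the location μ then have standard conditional distributions, so Gibbs sampling alternates between them.

```python
import numpy as np

def gibbs_student_t(y, nu=5.0, n_iter=2000, seed=0):
    """Gibbs sampler for the location of i.i.d. Student-t data, exploiting the
    scale-mixture-of-normals representation (a minimal sketch: known nu, unit
    scale, flat prior on the location mu)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    mu, draws = y.mean(), []
    for _ in range(n_iter):
        # lambda_i | mu ~ Gamma((nu+1)/2, rate = (nu + (y_i - mu)^2) / 2)
        lam = rng.gamma((nu + 1) / 2, 2.0 / (nu + (y - mu) ** 2))
        # mu | lambda ~ N(weighted mean of y, 1 / sum(lambda))
        mu = rng.normal((lam * y).sum() / lam.sum(), 1 / np.sqrt(lam.sum()))
        draws.append(mu)
    return np.array(draws[n_iter // 2:])   # discard burn-in
```

Observations far from μ receive small λ_i and are automatically downweighted, which is the robustness property the paper's posterior odds comparisons exploit.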
Exact local Whittle estimation of fractional integration
 2005
Abstract

Cited by 117 (16 self)
An exact form of the local Whittle likelihood is studied with the intent of developing a general-purpose estimation procedure for the memory parameter (d) that does not rely on tapering or differencing prefilters. The resulting exact local Whittle estimator is shown to be consistent and to have the same N(0, 1/4) limit distribution for all values of d if the optimization covers an interval of width less than 9/2 and the initial value of the process is known.
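A grid-search sketch of the estimator (the bandwidth m, the grid, and the weight truncation are illustrative choices, not the paper's): the concentrated objective R(d) = log Ĝ(d) - 2d·mean(log λ_j) is minimized, where Ĝ(d) averages the periodogram of the fractionally differenced series at the first m Fourier frequencies.

```python
import numpy as np

def exact_local_whittle(x, m, grid=np.linspace(-0.5, 1.5, 201)):
    """Exact local Whittle estimate of the memory parameter d, by grid search
    over the concentrated objective (a minimal sketch)."""
    x = np.asarray(x, float)
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n   # first m Fourier frequencies

    def objective(d):
        # fractional difference (1 - L)^d x via recursive binomial weights
        w = np.empty(n)
        w[0] = 1.0
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - d) / k
        dx = np.convolve(x, w)[:n]
        # periodogram of the differenced series at the first m frequencies
        I = np.abs(np.fft.fft(dx)[1:m + 1]) ** 2 / (2 * np.pi * n)
        return np.log(I.mean()) - 2 * d * np.log(lam).mean()

    return grid[np.argmin([objective(d) for d in grid])]
```

For white noise (true d = 0) the estimate should land near zero; its asymptotic standard deviation is 1/(2√m), matching the N(0, 1/4) limit stated in the abstract.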
Integer Factorization
 2005
Abstract

Cited by 113 (8 self)
Many public key cryptosystems depend on the difficulty of factoring large integers. This thesis serves as a source for the history and development of integer factorization algorithms through time, from trial division to the number field sieve. It is the first description of the number field sieve from an algorithmic point of view, making it available to computer scientists for implementation. I have implemented the general number field sieve from this description, and it is made publicly available on the Internet. This means that a reference implementation is available for future developers, which can also be used as a framework where some of the sub
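The survey's starting point, trial division, fits in a few lines (a minimal sketch, not the thesis implementation): divide out each candidate divisor up to √n, and whatever remains is prime.

```python
def trial_division(n):
    """Factor n into primes by trial division, the oldest of the
    algorithms the thesis surveys on the way to the number field sieve."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:     # divide out d as many times as it divides n
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                 # leftover factor > sqrt(original n) is prime
        factors.append(n)
    return factors
```

Trial division is exponential in the bit length of n, which is precisely why the later algorithms in the thesis's history, culminating in the number field sieve, exist.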