Results 1–10 of 22
Measurement with some theory: using sign restrictions to evaluate business cycle models
, 2007
"... ..."
A test for comparing multiple misspecified conditional distributions, manuscript
, 2003
"... This paper introduces a test for the comparison of multiple misspecified conditional interval models, for the case of dependent observations+ Model accuracy is measured using a distributional analog of mean square error, in which the approximation error associated with a given model, say, model i, ..."
Abstract

Cited by 23 (10 self)
This paper introduces a test for the comparison of multiple misspecified conditional interval models, for the case of dependent observations. Model accuracy is measured using a distributional analog of mean square error, in which the approximation error associated with a given model, say, model i, for a given interval, is measured by the expected squared difference between the conditional confidence interval under model i and the “true” one. When comparing more than two models, a “benchmark” model is specified, and the test is constructed along the lines of the “reality check” of White (2000, Econometrica 68, 1097–1126). Valid asymptotic critical values are obtained via a version of the block bootstrap that properly captures the effect of parameter estimation error. The results of a small Monte Carlo experiment indicate that the test does not have unreasonable finite sample properties, given small samples of 60 and 120 observations, although the results do suggest that larger samples should likely be used in empirical applications of the test.
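The block bootstrap the abstract relies on for critical values resamples contiguous blocks of observations rather than individual points, so that serial dependence within blocks is preserved. A minimal sketch of the moving-block variant (the block length, series, and function name are illustrative, not from the paper):

```python
import numpy as np

def block_bootstrap(series, block_len, rng=None):
    """Moving-block bootstrap: resample overlapping blocks of a dependent
    series so that short-run dependence inside each block is preserved."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(series)
    n_blocks = -(-n // block_len)                    # ceil(n / block_len)
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    resampled = np.concatenate([series[s:s + block_len] for s in starts])
    return resampled[:n]

# Bootstrap a 95% interval for the mean of a persistent AR(1) series
rng = np.random.default_rng(0)
x = np.zeros(120)
for t in range(1, 120):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
boot_means = np.array([block_bootstrap(x, block_len=10, rng=rng).mean()
                       for _ in range(500)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

The paper's version additionally adjusts the bootstrap statistic for parameter estimation error; the sketch above shows only the basic resampling scheme.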
Measurement with some theory: a new approach to evaluate business cycle models (with appendices)
, 2010
"... ..."
Econometric analysis of linearized singular dynamic stochastic general equilibrium models
 Journal of Econometrics
, 2007
"... In this paper I propose an alternative to calibration of linearized singular dynamic stochastic general equilibrium models. Given an atheoretical econometric model as a representative of the data generating process, I will construct an information measure which compares the conditional distribution ..."
Abstract

Cited by 11 (0 self)
In this paper I propose an alternative to calibration of linearized singular dynamic stochastic general equilibrium models. Given an atheoretical econometric model as a representative of the data generating process, I construct an information measure which compares the conditional distribution of the econometric model variables with the corresponding singular conditional distribution of the theoretical model variables. The singularity problem is solved by taking convolutions of both distributions with a nonsingular distribution. This information measure is then maximized with respect to the deep parameters of the theoretical model, which links these parameters to the parameters of the econometric model and provides an alternative to calibration. The approach is illustrated by an application to a linearized version of the stochastic growth model of King, Plosser and Rebelo.
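The convolution device for the singularity problem can be illustrated directly: a singular model distribution (fewer shocks than observed variables) concentrates on a lower-dimensional subspace and has no density, but after convolution with independent nonsingular noise its covariance becomes full rank. A sketch under hypothetical loadings and noise scale (not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(4)

# A "singular" theoretical model: 3 observed variables driven by 1 shock,
# so the implied distribution is concentrated on a line in R^3.
loading = np.array([1.0, 0.5, -0.3])
shocks = rng.standard_normal(10_000)
theory_draws = np.outer(shocks, loading)        # rank-1 covariance

# Convolve with independent nonsingular Gaussian noise: the smoothed
# distribution has a full-rank covariance and hence a proper density.
noise_sd = 0.1
smoothed = theory_draws + noise_sd * rng.standard_normal(theory_draws.shape)

eig_singular = np.linalg.eigvalsh(np.cov(theory_draws.T))
eig_smoothed = np.linalg.eigvalsh(np.cov(smoothed.T))
```

After smoothing, the smallest covariance eigenvalue is bounded below by roughly the noise variance, so density-based comparisons with the econometric model become well defined.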
How Sticky Is Sticky Enough? A Distributional and Impulse Response Analysis of New Keynesian DSGE Models
, 2006
"... In this paper, we add to the literature on the assessment of how well data simulated from newKeynesian dynamic stochastic general equilibrium (DSGE) models reproduce the dynamic features of historical data. In particular, we evaluate sticky price, sticky price with dynamic indexation, and sticky in ..."
Abstract

Cited by 10 (2 self)
In this paper, we add to the literature on the assessment of how well data simulated from new Keynesian dynamic stochastic general equilibrium (DSGE) models reproduce the dynamic features of historical data. In particular, we evaluate sticky price, sticky price with dynamic indexation, and sticky information models using impulse response and correlation measures and via implementation of a distribution-based approach for comparing (possibly) misspecified DSGE models using simulated and historical inflation and output gap data. One of our main findings is that for a standard level of stickiness (i.e. annual price or information adjustment), the sticky price model with indexation dominates the other models. We also find that when a lower level of information and price stickiness is used (i.e. biannual adjustment), there is much less to choose between the models (see Bils and Klenow (2004) for evidence in favor of lower levels of stickiness). This finding is due to the fact that simulated and historical densities are “much” closer under biannual adjustment.
The Incremental Predictive Information Associated with Using New Keynesian DSGE Models vs. Simple Linear Econometric Models
 Oxford Bulletin of Economics and Statistics
, 2005
"... In this paper we construct output gap and inflation predictions using a variety of DSGE sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson (2005a) as well as predictive accuracy tests due to Diebold and Mariano (1995) and West (1996) are used ..."
Abstract

Cited by 7 (1 self)
In this paper we construct output gap and inflation predictions using a variety of DSGE sticky price models. Predictive density accuracy tests related to the test discussed in Corradi and Swanson (2005a) as well as predictive accuracy tests due to Diebold and Mariano (1995) and West (1996) are used to compare the alternative models. A number of simple time series prediction models (such as autoregressive and vector autoregressive (VAR) models) are additionally used as strawman models. Given that DSGE model restrictions are routinely nested within VAR models, the addition of our strawman models allows us to indirectly assess the usefulness of imposing theoretical restrictions implied by DSGE models on unrestricted econometric models. With respect to predictive density evaluation, our results suggest that the standard sticky price model discussed in Calvo (1983) is not outperformed by the same model augmented either with information or indexation, when used to predict the output gap. On the other hand, there are clear gains to using the more recent models when predicting inflation. Results based on mean square forecast error analysis are less clear-cut, although the standard sticky price model fares best at our longest forecast horizon of 3 years, and performs relatively poorly at shorter horizons. When the strawman time series models are added to the picture, we find that the DSGE models still fare very well, often winning our forecast competitions, suggesting that theoretical macroeconomic restrictions yield useful additional information for forming macroeconomic forecasts.
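The Diebold–Mariano (1995) comparison mentioned here tests whether two forecast-error series differ on average under a chosen loss function. A minimal sketch under squared-error loss with a rectangular long-run variance estimate (all names and the example data are illustrative):

```python
import math
import numpy as np

def diebold_mariano(e1, e2, h=1):
    """Diebold-Mariano statistic for equal predictive accuracy under
    squared-error loss; e1, e2 are forecast-error series, h the horizon."""
    d = e1**2 - e2**2                        # loss differential series
    n = len(d)
    dbar = d.mean()
    var = np.mean((d - dbar)**2)             # long-run variance, h-1 lags
    for k in range(1, h):
        var += 2 * np.mean((d[k:] - dbar) * (d[:-k] - dbar))
    stat = dbar / math.sqrt(var / n)
    # two-sided p-value from the standard normal limit
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(stat) / math.sqrt(2))))
    return stat, pval

# A model with systematically larger errors should yield a large positive
# statistic and a tiny p-value
rng = np.random.default_rng(1)
stat, pval = diebold_mariano(rng.normal(0, 2, 500), rng.normal(0, 1, 500))
```

The predictive density tests in the paper compare whole distributions rather than point forecasts; this sketch covers only the mean square forecast error comparisons.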
A Simulation Based Specification Test for Diffusion Processes
, 2007
"... This paper makes two contributions. First, we outline a simple simulation based framework for constructing conditional distributions for multifactor and multidimensional diffusion processes, for the case where the functional form of the conditional density is unknown. The distributions can be used ..."
Abstract

Cited by 6 (6 self)
This paper makes two contributions. First, we outline a simple simulation-based framework for constructing conditional distributions for multifactor and multidimensional diffusion processes, for the case where the functional form of the conditional density is unknown. The distributions can be used, for example, to form predictive confidence intervals for time period t + τ, given information up to period t. Second, we use the simulation-based approach to construct a test for the correct specification of a diffusion process. The suggested test is in the spirit of the conditional Kolmogorov test of Andrews (1997). However, in the present context the null conditional distribution is unknown and is replaced by its simulated counterpart. The limiting distribution of the test statistic is not nuisance parameter free. In light of this, asymptotically valid critical values are obtained via appropriate use of the block bootstrap. The suggested test has power against a larger class of alternatives than tests that are constructed using marginal distributions/densities, such as those in Aït-Sahalia (1996) and Corradi and Swanson (2005a). The findings of a small Monte Carlo experiment underscore the good finite sample properties of the proposed test, and an empirical illustration underscores the ease with which the proposed simulation and testing methodology can be applied.
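The simulation step behind such predictive intervals is simple in outline: simulate many discretized paths of the diffusion forward from the current state and read off empirical quantiles at the horizon. A sketch using an Euler-Maruyama scheme and a hypothetical Ornstein-Uhlenbeck short-rate model (all parameters are illustrative, not from the paper):

```python
import numpy as np

def simulate_predictive_interval(x0, drift, diffusion, tau, n_paths=5000,
                                 n_steps=100, level=0.90, rng=None):
    """Euler-Maruyama simulation of a one-factor diffusion
    dX = drift(X) dt + diffusion(X) dW from state x0; returns an empirical
    predictive interval for X at horizon tau."""
    rng = np.random.default_rng() if rng is None else rng
    dt = tau / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + drift(x) * dt + diffusion(x) * dw
    alpha = (1 - level) / 2
    return np.percentile(x, [100 * alpha, 100 * (1 - alpha)])

# Mean-reverting short rate starting at 5%, reverting toward 4%
lo, hi = simulate_predictive_interval(
    x0=0.05,
    drift=lambda x: 0.5 * (0.04 - x),
    diffusion=lambda x: 0.02 * np.ones_like(x),
    tau=1.0, rng=np.random.default_rng(2))
```

The specification test then compares the empirical distribution of the data against this simulated conditional distribution; the sketch covers only the path-simulation ingredient.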
Comparison of Misspecified Calibrated Models: The Minimum Distance Approach
, 2009
"... This paper presents testing procedures for comparison of misspecified calibrated models. The proposed tests are of the Vuongtype (Vuong, 1989; Rivers and Vuong, 2002). In our framework, an econometrician selects values for the parameters in order to match some characteristics of the data with those ..."
Abstract

Cited by 6 (1 self)
This paper presents testing procedures for comparison of misspecified calibrated models. The proposed tests are of the Vuong type (Vuong, 1989; Rivers and Vuong, 2002). In our framework, an econometrician selects values for the parameters in order to match some characteristics of the data with those implied by the competing theoretical models. We assume that all competing models are misspecified, and suggest a test for the null hypothesis that the models under consideration provide equivalent fit to the data characteristics, against the alternative that one of the models is a better approximation. We consider both nested and non-nested cases. We also relax the dependence of models’ ranking on the choice of weight matrix by suggesting averaged and sup-norm procedures. The proposed method is illustrated by comparing standard cash-in-advance and portfolio adjustment cost models in their ability to match the impulse responses of output and inflation to money growth shocks.
Predictive Density Construction and Accuracy Testing with Multiple Possibly Misspecified Diffusion Models
, 2009
"... This paper develops tests for comparing the accuracy of predictive densities derived from (possibly misspecified) diffusion models. In particular, we first outline a simple simulation based framework for constructing predictive densities for onefactor and stochastic volatility models. Then, we cons ..."
Abstract

Cited by 4 (4 self)
This paper develops tests for comparing the accuracy of predictive densities derived from (possibly misspecified) diffusion models. In particular, we first outline a simple simulation-based framework for constructing predictive densities for one-factor and stochastic volatility models. Then, we construct accuracy assessment tests that are in the spirit of Diebold and Mariano (1995) and White (2000). In order to establish the asymptotic properties of our tests, we also develop a recursive variant of the nonparametric simulated maximum likelihood estimator of Fermanian and Salanié (2004). In an empirical illustration, the predictive densities from several models of the one-month federal funds rate are compared.
Higher Order Improvements for Approximate Estimators
, 2010
"... Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting “approximate” estimator is often biased; and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approxi ..."
Abstract

Cited by 3 (2 self)
Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting “approximate” estimator is often biased; and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for nonstochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton–Raphson (NR) iterations based on a much finer degree of approximation. The NR step removes some or all of the additional bias and variance of the initial approximate estimator. A Monte Carlo simulation on the mixed logit model shows that noticeable improvements can be obtained rather cheaply.
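The NR refinement idea, in its simplest form, is to start from a cheap approximate estimator and take a Newton-Raphson step on a more accurate objective. A toy sketch (not the paper's estimator): the "approximate" estimator of a Gaussian mean uses only a subsample, and one NR step on the full-sample log-likelihood recovers the efficient full-sample mean exactly, because the objective is quadratic.

```python
import numpy as np

def newton_step(theta0, score, hessian):
    """One Newton-Raphson step toward the maximizer of a finer objective,
    starting from a cheap approximate estimator theta0."""
    return theta0 - score(theta0) / hessian(theta0)

rng = np.random.default_rng(3)
data = rng.normal(1.0, 1.0, size=1000)

theta_approx = data[::10].mean()            # coarse estimator: 10% subsample
score = lambda th: -np.sum(th - data)       # d/dth of full log-likelihood
hessian = lambda th: -float(len(data))      # second derivative (constant)
theta_refined = newton_step(theta_approx, score, hessian)
```

For a quadratic objective one step lands exactly on the full-sample maximizer; in the nonquadratic settings the paper considers, the step removes only the leading-order bias and variance of the initial approximate estimator.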