Measuring the Effects of Monetary Policy: A Factor-Augmented Vector Autoregressive (FAVAR) Approach, NBER Working Paper No
Abstract

Cited by 184 (0 self)
Structural vector autoregressions (VARs) are widely used to trace out the effect of monetary policy innovations on the economy. However, the sparse information sets typically used in these empirical models lead to at least three potential problems with the results. First, to the extent that central banks and the private sector have information not reflected in the VAR, the measurement of policy innovations is likely to be contaminated. Second, the choice of a specific data series to represent a general economic concept such as “real activity” is often arbitrary to some degree. Third, impulse responses can be observed only for the included variables, which generally constitute only a small subset of the variables that the researcher and policymaker care about. In this paper we investigate one potential solution to this limited information problem, which combines the standard structural VAR analysis with recent developments in factor analysis for large data sets. We find that the information that our factor-augmented VAR (FAVAR) methodology exploits is indeed important to properly identify the monetary transmission mechanism. Overall, our results provide a comprehensive and coherent picture of the effect of monetary policy on the economy.
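As a rough illustration of the two-step idea this abstract describes (not the authors' actual specification), the sketch below extracts principal-component factors from a simulated panel and runs a one-lag VAR on the factors augmented with a hypothetical policy-rate series; all data, dimensions, and names are made up:

```python
import numpy as np

def extract_factors(X, k):
    """Estimate k factors from a T x N panel by principal components.
    X is assumed standardized (mean 0, unit variance per series)."""
    T, N = X.shape
    # Leading eigenvectors of X X' give the factor estimates,
    # scaled by sqrt(T) (a common normalization).
    vals, vecs = np.linalg.eigh(X @ X.T)
    order = np.argsort(vals)[::-1][:k]
    return np.sqrt(T) * vecs[:, order]

def favar_ols(factors, policy, p=1):
    """VAR(1) on the augmented state [F_t, r_t]; returns the OLS coefficients."""
    Z = np.column_stack([factors, policy])
    Y, X = Z[p:], Z[:-p]
    X1 = np.column_stack([np.ones(len(X)), X])  # add intercept
    B, *_ = np.linalg.lstsq(X1, Y, rcond=None)
    return B

rng = np.random.default_rng(0)
T, N, k = 200, 50, 2
F = rng.standard_normal((T, k))                    # latent factors
Lam = rng.standard_normal((N, k))                  # loadings
X = F @ Lam.T + 0.5 * rng.standard_normal((T, N))  # observed panel
X = (X - X.mean(0)) / X.std(0)                     # standardize
r = rng.standard_normal(T)                         # hypothetical policy rate
Fhat = extract_factors(X, k)
B = favar_ols(Fhat, r)
print(B.shape)  # (4, 3): intercept + 2 factors + rate, for 3 equations
```

Impulse responses for any of the N panel series would then follow by mapping the VAR dynamics back through the estimated loadings.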
Testing for a Unit Root in Panels with Dynamic Factors
 Journal of Econometrics
, 2002
Abstract

Cited by 182 (6 self)
This paper studies testing for a unit root in large-n, large-T panels in which the cross-sectional units are correlated. To model this cross-sectional correlation, we assume that the data are generated by an unknown number of unobservable common factors. We propose unit root tests in this environment and derive their (Gaussian) asymptotic distribution under the null hypothesis of a unit root and under local alternatives. We show that these tests have significant asymptotic power when the model has no incidental trends. However, when there are incidental trends in the model and it is necessary to remove heterogeneous deterministic components, we show that these tests have no power against the same local alternatives. Through Monte Carlo simulations, we provide evidence on the finite sample properties of these new tests.
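The defactoring step alluded to above can be sketched on simulated data; this is a simplified illustration, not the authors' proposed test. The common factor is estimated from first differences by principal components, projected out of each series, and a pooled Dickey-Fuller statistic is computed on the defactored panel:

```python
import numpy as np

rng = np.random.default_rng(4)
T, n = 200, 30
f = np.cumsum(rng.standard_normal(T))          # common factor (a random walk)
lam = rng.standard_normal(n)                   # loadings
e = rng.standard_normal((T, n)).cumsum(0)      # idiosyncratic unit-root parts
X = np.outer(f, lam) + e                       # panel under the unit-root null

# Estimate the factor from the first PC of first differences,
# cumulate it back to levels, and project it out of each series.
dX = np.diff(X, axis=0)
vals, vecs = np.linalg.eigh(dX @ dX.T)
df_hat = vecs[:, -1]                           # PC of the differenced panel
f_hat = np.concatenate([[0.0], np.cumsum(df_hat)])
coef = (X * f_hat[:, None]).sum(0) / (f_hat @ f_hat)
defact = X - np.outer(f_hat, coef)

# Pooled Dickey-Fuller t-statistic on the defactored panel.
y = np.diff(defact, axis=0).ravel()
ylag = defact[:-1].ravel()
rho = (ylag @ y) / (ylag @ ylag)
se = np.sqrt(np.mean((y - rho * ylag) ** 2) / (ylag @ ylag))
t_stat = rho / se
print(round(rho, 4), round(t_stat, 2))
```

Under the null the estimated rho should be close to zero; the paper's contribution is the asymptotic theory for statistics of this kind, which this sketch does not reproduce.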
Has the Business Cycle Changed and Why?
, 2002
Abstract

Cited by 167 (3 self)
From 1960 to 1983, the standard deviation of annual growth rates in real GDP in the United States was 2.7%. From 1984 to 2001, the corresponding standard deviation was 1.6%. This paper investigates this large drop in the cyclical volatility of real economic activity. The paper has two objectives. The first is to provide a comprehensive characterization of the decline in volatility using a large number of U.S. economic time series and a variety of methods designed to describe time-varying time series processes. In so doing, the paper reviews the literature on the moderation and attempts to resolve some of its disagreements and discrepancies. The second objective is to provide new evidence on the quantitative importance of various explanations for this "great moderation". Taken together, we estimate that the moderation in volatility is attributable to a combination of improved policy (20–30%), identifiable good luck in the form of productivity and commodity price shocks (20–30%), and other unknown forms of good luck that manifest themselves as smaller reduced-form forecast errors (40–60%).
Do Macro Variables, Asset Markets, or Surveys Forecast Inflation Better?
 Journal of Monetary Economics
, 2007
Abstract

Cited by 153 (8 self)
NOTE: Staff working papers in the Finance and Economics Discussion Series (FEDS) are preliminary materials circulated to stimulate discussion and critical comment. The analysis and conclusions set forth are those of the authors and do not indicate concurrence by other members of the research staff or the Board of Governors. References in publications to the Finance and Economics Discussion Series (other than acknowledgement) should be cleared with the author(s) to protect the tentative character of these papers.
Are More Data Always Better for Factor Analysis?
 Journal of Econometrics
, 2006
Abstract

Cited by 148 (0 self)
Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and composition of the data affect the factor estimates. In this paper, we ask whether using more series to extract the factors can nonetheless yield factors that are less useful for forecasting, and the answer is yes. Such a problem tends to arise when the idiosyncratic errors are cross-correlated. It can also arise if forecasting power is provided by a factor that is dominant in a small dataset but is dominated in a larger dataset. In a real-time forecasting exercise, we find that factors extracted from as few as 40 prescreened series often yield satisfactory or even better results than using all 147 series. Our simulation analysis is unique in that special attention is paid to cross-correlated idiosyncratic errors, and we also allow the factors to have weak loadings on groups of series. It thus allows us to better understand the properties of the principal components estimator in empirical applications.
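A minimal simulation of the phenomenon described above, under deliberately stylized assumptions: the 80 added series load only on a nuisance factor, so the first principal component of the full panel stops tracking the factor that actually matters, while the small prescreened panel recovers it well:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
f = rng.standard_normal(T)            # the factor that matters for forecasting
g = rng.standard_normal(T)            # a nuisance factor in the extra series

# 20 informative series load on f; 80 added series load only on g,
# so in the full panel g dominates the first principal component.
X_small = np.outer(f, np.ones(20)) + 0.3 * rng.standard_normal((T, 20))
X_extra = np.outer(g, np.ones(80)) + 0.3 * rng.standard_normal((T, 80))
X_full = np.hstack([X_small, X_extra])

def first_pc(X):
    """First principal component of a standardized panel."""
    X = (X - X.mean(0)) / X.std(0)
    vals, vecs = np.linalg.eigh(X @ X.T)
    return vecs[:, -1]                # eigenvector of the largest eigenvalue

corr_small = abs(np.corrcoef(first_pc(X_small), f)[0, 1])
corr_full = abs(np.corrcoef(first_pc(X_full), f)[0, 1])
print(corr_small > corr_full)         # expected: True for this construction
```

This is the "dominant factor becomes dominated" channel; the cross-correlated-errors channel the paper also studies would require correlating the idiosyncratic terms instead.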
Integer Factorization
, 2005
Abstract

Cited by 113 (8 self)
Many public-key cryptosystems depend on the difficulty of factoring large integers. This thesis serves as a source for the history and development of integer factorization algorithms through time, from trial division to the number field sieve. It is the first description of the number field sieve from an algorithmic point of view, making it available to computer scientists for implementation. I have implemented the general number field sieve from this description, and it is made publicly available on the Internet. This means that a reference implementation is available for future developers, which can also be used as a framework where some of the sub ...
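Trial division, the starting point of the survey described above, is simple enough to sketch in a few lines; this toy implementation is only practical for small integers, which is precisely why the thesis progresses to the number field sieve:

```python
def trial_division(n):
    """Factor n into a sorted list of primes by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime as often as it occurs
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is itself prime
        factors.append(n)
    return factors

print(trial_division(2024))     # [2, 2, 2, 11, 23]
```

The running time grows with the square root of the smallest prime factor, so a product of two large primes (the RSA setting) defeats it entirely.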
Confidence intervals for diffusion index forecasts and inference for factor-augmented regressions
, 2003
Abstract

Cited by 102 (12 self)
We consider the situation when there is a large number of series, N, each with T observations, and each series has some predictive ability for some variable of interest. A methodology of growing interest is first to estimate common factors from the panel of data by the method of principal components and then to augment an otherwise standard regression with the estimated factors. In this paper, we show that the least squares estimates obtained from these factor-augmented regressions are √T consistent and asymptotically normal if √T/N → 0. The conditional mean predicted by the estimated factors is min[√T, √N] consistent and asymptotically normal. Except when T/N goes to zero, inference should take into account the effect of “estimated regressors” on the estimated conditional mean. We present analytical formulas for prediction intervals that are valid regardless of the magnitude of N/T and that can also be used when the factors are nonstationary.
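The two-step methodology described above can be sketched on simulated data; this illustrates the general approach (principal-component factors, then an augmented OLS regression), not the paper's inference procedure or interval formulas:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, k = 240, 60, 2
F = rng.standard_normal((T, k))                         # latent factors
X = F @ rng.standard_normal((k, N)) + rng.standard_normal((T, N))
y = F @ np.array([1.0, -0.5]) + 0.1 * rng.standard_normal(T)

# Step 1: principal-component factor estimates from the standardized panel.
Xs = (X - X.mean(0)) / X.std(0)
vals, vecs = np.linalg.eigh(Xs @ Xs.T)
Fhat = np.sqrt(T) * vecs[:, np.argsort(vals)[::-1][:k]]

# Step 2: augment an OLS regression of y with the estimated factors.
Z = np.column_stack([np.ones(T), Fhat])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
yhat = Z @ beta
r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 2))   # close to 1 when the factors are well estimated
```

The paper's point is that naive standard errors on such a regression ignore the fact that Fhat is estimated, which matters unless T/N → 0.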
Determining the number of primitive shocks in factor models
 Journal of Business and Economic Statistics
, 2007
Abstract

Cited by 91 (0 self)
A widely held but untested assumption underlying macroeconomic analysis is that the number of shocks driving economic fluctuations, q, is small. In this article we associate q with the number of dynamic factors in a large panel of data. We propose a methodology to determine q without having to estimate the dynamic factors. We first estimate a VAR in r static factors, where the factors are obtained by applying the method of principal components to a large panel of data, then compute the eigenvalues of the residual covariance or correlation matrix. We then test whether these eigenvalues satisfy an asymptotically shrinking bound that reflects sampling error. We apply the procedure to determine the number of primitive shocks in a large number of macroeconomic time series. An important aspect of the present analysis is to make precise the relationship between the dynamic factors and the static factors, which is a result of independent interest. KEY WORDS: Common shocks; Dynamic factor model; Number of factors; Principal components
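A stylized sketch of the procedure on simulated data, assuming one dynamic shock propagated into three static factors via lags; the formal shrinking-bound test is not reproduced, but in this setup one eigenvalue of the VAR residual covariance should dominate, pointing to q = 1:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N, q, r = 400, 80, 1, 3   # q dynamic shocks enter r static factors via lags

# One dynamic shock u_t drives three static factors: u_t, u_{t-1}, u_{t-2}.
u = rng.standard_normal(T + 2)
F = np.column_stack([u[2:], u[1:-1], u[:-2]])        # r = 3 static factors
X = F @ rng.standard_normal((r, N)) + 0.5 * rng.standard_normal((T, N))

# Step 1: r principal-component factors from the standardized panel.
Xs = (X - X.mean(0)) / X.std(0)
vals, vecs = np.linalg.eigh(Xs @ Xs.T)
Fhat = np.sqrt(T) * vecs[:, np.argsort(vals)[::-1][:r]]

# Step 2: VAR(2) in the static factors; eigenvalues of the residual covariance.
Y = Fhat[2:]
Z = np.column_stack([np.ones(T - 2), Fhat[1:-1], Fhat[:-2]])
B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
E = Y - Z @ B
eig = np.sort(np.linalg.eigvalsh(np.cov(E.T)))[::-1]
share = eig[0] / eig.sum()
print(round(share, 3))   # one dominant eigenvalue suggests q = 1
```

The article replaces this informal eigenvalue-share reading with a test against an asymptotically shrinking bound that accounts for sampling error.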