Results 1 - 10 of 561
Estimation and Inference in Large Heterogeneous Panels with a Multifactor Error Structure
2004
"... This paper presents a new approach to estimation and inference in panel data models with a multifactor error structure where the unobserved common factors are (possibly) correlated with exogenously given individual-specific regressors, and the factor loadings differ over the cross section units. The ..."
Cited by 383 (44 self)
This paper presents a new approach to estimation and inference in panel data models with a multifactor error structure where the unobserved common factors are (possibly) correlated with exogenously given individual-specific regressors, and the factor loadings differ over the cross-section units. The basic idea behind the proposed estimation procedure is to filter the individual-specific regressors by means of (weighted) cross-section aggregates such that, asymptotically as the cross-section dimension (N) tends to infinity, the differential effects of unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by OLS applied to an auxiliary regression where the observed regressors are augmented by (weighted) cross-sectional averages of the dependent variable and the individual-specific regressors. Two different but related problems are addressed: one that concerns the coefficients of the individual-specific regressors, and the other that focuses on the mean of the individual coefficients assumed random. In both cases appropriate estimators, referred to as common correlated effects (CCE) estimators, are proposed and their asymptotic distributions as N → ∞ with T (the time-series dimension) fixed, or as N and T → ∞ (jointly), are derived under different regularity conditions. One important feature of the proposed CCE mean group (CCEMG) estimator is its invariance to the (unknown but fixed) number of unobserved common factors as N and T → ∞ (jointly). The small sample properties of the various pooled estimators are investigated by Monte Carlo experiments that confirm the theoretical derivations and show that the pooled estimators have generally satisfactory small sample properties even for relatively small values of N and T.
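The augmented-OLS idea lends itself to a very short implementation. Below is a minimal sketch of the CCE mean group estimator on simulated data; the data-generating process, sample sizes, and all variable names are this sketch's own assumptions, not the paper's.

```python
# Illustrative sketch of the CCE mean-group idea: augment each unit's
# regression with cross-sectional averages of the dependent variable and
# the regressor, then average the unit-specific slopes.
import numpy as np

rng = np.random.default_rng(0)
N, T, beta = 50, 100, 1.0

f = rng.standard_normal(T)                    # unobserved common factor
gamma = rng.standard_normal(N)                # heterogeneous loadings on y
x = 0.5 * f + rng.standard_normal((N, T))     # regressor correlated with f
y = beta * x + gamma[:, None] * f + rng.standard_normal((N, T))

ybar, xbar = y.mean(axis=0), x.mean(axis=0)   # cross-sectional averages

slopes = []
for i in range(N):
    Z = np.column_stack([x[i], ybar, xbar, np.ones(T)])
    coef, *_ = np.linalg.lstsq(Z, y[i], rcond=None)
    slopes.append(coef[0])                    # unit-specific slope on x

print("CCEMG estimate of beta:", np.mean(slopes))  # close to 1.0
```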
Dynamic panel estimation and homogeneity testing under cross section dependence
Cowles Foundation Discussion Paper #1362, 2002
"... Summary This paper deals with cross section dependence, homogeneity restrictions and small sample bias issues in dynamic panel regressions. To address the bias problem we develop a panel approach to median unbiased estimation that takes account of cross section dependence. The estimators given here ..."
Cited by 166 (8 self)
This paper deals with cross-section dependence, homogeneity restrictions, and small sample bias issues in dynamic panel regressions. To address the bias problem we develop a panel approach to median unbiased estimation that takes account of cross-section dependence. The estimators given here considerably reduce the effects of bias and gain precision from estimating cross-section error correlation. The paper also develops an asymptotic theory for tests of coefficient homogeneity under cross-section dependence, and proposes a modified Hausman test for the presence of homogeneous unit roots. An orthogonalization procedure, based on iterated method of moments estimation, is developed to remove cross-section dependence and permit the use of conventional and meta unit root tests with panel data. Some simulations investigating the finite sample performance of the estimation and test procedures are reported.
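As a hedged, single-series sketch of one building block, the following inverts a simulated median function to obtain a median-unbiased AR(1) coefficient (in the spirit of this literature). The grid, sample size, and names are illustrative; the paper's actual panel procedure additionally handles cross-section dependence.

```python
# Simplified sketch of median-unbiased AR(1) estimation: tabulate the
# median of the OLS estimator over a grid of true rho by simulation,
# then invert that (monotone) median function at the observed estimate.
import numpy as np

rng = np.random.default_rng(1)
T = 50

def ols_rho(y):
    """OLS estimate of rho in y_t = rho * y_{t-1} + e_t."""
    return y[:-1] @ y[1:] / (y[:-1] @ y[:-1])

def simulate(rho, T, rng):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

grid = np.linspace(0.0, 1.0, 51)
medians = np.array([
    np.median([ols_rho(simulate(r, T, rng)) for _ in range(500)])
    for r in grid
])

y = simulate(0.9, T, rng)
rho_hat = ols_rho(y)                          # downward-biased in small T
rho_mu = grid[np.argmin(np.abs(medians - rho_hat))]
print(f"OLS: {rho_hat:.3f}  median-unbiased: {rho_mu:.3f}")
```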
Are more data always better for factor analysis?
Journal of Econometrics, 2006
"... Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and composition of the data affect the factor estimates. In this paper, we question whether it is possible to use more series to extract the factors a ..."
Cited by 151 (0 self)
Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and composition of the data affect the factor estimates. In this paper, we ask whether it is possible that using more series to extract the factors yields factors that are less useful for forecasting; the answer is yes. Such a problem tends to arise when the idiosyncratic errors are cross-correlated. It can also arise if forecasting power is provided by a factor that is dominant in a small dataset but is a dominated factor in a larger dataset. In a real-time forecasting exercise, we find that factors extracted from as few as 40 pre-screened series often yield satisfactory or even better results than using all 147 series. Our simulation analysis is unique in that special attention is paid to cross-correlated idiosyncratic errors, and we also allow the factors to have weak loadings on groups of series. It thus allows us to better understand the properties of the principal components estimator in empirical applications.
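The "dominant in a small dataset, dominated in a larger one" mechanism is easy to reproduce in a toy simulation. The sketch below, with series counts echoing the paper's 40 versus 147, shows the first principal component of the larger panel tracking the forecast-relevant factor less well; the data-generating process is assumed for illustration, not taken from the paper.

```python
# Toy illustration: a factor that dominates a small panel is dominated in
# a larger one, so the larger panel's first principal component tracks it
# less well.
import numpy as np

rng = np.random.default_rng(2)
T = 200

f1 = rng.standard_normal(T)   # the factor with predictive content
f2 = rng.standard_normal(T)   # a pervasive but irrelevant factor

# First 40 series load on f1; the other 107 load strongly on f2.
X_small = np.outer(f1, rng.normal(1.0, 0.2, 40)) + rng.standard_normal((T, 40))
X_rest = np.outer(f2, rng.normal(2.0, 0.2, 107)) + rng.standard_normal((T, 107))
X_large = np.hstack([X_small, X_rest])

def first_pc(X):
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]

for name, X in [("40 series", X_small), ("147 series", X_large)]:
    pc = first_pc(X)
    print(name, "corr(PC1, f1) =", round(abs(np.corrcoef(pc, f1)[0, 1]), 3))
```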
Monetary Policy in a Data Rich Environment
Journal of Monetary Economics, 2002
"... Most empirical analyses of monetary policy have been confined to frameworks in which the Federal Reserve is implicitly assumed to exploit only a limited amount of information, despite the fact that the Fed actively monitors literally thousands of economic time series. This article explores the feasi ..."
Cited by 149 (3 self)
Most empirical analyses of monetary policy have been confined to frameworks in which the Federal Reserve is implicitly assumed to exploit only a limited amount of information, despite the fact that the Fed actively monitors literally thousands of economic time series. This article explores the feasibility of incorporating richer information sets into the analysis, both positive and normative, of Fed policymaking. We employ a factor-model approach, developed by Stock and Watson (1999a,b), that permits the systematic information in large data sets to be summarized by relatively few estimated factors. With this framework, we reconfirm Stock and Watson’s result that the use of large data sets can improve forecast accuracy, and we show that this result does not seem to depend on the use of finally revised (as opposed to “real-time”) data. We estimate policy reaction functions for the Fed that take into account its data-rich environment and provide a test of the hypothesis that Fed actions are explained solely by its forecasts of inflation and real activity. Finally, we explore the possibility of developing an “expert system” that could aggregate diverse information and provide benchmark policy settings.
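As a rough illustration of the reaction-function test described above, the following hedged sketch simulates a policy rate, extracts principal-component factors from a simulated panel, and F-tests whether the factors add explanatory power beyond inflation and real activity. All data and names are invented for the example; this is not the article's dataset or exact specification.

```python
# Sketch of a factor-augmented reaction-function test: does information
# summarized by estimated factors explain the policy rate beyond simple
# inflation and output-gap measures?
import numpy as np

rng = np.random.default_rng(3)
T, Npanel = 160, 80

infl = rng.standard_normal(T)
gap = rng.standard_normal(T)
extra = rng.standard_normal(T)                 # information beyond infl/gap
rate = 1.5 * infl + 0.5 * gap + 0.8 * extra + 0.3 * rng.standard_normal(T)

# A large panel whose series load on infl, gap, and the extra information.
load = rng.standard_normal((3, Npanel))
panel = np.column_stack([infl, gap, extra]) @ load + rng.standard_normal((T, Npanel))

pc = np.linalg.svd(panel - panel.mean(0), full_matrices=False)[0][:, :3]

def rss(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ b) ** 2).sum()

X0 = np.column_stack([np.ones(T), infl, gap])
X1 = np.column_stack([X0, pc])
F = ((rss(X0, rate) - rss(X1, rate)) / 3) / (rss(X1, rate) / (T - X1.shape[1]))
print("F-stat for excluding the 3 factors:", round(F, 1))  # large -> reject
```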
A PANIC Attack on Unit Roots and Cointegration
2003
"... This paper develops a new methodology that makes use of the factor structure of large dimensional panels to understand the nature of non-stationarity in the data. We refer to it as PANIC – a ‘Panel Analysis of Non-stationarity in Idiosyncratic and Common components’. PANIC consists of univariate and ..."
Cited by 142 (3 self)
This paper develops a new methodology that makes use of the factor structure of large-dimensional panels to understand the nature of non-stationarity in the data. We refer to it as PANIC – a ‘Panel Analysis of Non-stationarity in Idiosyncratic and Common components’. PANIC consists of univariate and panel tests with a number of novel features. It can detect whether the non-stationarity is pervasive, or variable-specific, or both. It tests the components of the data instead of the observed series. Inference is therefore more accurate when the components have different orders of integration. PANIC also permits the construction of valid panel tests even when cross-section correlation invalidates pooling of statistics constructed using the observed data. The key to PANIC is consistent estimation of the components even when the regressions are individually spurious. We provide a rigorous theory for estimation and inference. In Monte Carlo simulations, the tests have very good size and power. PANIC is applied to a panel of inflation series.
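PANIC's core recipe can be sketched compactly: difference the panel, estimate the factors by principal components, re-cumulate, and test the common and idiosyncratic components separately. The simulation below is a minimal illustration under an assumed DGP and names (it uses statsmodels' ADF test), not the paper's full procedure.

```python
# Minimal PANIC-style sketch: estimate factors from first differences,
# re-cumulate to levels, and unit-root-test the common and idiosyncratic
# components separately rather than the observed series.
import numpy as np
from statsmodels.tsa.stattools import adfuller  # standard ADF test

rng = np.random.default_rng(4)
T, N = 200, 40

f = np.cumsum(rng.standard_normal(T))          # common I(1) factor
lam = rng.standard_normal(N)
e = rng.standard_normal((T, N))                # stationary idiosyncratic
X = np.outer(f, lam) + e

dX = np.diff(X, axis=0)                        # work in first differences
dXc = dX - dX.mean(axis=0)
u, s, vt = np.linalg.svd(dXc, full_matrices=False)
df_hat = u[:, 0] * s[0]                        # estimated factor, differenced
f_hat = np.concatenate([[0.0], np.cumsum(df_hat)])  # re-cumulate to levels

loadings = vt[0]
e_hat = X - X.mean(axis=0) - np.outer(f_hat, loadings)

print("ADF p-value, common factor:      ", round(adfuller(f_hat)[1], 3))
print("ADF p-value, one idiosyncratic e:", round(adfuller(e_hat[:, 0])[1], 3))
```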
Panel Data Models with Interactive Fixed Effects
2005
"... This paper considers large N and large T panel data models with unobservable multiple interactive effects. These models are useful for both micro and macro econometric modelings. In earnings studies, for example, workers ’ motivation, persistence, and diligence combined to influence the earnings in ..."
Cited by 125 (6 self)
This paper considers large N and large T panel data models with unobservable multiple interactive effects. These models are useful for both micro and macro econometric modeling. In earnings studies, for example, workers’ motivation, persistence, and diligence combine to influence earnings in addition to the usual argument of innate ability. In macroeconomics, the interactive effects represent unobservable common shocks and their heterogeneous responses over cross sections. Since the interactive effects are allowed to be correlated with the regressors, they are treated as fixed effects parameters to be estimated along with the common slope coefficients. The model is estimated by the least squares method, which provides the interactive-effects counterpart of the within estimator. We first consider model identification, and then derive the rate of convergence and the limiting distribution of the interactive-effects estimator of the common slope coefficients. The estimator is shown to be √NT-consistent. This rate is valid even in the presence of correlations and heteroskedasticity in both dimensions, a striking contrast with the fixed-T framework, in which serial correlation and heteroskedasticity imply a lack of identification. The asymptotic distribution is not necessarily centered at zero. Bias-corrected estimators are derived. We also derive the constrained estimator and its limiting distribution, imposing additivity coupled with interactive effects. The problem of testing additive versus interactive effects is also studied. We also derive identification conditions for models with grand mean, time-invariant regressors, and common regressors. It is shown that there exists a set of necessary and sufficient identification conditions for those models. Given identification, the rate of convergence and limiting results continue to hold. Key words and phrases: incidental parameters, additive effects, interactive effects, factor
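The least squares estimator here can be computed by a simple iteration: given the factors, estimate the slope by OLS; given the slope, re-estimate the factor structure from the residuals by principal components. Below is a minimal sketch on simulated data, with the number of factors treated as known and all names assumed.

```python
# Iterative least squares for interactive fixed effects: alternate between
# an OLS slope (given the factor structure) and a rank-r principal
# components fit of the residuals (given the slope).
import numpy as np

rng = np.random.default_rng(5)
N, T, r, beta = 60, 60, 1, 2.0

F = rng.standard_normal((T, r))                # common factors
L = rng.standard_normal((N, r))                # heterogeneous loadings
X = 0.5 * (L @ F.T) + rng.standard_normal((N, T))  # regressor corr. w/ effects
Y = beta * X + L @ F.T + rng.standard_normal((N, T))

b = 0.0
for _ in range(100):
    W = Y - b * X                              # concentrate out the slope
    u, s, vt = np.linalg.svd(W, full_matrices=False)
    common = (u[:, :r] * s[:r]) @ vt[:r]       # rank-r interactive effects
    b_new = np.sum(X * (Y - common)) / np.sum(X * X)
    if abs(b_new - b) < 1e-8:
        break
    b = b_new

print("interactive-effects estimate of beta:", round(b, 3))  # near 2.0
```

Naive OLS on this design is biased because X is correlated with the interactive effects; the iteration removes that correlation by fitting the rank-r common component jointly with the slope.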
Integer Factorization
2005
"... Many public key cryptosystems depend on the difficulty of factoring large integers. This thesis serves as a source for the history and development of integer factorization algorithms through time from trial division to the number field sieve. It is the first description of the number field sieve fro ..."
Cited by 123 (8 self)
Many public key cryptosystems depend on the difficulty of factoring large integers. This thesis serves as a source for the history and development of integer factorization algorithms through time, from trial division to the number field sieve. It is the first description of the number field sieve from an algorithmic point of view, making it available to computer scientists for implementation. I have implemented the general number field sieve from this description, and it is made publicly available on the Internet. This means that a reference implementation is available for future developers, which can also be used as a framework where some of the sub
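Of the methods the thesis surveys, trial division is the simplest and fits in a few lines. The sketch below is a generic textbook version for illustration, practical only for small inputs; it is not code from the thesis.

```python
# Trial division, the simplest of the factorization methods surveyed:
# divide out each candidate d up to sqrt(n); any remaining cofactor > 1
# must itself be prime.
def trial_division(n: int) -> list[int]:
    """Return the prime factorization of n as a sorted list of primes."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2               # after 2, test odd d only
    if n > 1:
        factors.append(n)                     # remaining cofactor is prime
    return factors

print(trial_division(5959))                   # [59, 101]
```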
Confidence intervals for diffusion index forecasts and inference for factor-augmented regressions
2003
"... We consider the situation when there is a large number of series, N,eachwithTob servations, and each series has some predictive ability for some variable of interest. A methodology of growing interest is first to estimate common factors from the panel of data by the method of principal components an ..."
Cited by 107 (12 self)
We consider the situation when there is a large number of series, N, each with T observations, and each series has some predictive ability for some variable of interest. A methodology of growing interest is first to estimate common factors from the panel of data by the method of principal components and then to augment an otherwise standard regression with the estimated factors. In this paper, we show that the least squares estimates obtained from these factor-augmented regressions are √T-consistent and asymptotically normal if √T/N → 0. The conditional mean predicted by the estimated factors is min{√T, √N}-consistent and asymptotically normal. Except when T/N goes to zero, inference should take into account the effect of “estimated regressors” on the estimated conditional mean. We present analytical formulas for prediction intervals that are valid regardless of the magnitude of N/T and that can also be used when the factors are nonstationary.
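The two-step procedure is straightforward to sketch: principal components first, factor-augmented least squares second. The following minimal example uses simulated data and a common normalization; all names and the DGP are assumptions of the sketch.

```python
# Two-step diffusion-index sketch: estimate factors from a large panel by
# principal components, then regress the target on the estimated factors.
import numpy as np

rng = np.random.default_rng(6)
T, N, r = 200, 100, 2

F = rng.standard_normal((T, r))                       # true factors
panel = F @ rng.standard_normal((r, N)) + rng.standard_normal((T, N))
y = F @ np.array([1.0, -0.5]) + 0.5 * rng.standard_normal(T)  # target

# Step 1: principal-components estimate of the factors.
u, s, _ = np.linalg.svd(panel - panel.mean(0), full_matrices=False)
F_hat = u[:, :r] * np.sqrt(T)                         # common normalization

# Step 2: factor-augmented regression and conditional mean.
Z = np.column_stack([np.ones(T), F_hat])
delta, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_fit = Z @ delta

r2 = 1 - ((y - y_fit) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("in-sample R^2:", round(r2, 3))
```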
International business cycles: world, region, and country-specific factors
The American Economic Review, 2003
"... Abstract: The paper investigates the common dynamic properties of business cycle fluctuations across countries, regions, and the world. We employ a Bayesian dynamic latent factor model to estimate common components in the main macroeconomic aggregates (output, consumption and investment) in a sixty- ..."
Cited by 103 (8 self)
The paper investigates the common dynamic properties of business cycle fluctuations across countries, regions, and the world. We employ a Bayesian dynamic latent factor model to estimate common components in the main macroeconomic aggregates (output, consumption, and investment) in a sixty-country sample covering seven regions of the world. In particular, we simultaneously estimate (i) a dynamic factor common to all aggregates, regions, and countries (the world factor); (ii) a set of seven regional dynamic factors common across aggregates within a region; (iii) sixty country factors to capture dynamic comovement across aggregates within each country; and (iv) a component for each aggregate that captures idiosyncratic dynamics. We decompose the volatility in each aggregate into the fraction due to the world, region, country, and idiosyncratic components. The results indicate that the world factor is an important source of volatility for aggregates in most countries, providing evidence for a world business cycle. We find that the region-specific factor plays only a minor role in explaining fluctuations in economic activity. While the world and regional factors together account for a larger share of fluctuations in output than in consumption, the country-specific and idiosyncratic components play much larger roles in explaining investment dynamics. We also explore how the three aggregates in each country relate to the world, region, and country factors, and document similarities and differences across regions, countries, and aggregates. We link the empirical results to the economic structures of the countries in the sample.
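A toy version of the variance decomposition makes the bookkeeping concrete. In the sketch below the world, region, and country components are known by construction (the paper instead estimates them with a Bayesian dynamic factor model), and the loadings are invented for illustration.

```python
# Toy variance decomposition: one country's output series driven by
# orthogonal world, region, country, and idiosyncratic components; the
# share of variance each level explains is the ratio of component
# variance to total variance.
import numpy as np

rng = np.random.default_rng(7)
T = 120
world = rng.standard_normal(T)
region = rng.standard_normal(T)
country = rng.standard_normal(T)

loads = {"world": 0.8, "region": 0.3, "country": 0.6, "idio": 0.5}
output = (loads["world"] * world + loads["region"] * region
          + loads["country"] * country + loads["idio"] * rng.standard_normal(T))

total = np.var(output)
for name, comp in [("world", loads["world"] * world),
                   ("region", loads["region"] * region),
                   ("country", loads["country"] * country)]:
    print(f"{name:8s} share = {np.var(comp) / total:.2f}")
```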