Results 1 - 10 of 8,745
High dimensional covariance matrix estimation using a factor model
- Princeton Univ, 2008
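The factor-model approach named in this entry's title estimates a high-dimensional covariance matrix as a low-rank part plus a diagonal. A minimal numpy sketch, assuming principal components as stand-in factors (the function name and the PCA choice are illustrative, not the paper's own estimator):

```python
import numpy as np

def factor_model_cov(X, k):
    """Covariance estimate with a k-factor structure:
    Sigma = B Cov(f) B' + diag(residual variances).
    Factors are taken as the top-k principal components here,
    an illustrative stand-in for observable factors."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # top-k principal component scores as factor estimates
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = U[:, :k] * s[:k]                           # n x k factor scores
    B = np.linalg.lstsq(F, Xc, rcond=None)[0].T    # p x k loadings
    resid = Xc - F @ B.T                           # idiosyncratic part
    Sigma_f = np.atleast_2d(np.cov(F, rowvar=False))
    return B @ Sigma_f @ B.T + np.diag(resid.var(axis=0, ddof=1))
```

Because the diagonal term is strictly positive, the estimate stays positive definite even when p exceeds n.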
Variance swaps
, 2013
"... 2 High-dimensional covariance matrix estimators; 2.1 Literature review; 2.2 Theoretical outline ..."
High dimensional graphs and variable selection with the Lasso
- ANNALS OF STATISTICS, 2006
"... The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs. Neighborhood selection estimates the conditional independence restrictions separately for each node in the graph and is hence equivalent to variable selection for Gaussian linear models. ..."
Cited by 736 (22 self)
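The neighborhood-selection idea in this abstract can be sketched directly: regress each variable on all others with the Lasso and read edges off the nonzero coefficients. A self-contained numpy sketch (the coordinate-descent solver, penalty scaling, and AND rule for combining directions are my own illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Plain coordinate-descent Lasso for (1/2)||y - Xb||^2 + n*lam*||b||_1
    (illustrative, not optimized)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - n * lam, 0.0) / col_sq[j]
    return beta

def neighborhood_graph(X, lam):
    """Edge (i, j) kept only if each variable selects the other (AND rule)."""
    p = X.shape[1]
    sel = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        beta = lasso_cd(X[:, others], X[:, j], lam)
        sel[j, others] = beta != 0
    return sel & sel.T
```

Each node's regression is an ordinary Lasso variable-selection problem, which is what makes the approach scale to sparse high-dimensional graphs.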
Nonparametric Stein-type Shrinkage Covariance Matrix Estimators in High-Dimensional Settings
"... Estimating a covariance matrix is an important task in applications where the number of variables is larger than the number of observations. In the literature, shrinkage approaches for estimating a high-dimensional covariance matrix are employed to circumvent the limitations of the sample covariance ..."
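The Stein-type shrinkage idea in this abstract amounts to a convex combination of the sample covariance and a simple target. A minimal numpy sketch, assuming a scaled-identity target and a user-supplied shrinkage intensity (a data-driven intensity, as in the paper, is not implemented here):

```python
import numpy as np

def shrinkage_cov(X, lam):
    """Convex combination of the sample covariance and a scaled
    identity target: (1 - lam) * S + lam * (tr(S)/p) * I.
    lam in [0, 1] would be estimated from the data in a
    Ledoit-Wolf-style estimator; here it is a fixed constant."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / (n - 1)                 # sample covariance
    target = np.trace(S) / p * np.eye(p)    # well-conditioned target
    return (1 - lam) * S + lam * target
```

Even when p > n and S is singular, the shrunk estimate is positive definite, which is exactly the limitation of the sample covariance the abstract refers to.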
A Heteroskedasticity-Consistent Covariance Matrix Estimator And A Direct Test For Heteroskedasticity
, 1980
"... This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal model of the structure of the heteroskedasticity. By comparing the elements of the new estimator ..."
Cited by 3211 (5 self)
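The estimator this entry describes is the well-known sandwich form (X'X)^-1 X' diag(e_i^2) X (X'X)^-1 built from OLS residuals. A minimal numpy sketch (the function name is mine; this is the basic HC0 variant without finite-sample corrections):

```python
import numpy as np

def hc0_cov(X, y):
    """Heteroskedasticity-consistent covariance estimator for OLS
    coefficients: (X'X)^-1 X' diag(e_i^2) X (X'X)^-1, requiring no
    model of the heteroskedasticity's structure."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta                       # OLS residuals
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = X.T @ (e[:, None] ** 2 * X)     # X' diag(e^2) X
    return XtX_inv @ meat @ XtX_inv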
Covariance Matrix Estimation in Time Series
, 2011
"... Covariances play a fundamental role in the theory of time series and they are critical quantities that are needed in both spectral and time domain analysis. Estimation of covariance matrices is needed in the construction of confidence regions for unknown parameters, hypothesis testing, principal component analysis, prediction, discriminant analysis among others. In this paper we consider both low- and high-dimensional covariance matrix estimation problems and present a review for asymptotic properties of sample covariances and covariance matrix estimates. ..."
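The sample covariances this abstract refers to are, for a stationary series, the lagged autocovariances, and a Toeplitz matrix built from them is the most basic covariance-matrix estimate. A short numpy sketch (function names are mine; the paper surveys far more refined estimators):

```python
import numpy as np

def sample_autocovariance(x, max_lag):
    """gamma_hat(k) = (1/n) sum_t (x_t - xbar)(x_{t+k} - xbar),
    the building blocks of a stationary series' covariance matrix."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    return np.array([xc[:n - k] @ xc[k:] / n for k in range(max_lag + 1)])

def toeplitz_cov(x, max_lag):
    """Toeplitz covariance estimate for (x_1, ..., x_{max_lag+1})."""
    g = sample_autocovariance(x, max_lag)
    m = max_lag + 1
    return np.array([[g[abs(i - j)] for j in range(m)] for i in range(m)])
```

The 1/n normalization (rather than 1/(n-k)) is the conventional choice because it keeps the resulting Toeplitz matrix positive semidefinite.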
Group Lasso Estimation of High-dimensional Covariance Matrices
- Jérémie Bigot, Institut de Mathématiques de Toulouse
"... In this paper, we consider the Group Lasso estimator of the covariance matrix of a stochastic process corrupted by an additive noise. We propose to estimate the covariance matrix in a high-dimensional setting under the assumption that the process has a sparse representation in a large dictionary of basis functions. Using a matrix regression model, we propose a new methodology for high-dimensional covariance matrix estimation based on empirical contrast regularization by a group Lasso penalty. Using such a penalty, the method selects a sparse set of basis functions in the dictionary ..."
Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics
- J. Geophys. Res., 1994
"... A new sequential data assimilation method is discussed. It is based on forecasting the error statistics using Monte Carlo methods, a better alternative than solving the traditional and computationally extremely demanding approximate error covariance equation used in the extended Kalman filter. ... covariance equation are avoided because storage and evolution of the error covariance matrix itself are not needed. The results are also better than what is provided by the extended Kalman filter since there is no closure problem and the quality of the forecast error statistics therefore improves. ..."
Cited by 800 (23 self)
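The Monte Carlo idea in this abstract replaces explicit evolution of the error covariance matrix with an ensemble of model states: propagate each member through the model, then estimate the covariance from the ensemble spread. A minimal numpy sketch (function names, the model interface, and the additive model-noise term are illustrative assumptions):

```python
import numpy as np

def ensemble_covariance(ensemble):
    """Sample error covariance from an ensemble of model states
    (columns are ensemble members), used in place of explicitly
    storing and evolving the covariance matrix."""
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    m = ensemble.shape[1]
    return anomalies @ anomalies.T / (m - 1)

def forecast_step(ensemble, model, rng, q_std):
    """Monte Carlo forecast of error statistics: propagate each member
    through the (possibly nonlinear) model, add model noise, and the
    covariance can then be re-estimated from the new ensemble."""
    propagated = np.stack(
        [model(ensemble[:, i]) for i in range(ensemble.shape[1])], axis=1
    )
    return propagated + q_std * rng.standard_normal(propagated.shape)
```

Only the ensemble itself is stored, so the memory cost scales with state dimension times ensemble size rather than with the square of the state dimension.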
New results in linear filtering and prediction theory
- TRANS. ASME, SER. D, J. BASIC ENG, 1961
"... A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary statistics. ..."
Cited by 607 (0 self)
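The paper derives the continuous-time "variance equation"; its discrete-time analogue is the Riccati recursion for the filtering error covariance. A minimal numpy sketch (the function name and the predict/update split are standard conventions, not the paper's continuous formulation):

```python
import numpy as np

def riccati_step(P, A, H, Q, R):
    """One discrete-time Riccati recursion for the filtering error
    covariance: time update, then measurement update."""
    P_pred = A @ P @ A.T + Q                       # time update
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    return (np.eye(P.shape[0]) - K @ H) @ P_pred   # posterior covariance
```

Iterating the recursion to a fixed point gives the steady-state error covariance, mirroring the infinite-interval case the abstract mentions.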
How much should we trust differences-in-differences estimates?
, 2003
"... Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female ... into account the auto-correlation of the data) works well when the number of states is large enough. Two corrections based on asymptotic approximation of the variance-covariance matrix work well for moderate numbers of states and one correction that collapses the time series information into a “pre” and “post” ..."
Cited by 828 (1 self)
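One standard variance-covariance correction for serially correlated outcomes of the kind this abstract discusses is the cluster-robust sandwich estimator, which sums scores within each state before forming the "meat". A minimal numpy sketch (the function name is mine; this is the basic cluster-robust form without small-sample adjustments, not the paper's specific corrections):

```python
import numpy as np

def cluster_robust_cov(X, y, groups):
    """Cluster-robust sandwich covariance for OLS, allowing arbitrary
    correlation within each cluster (e.g. within each state):
    (X'X)^-1 [sum_g X_g' e_g e_g' X_g] (X'X)^-1."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta                   # OLS residuals
    XtX_inv = np.linalg.inv(X.T @ X)
    p = X.shape[1]
    meat = np.zeros((p, p))
    for g in np.unique(groups):
        idx = groups == g
        s = X[idx].T @ e[idx]          # score summed within cluster g
        meat += np.outer(s, s)
    return XtX_inv @ meat @ XtX_inv
```

Because the within-cluster scores are summed before squaring, arbitrary autocorrelation inside a cluster is absorbed, at the cost of requiring enough clusters for the asymptotics to bite.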