Results 1–10 of 24,544
Rates of convergence in conditional covariance matrix estimation
Abstract
1 Institut de Mathématiques de Toulouse. 2 Institut National des Sciences Appliquées de Toulouse. Let X ∈ R^p and Y ∈ R be two random variables. In this paper we are interested in the estimation of the conditional covariance matrix Cov(E[X | Y]). To this end, we will use a plug-in kernel-based algorithm.
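The plug-in idea described in the abstract can be sketched as: estimate the conditional mean E[X | Y] with a kernel smoother, then take the empirical covariance of the fitted values. This is a minimal illustration of the general technique, not the paper's estimator; the Gaussian kernel, the bandwidth `h`, and the simulated data are assumptions.

```python
import numpy as np

def nw_cond_mean(X, Y, h):
    """Nadaraya-Watson estimate of E[X | Y = y_i] at each sample point."""
    K = np.exp(-0.5 * ((Y[:, None] - Y[None, :]) / h) ** 2)  # Gaussian kernel
    W = K / K.sum(axis=1, keepdims=True)                     # smoothing weights
    return W @ X                                             # (n, p) fitted means

def plugin_cond_cov(X, Y, h=0.3):
    """Plug-in estimate of Cov(E[X | Y]): covariance of the fitted means."""
    M = nw_cond_mean(X, Y, h)
    Mc = M - M.mean(axis=0)
    return Mc.T @ Mc / len(Y)

# toy data where E[X | Y] = (Y, -Y), so Cov(E[X | Y]) = [[1, -1], [-1, 1]]
rng = np.random.default_rng(3)
Y = rng.normal(size=2000)
X = np.column_stack([Y, -Y]) + rng.normal(size=(2000, 2))
S = plugin_cond_cov(X, Y)
```

With enough samples and a reasonable bandwidth, `S` should be close to the true matrix [[1, -1], [-1, 1]].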
Adaptation of the optimal fingerprint method for climate change detection using a well-conditioned covariance matrix estimate
Large Scale Conditional Covariance Matrix Modeling, Estimation and Testing. University of California at San Diego working paper
, 1994
Abstract
A new representation of the diagonal Vech model is given using the Hadamard product. Sufficient conditions on parameter matrices are provided to ensure the positive definiteness of covariance matrices from the new representation. Based on this, some new and simple models are discussed. A set of diag ...
Cited by 37 (2 self)
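The positive-definiteness conditions mentioned here rest on properties of the Hadamard product; in particular, by the Schur product theorem the entrywise product of two positive semidefinite matrices is again positive semidefinite. A quick numerical check (the matrices below are illustrative only, not the paper's model):

```python
import numpy as np

# Minimal check of the Schur product theorem: the Hadamard (entrywise)
# product of two positive semidefinite matrices is positive semidefinite.
rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
C = rng.normal(size=(5, 5))
A1 = B @ B.T              # PSD by construction
A2 = C @ C.T              # PSD by construction
H = A1 * A2               # Hadamard product
eigs = np.linalg.eigvalsh(H)
```

All eigenvalues of `H` are (numerically) nonnegative, which is the property Vech-type covariance recursions exploit.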
A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity
, 1980
Abstract
This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal model of the structure of the heteroskedasticity. By comparing the elements of the new estimator ...
Cited by 3211 (5 self)
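The estimator described (often called the White or HC0 estimator) replaces the homoskedastic variance σ²(X'X)⁻¹ with a "sandwich" built from squared OLS residuals. A minimal sketch assuming a standard OLS setup; the simulated heteroskedastic data are illustrative:

```python
import numpy as np

def hc0_cov(X, y):
    """HC0 sandwich estimator: (X'X)^-1 X' diag(e^2) X (X'X)^-1."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS coefficients
    e = y - X @ b                               # residuals
    XtX_inv = np.linalg.inv(X.T @ X)            # "bread"
    meat = X.T @ (X * e[:, None] ** 2)          # X' diag(e^2) X
    return XtX_inv @ meat @ XtX_inv

# data whose noise variance grows with |x|: OLS is fine, classical SEs are not
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200) * (1 + np.abs(X[:, 1]))
V = hc0_cov(X, y)
```

No model of the heteroskedasticity is needed, which is exactly the point made in the abstract.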
Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification
 Psychological Methods
, 1998
Abstract
This study evaluated the sensitivity of maximum likelihood (ML), generalized least squares (GLS), and asymptotic distribution-free (ADF) based fit indices to model misspecification, under conditions that varied sample size and distribution. The effect of violating assumptions of asymptotic robustn ...
Cited by 543 (0 self)
Exact Matrix Completion via Convex Optimization
, 2008
Abstract
We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfe ...
... by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold ...
Cited by 873 (26 self)
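One simple way to approximate the nuclear-norm program mentioned in the abstract is iterative singular-value soft-thresholding (a "soft-impute" style scheme). This is a sketch of the general technique, not the paper's algorithm; the threshold `lam`, the iteration count, and the sampling rate are arbitrary choices:

```python
import numpy as np

def soft_impute(M_obs, mask, lam=0.5, n_iter=200):
    """Low-rank completion: fill missing entries, shrink singular values, repeat."""
    Z = np.zeros_like(M_obs)
    for _ in range(n_iter):
        Y = np.where(mask, M_obs, Z)            # keep observed entries fixed
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - lam, 0)              # soft-threshold singular values
        Z = (U * s) @ Vt                        # current low-rank estimate
    return Z

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))  # rank-2 target
mask = rng.random((30, 30)) < 0.6                        # observe ~60% of entries
Z = soft_impute(np.where(mask, A, 0.0), mask)
err = np.linalg.norm(Z - A) / np.linalg.norm(A)
```

For a low-rank target and this sampling rate, the relative error `err` is small even though 40% of the entries were never seen.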
High dimensional graphs and variable selection with the Lasso
 ANNALS OF STATISTICS
, 2006
Abstract
The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a ...
Cited by 736 (22 self)
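Neighborhood selection can be sketched as: Lasso-regress each variable on all the others and connect variable j to the variables that receive nonzero coefficients. The coordinate-descent Lasso below is a minimal stand-in for a library solver, and the "OR" rule for combining neighborhoods, the penalty `lam`, and the toy data are assumptions:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent Lasso for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]                      # partial residual
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam * n, 0) / col_sq[j]
    return b

def neighborhood_edges(X, lam=0.1):
    """Edge (j, k) if either Lasso neighborhood selects the other variable."""
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        b = lasso_cd(X[:, others], X[:, j], lam)
        adj[j, others] = b != 0
    return adj | adj.T                                          # "OR" rule

# three variables: x2 depends on x0, x1 is independent noise
rng = np.random.default_rng(4)
n = 2000
x0 = rng.normal(size=n)
x1 = rng.normal(size=n)
x2 = x0 + 0.3 * rng.normal(size=n)
adj = neighborhood_edges(np.column_stack([x0, x1, x2]))
```

The recovered graph links x0 and x2 but leaves x1 isolated, matching the conditional-independence structure.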
Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics
 J. Geophys. Res
, 1994
Abstract
A new sequential data assimilation method is discussed. It is based on forecasting the error statistics using Monte Carlo methods, a better alternative than solving the traditional and computationally extremely demanding approximate error covariance equation used in the extended Kalman filter. The ...
... covariance equation are avoided because storage and evolution of the error covariance matrix itself are not needed. The results are also better than what is provided by the extended Kalman filter since there is no closure problem and the quality of the forecast error statistics therefore improves. The method ...
Cited by 800 (23 self)
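The Monte Carlo idea can be sketched as an ensemble analysis step: the forecast error covariance is never formed as an explicit matrix equation but is represented by ensemble anomalies. A minimal perturbed-observation update in the spirit of ensemble Kalman filtering, assuming a linear observation operator `H`; all dimensions and noise levels here are illustrative:

```python
import numpy as np

def enkf_update(ens, H, y, R, rng):
    """ens: (n_state, n_ens) forecast ensemble; H: obs operator; y: observation."""
    n_ens = ens.shape[1]
    A = ens - ens.mean(axis=1, keepdims=True)       # ensemble anomalies
    HA = H @ A
    P_yy = HA @ HA.T / (n_ens - 1) + R              # innovation covariance
    P_xy = A @ HA.T / (n_ens - 1)                   # state-obs cross covariance
    K = P_xy @ np.linalg.inv(P_yy)                  # Kalman gain from the ensemble
    # one perturbed observation per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return ens + K @ (Y - H @ ens)

rng = np.random.default_rng(5)
ens = rng.normal(size=(2, 200))                     # 2-state forecast ensemble
H = np.array([[1.0, 0.0]])                          # observe first component only
post = enkf_update(ens, H, np.array([3.0]), np.array([[0.1]]), rng)
```

After the update, the observed component's ensemble mean moves toward the observation and its spread shrinks, all without storing or evolving a covariance matrix.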
The Central Role of the Propensity Score in Observational Studies for Causal Effects.
 Biometrika
, 1983
Abstract
The propensity score is the conditional probability of assignment to a particular treatment given a vector of observed covariates. Both large and small sample theory show that adjustment for the scalar propensity score is sufficient to remove bias due to all observed covariates. Application ...
Cited by 2779 (26 self)
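A common way to adjust with the propensity score is inverse-probability weighting: fit a model for P(treatment | covariates), then reweight outcomes by the inverse of the estimated score. A minimal sketch with a hand-rolled logistic regression; the simulated data (true effect 2.0) and the Hajek-style weighted means are illustrative choices, not the paper's procedure:

```python
import numpy as np

def fit_logistic(X, t, n_iter=25):
    """Newton-Raphson (IRLS) logistic regression, no regularization."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1 - p))[:, None])      # Hessian
        b += np.linalg.solve(H, X.T @ (t - p))      # Newton step
    return b

# confounded data: the covariate x drives both treatment and outcome
rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
t = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(float)
y = 2.0 * t + 1.5 * x + rng.normal(size=n)          # true treatment effect = 2

e = 1 / (1 + np.exp(-X @ fit_logistic(X, t)))       # estimated propensity score
w = t / e + (1 - t) / (1 - e)                       # inverse-probability weights
ate = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
```

The raw treated-minus-control difference is badly biased by the confounder; the weighted contrast `ate` recovers the true effect of about 2.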
Axiomatic quantum field theory in curved spacetime
, 2008
Abstract
The usual formulations of quantum field theory in Minkowski spacetime make crucial use of features—such as Poincaré invariance and the existence of a preferred vacuum state—that are very special to Minkowski spacetime. In order to generalize the formulation of quantum field theory to arbitrary globa ...
... and covariantly constructed from the spacetime metric), a microlocal spectrum condition, an "associativity" condition, and the requirement that the coefficient of the identity in the OPE of the product of a field with its adjoint have positive scaling degree. We prove curved spacetime versions ...
Cited by 689 (18 self)