Nonparametric estimation of average treatment effects under exogeneity: a review
Review of Economics and Statistics, 2004
Cited by 597 (26 self)
Abstract:
Recently there has been a surge in econometric work focusing on estimating average treatment effects under various sets of assumptions. One strand of this literature has developed methods for estimating average treatment effects for a binary treatment under assumptions variously described as exogeneity, unconfoundedness, or selection on observables. The implication of these assumptions is that systematic (for example, average or distributional) differences in outcomes between treated and control units with the same values for the covariates are attributable to the treatment. Recent analysis has considered estimation and inference for average treatment effects under weaker assumptions than typical of the earlier literature by avoiding distributional and functional-form assumptions. Various methods of semiparametric estimation have been proposed, including estimating the unknown regression functions, matching, methods using the propensity score such as weighting and blocking, and combinations of these approaches. In this paper I review the state of this
Efficient Estimation of Average Treatment Effects Using the Estimated Propensity Score
2000
Cited by 398 (35 self)
Abstract:
We are interested in estimating the average effect of a binary treatment on a scalar outcome. If assignment to the treatment is independent of the potential outcomes given pretreatment variables, biases associated with simple treatment-control average comparisons can be removed by adjusting for differences in the pretreatment variables. Rosenbaum and Rubin (1983, 1984) show that adjusting solely for differences between treated and control units in a scalar function of the pretreatment variables, the propensity score, also removes the entire bias associated with differences in pretreatment variables. Thus it is possible to obtain unbiased estimates of the treatment effect without conditioning on a possibly high-dimensional vector of pretreatment variables. Although adjusting for the propensity score removes all the bias, this can come at the expense of efficiency. We show that weighting with the inverse of a nonparametric estimate of the propensity score, rather than the true propensity score...
Large Sample Properties of Matching Estimators for Average Treatment Effects
Econometrica 74, 235–267, 2006
Cited by 311 (20 self)
Abstract:
Matching estimators for average treatment effects are widely used in evaluation research despite the fact that their large sample properties have not been established in many cases. The absence of formal results in this area may be partly due to the fact that standard asymptotic expansions do not apply to matching estimators with a fixed number of matches because such estimators are highly nonsmooth functionals of the data. In this article we develop new methods for analyzing the large sample properties of matching estimators and establish a number of new results. We focus on matching with replacement with a fixed number of matches. First, we show that matching estimators are not N^{1/2}-consistent in general and describe conditions under which matching estimators do attain N^{1/2}-consistency. Second, we show that even in settings where matching estimators are N^{1/2}-consistent, simple matching estimators with a fixed number of matches do not attain the semiparametric efficiency bound. Third, we provide a consistent estimator for the large sample variance that does not require consistent nonparametric estimation of unknown functions. Software for implementing these methods is available in Matlab, Stata, and R.
Adjusting for nonignorable dropout using semiparametric nonresponse models (with discussion)
Journal of the American Statistical Association, 1999
Cited by 115 (14 self)
Abstract:
Consider a study whose design calls for the study subjects to be followed from enrollment (time t = 0) to time t = T, at which point a primary endpoint of interest Y is to be measured. The design of the study also calls for measurements on a vector V(t) of covariates to be made at one or more times t during the interval [0, T). We are interested in making inferences about the marginal mean µ0 of Y when some subjects drop out of the study at random times Q prior to the common fixed end-of-follow-up time T. The purpose of this article is to show how to make inferences about µ0 when the continuous dropout time Q is modeled semiparametrically and no restrictions are placed on the joint distribution of the outcome and other measured variables. In particular, we consider two models for the conditional hazard of dropout given (V̄(T), Y), where V̄(t) denotes the history of the process V(t) through time t, t ∈ [0, T). In the first model, we assume that λQ(t | V̄(T), Y) = λ0(t | V̄(t)) exp(α0 Y), where α0 is a scalar parameter and λ0(t | V̄(t)) is an unrestricted positive function of t and the process V̄(t). When the process V̄(t) is high dimensional, estimation in this model is not feasible with moderate sample sizes, due to the curse of dimensionality. For such situations, we consider a second model that imposes the additional restriction that λ0(t | V̄(t)) = λ0(t) exp(γ0′ W(t)), where λ0(t) is an unspecified baseline hazard function, W(t) = w(t, V̄(t)), w(·, ·) is a known function that maps (t, V̄(t)) to R^q, and γ0 is a q × 1 unknown parameter vector. When α0 ≠ 0, dropout is nonignorable. On account of identifiability problems, joint estimation of the mean µ0 of Y and the selection bias parameter α0 may be difficult or impossible. Therefore, we propose regarding the selection bias parameter α0 as known, rather than estimating it from the data.
We then perform a sensitivity analysis to see how inference about µ0 changes as we vary α0 over a plausible range of values. We apply our approach to the analysis of ACTG 175, an AIDS clinical trial. KEY WORDS: Augmented inverse probability of censoring weighted estimators; Cox proportional hazards model; Identification;
Estimation of Causal Effects Using Propensity Score Weighting: An Application to Data on Right Heart Catheterization
Health Services and Outcomes Research Methodology, 2001
Cited by 80 (3 self)
Abstract:
We consider methods for estimating causal effects of treatments when treatment assignment is unconfounded with outcomes conditional on a possibly large set of covariates. Robins and Rotnitzky (1995) suggested combining regression adjustment with weighting based on the propensity score (Rosenbaum and Rubin, 1983). We adopt this approach, allowing for a flexible specification of both the propensity score and the regression function. We apply these methods to data on the effects of right heart catheterization (RHC) studied in Connors et al. (1996), and we find that our estimator gives stable estimates over a wide range of values for the two parameters governing the selection of variables.
Optimal Structural Nested Models for Optimal Sequential Decisions
In Proceedings of the Second Seattle Symposium on Biostatistics, 2004
Cited by 60 (5 self)
Abstract:
ABSTRACT: I describe two new methods for estimating the optimal treatment regime (equivalently, protocol, plan, or strategy) from very high dimensional observational and experimental data: (i) g-estimation of an optimal double-regime structural nested mean model (drSNMM) and (ii) g-estimation of a standard single-regime SNMM combined with sequential dynamic-programming (DP) regression. These methods are compared to certain regression methods found in the sequential decision and reinforcement learning literatures and to the regret modelling methods of Murphy (2003). I consider both Bayesian and frequentist inference. In particular, I propose a novel “Bayes-frequentist compromise” that combines honest subjective non- or semiparametric Bayesian inference with good frequentist behavior, even in cases where the model is so large and the likelihood function so complex that standard (uncompromised) Bayes procedures have poor frequentist performance.
The interplay of Bayesian and frequentist analysis
Statistical Science, 2004
Cited by 48 (0 self)
Abstract:
Statistics has struggled for nearly a century over the issue of whether the Bayesian or frequentist paradigm is superior. This debate is far from over and, indeed, should continue, since there are fundamental philosophical and pedagogical issues at stake. At the methodological level, however, the fight has become considerably muted, with the recognition that each approach has a great deal to contribute to statistical practice and each is actually essential for full development of the other approach. In this article, we embark upon a rather idiosyncratic walk through some of these issues. Key words and phrases: Admissibility; Bayesian model checking; conditional frequentist; confidence intervals; consistency; coverage; design; hierarchical models; nonparametric
Maternal employment and child development: A fresh look using newer methods
Developmental Psychology, 2005
Cited by 42 (1 self)
Abstract:
The employment rate for mothers with young children has increased dramatically over the past 25 years. Estimating the effects of maternal employment on children’s development is challenged by selection bias and the missing data endemic to most policy research. To address these issues, this study uses propensity score matching and multiple imputation. The authors compare outcomes across 4 maternal employment patterns: no work in first 3 years post-birth, work only after 1st year, part-time work in 1st year, and full-time work in 1st year. Our results demonstrate small but significant negative effects of maternal employment on children’s cognitive outcomes for full-time employment in the 1st year post-birth as compared with employment postponed until after the 1st year. Multiple imputation yields noticeably different estimates as compared with a complete case approach for many measures. Differences between results from propensity score approaches and regression modeling are often minimal.
The Econometrics of DSGE Models
2009
Cited by 31 (1 self)
Abstract:
In this paper, I review the literature on the formulation and estimation of dynamic stochastic general equilibrium (DSGE) models with a special emphasis on Bayesian methods. First, I discuss the evolution of DSGE models over the last couple of decades. Second, I explain why the profession has decided to estimate these models using Bayesian methods. Third, I briefly introduce some of the techniques required to compute and estimate these models. Fourth, I illustrate the techniques under consideration by estimating a benchmark DSGE model with real and nominal rigidities. I conclude by offering some pointers for future research.