Nonparametric estimation of average treatment effects under exogeneity: a review
Review of Economics and Statistics, 2004
Cited by 597 (26 self)

Abstract:
Recently there has been a surge in econometric work focusing on estimating average treatment effects under various sets of assumptions. One strand of this literature has developed methods for estimating average treatment effects for a binary treatment under assumptions variously described as exogeneity, unconfoundedness, or selection on observables. The implication of these assumptions is that systematic (for example, average or distributional) differences in outcomes between treated and control units with the same values for the covariates are attributable to the treatment. Recent analysis has considered estimation and inference for average treatment effects under weaker assumptions than typical of the earlier literature by avoiding distributional and functional-form assumptions. Various methods of semiparametric estimation have been proposed, including estimating the unknown regression functions, matching, methods using the propensity score such as weighting and blocking, and combinations of these approaches. In this paper I review the state of this literature.
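As a rough illustration of the propensity-score weighting approach the abstract mentions, here is a minimal inverse-propensity-weighting (IPW) sketch, not the paper's own estimator. The function name, the known propensity score of 0.5, and the simulated data are all hypothetical choices for the example:

```python
import numpy as np

def ipw_ate(y, d, pscore):
    """Inverse-propensity-weighting estimate of the average treatment effect.

    y: outcomes; d: binary treatment indicator (0/1);
    pscore: estimated propensity scores P(D = 1 | X) for each unit.
    """
    y = np.asarray(y, dtype=float)
    d = np.asarray(d, dtype=float)
    e = np.asarray(pscore, dtype=float)
    # Weight treated outcomes by 1/e and control outcomes by 1/(1-e),
    # then average the difference across the sample.
    return np.mean(d * y / e - (1.0 - d) * y / (1.0 - e))

# Toy simulation: treatment assigned by a fair coin, so the true
# propensity score is 0.5 and the true ATE is 2 by construction.
rng = np.random.default_rng(0)
n = 10_000
d = rng.integers(0, 2, size=n)
y = 2.0 * d + rng.normal(size=n)
ate_hat = ipw_ate(y, d, np.full(n, 0.5))  # close to 2 up to sampling noise
```

In practice the propensity score would itself be estimated (e.g. by a flexible logit), which is where the semiparametric issues the review discusses come in; this sketch treats it as known to keep the weighting step visible.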
Randomized Experiments from Nonrandom Selection in the U.S. House Elections
Journal of Econometrics, 2008
Cited by 355 (18 self)

Abstract:
This paper establishes the relatively weak conditions under which causal inferences from a regression-discontinuity (RD) analysis can be as credible as those from a randomized experiment, and hence under which the validity of the RD design can be tested by examining whether or not there is a discontinuity in any predetermined (or "baseline") variables at the RD threshold. Specifically, consider a standard treatment evaluation problem in which treatment is assigned to an individual if and only if V > v0, where v0 is a known threshold and V is observable. V can depend on the individual's characteristics and choices, but there is also a random chance element: for each individual, there exists a well-defined probability distribution for V. The density function, allowed to differ arbitrarily across the population, is assumed to be continuous. It is formally established that treatment status here is as good as randomized in a local neighborhood of V = v0. These ideas are illustrated in an analysis of U.S. House elections, where the inherent uncertainty in the final vote count is plausible, implying that the winning party is essentially randomized among elections decided by a narrow margin. The evidence is consistent with this prediction, which is then used to generate "near-experimental" causal estimates of the electoral advantage to incumbency.
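The local-randomization idea above can be sketched as a simple comparison of mean outcomes just above and just below the cutoff. This is a minimal local-means version, not the paper's estimator; the bandwidth, function names, and simulated data are hypothetical:

```python
import numpy as np

def rd_local_diff(v, y, v0, h):
    """Naive RD estimate: difference in mean outcomes for units with
    running variable just above vs. just below the cutoff v0, restricted
    to a bandwidth h on each side."""
    v = np.asarray(v, dtype=float)
    y = np.asarray(y, dtype=float)
    above = (v > v0) & (v <= v0 + h)   # barely treated (V > v0)
    below = (v <= v0) & (v >= v0 - h)  # barely untreated
    return y[above].mean() - y[below].mean()

# Toy simulation: v plays the role of a vote margin; crossing 0 assigns
# treatment (winning), with a true discontinuity of 1.5 at the cutoff.
rng = np.random.default_rng(1)
n = 50_000
v = rng.uniform(-1.0, 1.0, size=n)
treat = (v > 0.0).astype(float)
y = 1.5 * treat + 0.5 * v + rng.normal(scale=0.5, size=n)
effect_hat = rd_local_diff(v, y, v0=0.0, h=0.05)  # near 1.5 for small h
```

Shrinking the bandwidth h reduces the bias from the smooth 0.5*v trend at the cost of more sampling noise, which is the usual RD bandwidth trade-off; applied work typically uses local-linear regression on each side rather than raw means.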