Results 1–10 of 60
Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference
 Political Analysis
, 2007
Abstract

Cited by 315 (44 self)
Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author’s favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological ...
Matching Methods for Causal Inference: A Review and a Look Forward
Abstract

Cited by 89 (1 self)
When estimating causal effects using observational data, it is desirable to replicate a randomized experiment as closely as possible by obtaining treated and control groups with similar covariate distributions. This goal can often be achieved by choosing well-matched samples of the original treated and control groups, thereby reducing bias due to the covariates. Since the 1970s, work on matching methods has examined how to best choose treated and control subjects for comparison. Matching methods are gaining popularity in fields such as economics, epidemiology, medicine, and political science. However, until now the literature and related advice have been scattered across disciplines. Researchers who are interested in using matching methods—or developing methods related to matching—do not have a single place to turn to learn about past and current research. This paper provides a structure for thinking about matching methods and guidance on their use, coalescing the existing research (both old and new) and providing a summary of where the literature on matching methods is now and where it should be headed. Key words and phrases: Observational study, propensity scores, subclassification, weighting.
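As a concrete illustration of the matching idea this review surveys, here is a minimal sketch of greedy 1:1 nearest-neighbor matching on a precomputed propensity score. The function name, the caliper value, and the scores are hypothetical assumptions for illustration, not taken from the paper:

```python
def greedy_match(treated_ps, control_ps, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on the propensity score.

    Each treated unit, in turn, takes the closest not-yet-used control
    whose score lies within the caliper. Simple, but not optimal: an
    early match can take a control that a later treated unit needed.
    """
    used = set()
    pairs = {}
    for i, pt in enumerate(treated_ps):
        best_j, best_d = None, caliper
        for j, pc in enumerate(control_ps):
            if j in used:
                continue
            d = abs(pt - pc)
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs[i] = best_j
    return pairs

# Hypothetical fitted propensity scores for 2 treated and 3 controls.
pairs = greedy_match([0.30, 0.62], [0.28, 0.65, 0.90])
```

With these scores, treated unit 0 pairs with control 0 and treated unit 1 with control 1; control 2 (score 0.90) is left unmatched because it falls outside every caliper.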
Attributing Effects to a Cluster-Randomized Get-Out-the-Vote Campaign.” Working Paper
, 2008
Abstract

Cited by 29 (7 self)
In a landmark study of political participation, A. Gerber and D. Green (2000) experimentally compared the effectiveness of various get-out-the-vote interventions. The study was well-powered, conducted not in a lab but under field conditions, in the midst of a Congressional campaign; it used random assignment, in a field where randomization had been rare. As Fisher (1935) showed long ago, inferences from randomized designs can be essentially assumption-free, making them uniquely suited to settle scientific debates. This study, however, prompted a contentious new debate after Imai (2005) tested and rejected the randomization model for Gerber and Green’s data. His alternate methodology reaches substantive conclusions contradicting those of Gerber and Green. It has since become clear that the experiment’s apparent lapses can be ascribed to clustered treatment assignment, rather than failures of randomization; it had randomized households, not individuals. What remains to be clarified is how this structure could have been accommodated by an analysis as sparing with ...
Why We (Usually) Don’t Have to Worry About Multiple Comparisons
, 2008
Abstract

Cited by 29 (7 self)
The problem of multiple comparisons can disappear when viewed from a Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise. These address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low group-level variation, which is where multiple comparisons are a particular concern. Multilevel models perform partial pooling (shifting estimates toward each other), whereas classical procedures typically keep the centers of intervals stationary, adjusting for multiple comparisons by making the intervals wider (or, equivalently, adjusting the p-values corresponding to intervals of fixed width). Multilevel estimates make comparisons more conservative, in the sense that intervals for comparisons are more likely to include zero; as a result, those comparisons that are made with confidence are more likely to be valid.
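The partial pooling described above can be sketched for the simplest normal-normal case: each group estimate is shrunk toward the grand mean by a factor that depends on its standard error. The function name, the between-group variance `tau2`, and all numeric values are illustrative assumptions, not from the paper:

```python
import statistics

def partial_pool(group_means, group_ses, tau2):
    """Shrink each group mean toward the grand mean.

    Uses the standard normal-normal shrinkage factor
    tau^2 / (tau^2 + se_j^2): groups measured noisily (large se_j)
    are pulled harder toward the grand mean, while precisely
    measured groups keep most of their raw estimate.
    """
    grand = statistics.fmean(group_means)
    return [
        grand + (tau2 / (tau2 + se ** 2)) * (m - grand)
        for m, se in zip(group_means, group_ses)
    ]

# Three hypothetical group estimates; the middle one is much noisier.
pooled = partial_pool([0.0, 5.0, 10.0], [1.0, 3.0, 1.0], tau2=4.0)
```

Here the precise groups (se = 1) shrink from 0.0 and 10.0 to 1.0 and 9.0, pulling the extreme estimates toward each other exactly as the abstract describes, which is why comparisons between them become more conservative without any explicit multiplicity correction.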
Optimal full matching and related designs via network flows
 Journal of Computational and Graphical Statistics
, 2006
Abstract

Cited by 27 (4 self)
In the matched analysis of an observational study, confounding on covariates X is addressed by comparing members of a distinguished group (Z = 1) to controls (Z = 0) only when they belong to the same matched set. The better matchings, therefore, are those whose matched sets exhibit both dispersion in Z and uniformity in X. For dispersion in Z, pair matching is best, creating matched sets that are equally balanced between the groups; but actual data place limits, often severe limits, on matched pairs’ uniformity in X. At the other extreme is full matching, the matched sets of which are as uniform in X as can be, while often so poorly dispersed in Z as to sacrifice efficiency. This article presents an algorithm for exploring the intermediate territory. Given requirements on matched sets’ uniformity in X and dispersion in Z, the algorithm first decides the requirements’ feasibility. In feasible cases, it furnishes a match that is optimal for X-uniformity among matches with Z-dispersion as stipulated. To illustrate, we describe the algorithm’s use in a study comparing women’s to men’s working conditions; and we compare our method to a commonly used alternative, greedy matching, which is neither optimal nor as flexible but is algorithmically much simpler. The comparison finds meaningful advantages, in terms of both bias and efficiency, for our more studied approach.
An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies
Minimum distance matched sampling with fine balance in an observational study of treatment for ovarian cancer
 Journal of the American Statistical Association
, 2007
Abstract

Cited by 19 (5 self)
In observational studies of treatment effects, matched samples have traditionally been constructed using two tools, namely close matches on one or two key covariates and close matches on the propensity score to stochastically balance large numbers of covariates. Here we propose a third tool, fine balance, obtained using the assignment algorithm in a new way. We use all three tools to construct a matched sample for an ongoing study of provider specialty in the treatment of ovarian cancer. Fine balance refers to exact balance of a nominal covariate, often one with many categories, but it does not require individually matched treated and control subjects for this variable. In the example the nominal variable has 72 = 9 × 8 categories formed from 9 possible years of diagnosis and 8 geographic locations or SEER sites. We obtain exact balance on the 72 categories and close individual matches on clinical stage, grade, year of diagnosis, and other variables using a distance, and stochastically balance a total of 61 covariates using a propensity score. Our approach finds an optimal match that minimizes a suitable distance subject to the constraint that fine balance is achieved. This is done by defining a special patterned distance matrix and passing it to a subroutine that solves the optimal assignment problem, which optimally pairs the rows and columns of a matrix using a polynomial time algorithm. In the example we used the function Proc Assign in SAS. A new theorem shows that with our patterned distance matrix, the assignment algorithm returns an optimal, finely balanced matched sample whenever one exists, and otherwise returns an infinite distance, indicating that no such matched sample exists.
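The matching in this paper is built on the optimal assignment problem: pair rows (treated) with columns (controls) of a distance matrix so that the total distance is minimized. As a toy illustration only, here is a brute-force sketch of that problem; real implementations use polynomial-time algorithms (e.g. the Hungarian method, or Proc Assign as in the paper), and the distance matrix below is hypothetical:

```python
from itertools import permutations

def optimal_pairs(dist):
    """Solve a tiny assignment problem by exhaustive search.

    dist[i][j] is the covariate distance between treated unit i and
    control j. Returns the minimum total distance and, for each
    treated unit i, the index of its matched control. Exhaustive
    search is O(n!) and only viable for tiny n; it is here purely to
    make the objective being optimized explicit.
    """
    n = len(dist)
    best_cost, best = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum(dist[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_cost, best = cost, perm
    return best_cost, list(best)

# Hypothetical 3x3 distance matrix (treated rows, control columns).
cost, match = optimal_pairs([[4, 1, 3],
                             [2, 0, 5],
                             [3, 2, 2]])
```

The fine-balance device in the paper works by patterning such a distance matrix (adding specially structured rows and columns) so that any optimal assignment automatically balances the nominal covariate's categories.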
Combining propensity score matching and group-based trajectory analysis in an observational study
 Psychological Methods
, 2007
Abstract

Cited by 18 (2 self)
In a nonrandomized or observational study, propensity scores may be used to balance observed covariates and trajectory groups may be used to control baseline or pretreatment measures of outcome. The trajectory groups also aid in characterizing classes of subjects for whom no good matches are available and in defining substantively interesting groups between which treatment effects may vary. These and related methods are illustrated using data from a Montreal-based study. The effects on subsequent violence of gang joining at age 14 are studied while controlling for measured characteristics of boys prior to age 14. The boys are divided into trajectory groups based on violence from ages 11 to 13. Within trajectory group, joiners are optimally matched to a variable number of controls using propensity scores, Mahalanobis distances, and a combinatorial optimization algorithm. Use of variable ratio matching results in greater efficiency than pair matching and also greater bias reduction than matching at a fixed ratio. The possible impact of failing to adjust for an important but unmeasured covariate is examined using sensitivity analysis.
Best Practices in Quasi-Experimental Designs: Matching Methods for Causal Inference
 in Best Practices in Quantitative Social Science, Edited by Jason Osborne. Thousand Oaks, CA: Sage
, 2007
Abstract

Cited by 14 (0 self)
Many studies in social science that aim to estimate the effect of an intervention suffer from treatment selection bias, where the units who receive the treatment may have different characteristics from those in the control condition. These preexisting differences between the groups must be controlled to obtain approximately unbiased estimates of the effects of interest. For example, in a study estimating the effect of bullying on high school graduation, students who were bullied are likely to be very different from students who were not bullied on a wide range of characteristics, such as socioeconomic status and academic performance, even before the bullying began. It is crucial to try to ...
The Neyman-Rubin Model of Causal Inference and Estimation via Matching Methods
 FORTHCOMING IN THE OXFORD HANDBOOK OF POLITICAL METHODOLOGY, JANET BOX-STEFFENSMEIER, HENRY BRADY, DAVID COLLIER (EDS.)
, 2007