Comparative Effectiveness Research

The overall goal of the Comparative Effectiveness Research (CER) team is to increase the use of comparative effectiveness applications in interdisciplinary research using observational data. To this end, the team provides expertise, support, and teaching materials on comparative effectiveness methodologies. The CER team has extensive experience with effectiveness studies and with using administrative and survey data in grant proposals to funders including the NIH and private donors.
  • Consultations – Consultations on comparative effectiveness methodology and on the use of economic tools in translational settings are available to all clinical and translational researchers at Children’s National and George Washington University.
  • Training – Lectures on comparative effectiveness methodologies, including instrumental variable and propensity score methods, are available.

The CER team created two lectures that provide an introduction to CER and to methods for CER using observational data (below). The first lecture discusses what CER is, the need for it, the history of CER policy in the U.S., and a general overview of CER methods. The second lecture covers two widely applied CER methods for observational research studies: propensity scores and instrumental variables.
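To give a flavor of the two methods named above, here is a minimal sketch in Python using small hypothetical datasets (the data and variable names are illustrative assumptions, not material from the lectures). It shows inverse-probability weighting with an estimated propensity score, and a Wald instrumental-variable estimate with a binary instrument.

```python
# Toy illustration (hypothetical data) of two CER methods for
# observational studies: propensity-score weighting and a Wald
# instrumental-variable estimator.
from collections import defaultdict

# --- Propensity-score weighting -------------------------------------
# Units are (x, t, y): confounder x, treatment t, outcome y = t + 2x,
# so the true average treatment effect is 1. Treatment is more common
# when x = 1, which biases the naive treated-vs-control comparison.
ps_data = (
    [(0, 1, 1)] * 2 + [(0, 0, 0)] * 6 +   # x = 0: 2 of 8 treated
    [(1, 1, 3)] * 6 + [(1, 0, 2)] * 2     # x = 1: 6 of 8 treated
)

# Estimate the propensity score e(x) = P(T = 1 | X = x) as the
# empirical treatment frequency within each level of x.
counts = defaultdict(lambda: [0, 0])       # x -> [treated, total]
for x, t, _ in ps_data:
    counts[x][0] += t
    counts[x][1] += 1
e = {x: tr / tot for x, (tr, tot) in counts.items()}

treated = [y for _, t, y in ps_data if t == 1]
control = [y for _, t, y in ps_data if t == 0]
naive_ate = sum(treated) / len(treated) - sum(control) / len(control)

# Inverse-probability-weighted estimate of the average treatment effect.
n = len(ps_data)
ipw_ate = sum(t * y / e[x] - (1 - t) * y / (1 - e[x])
              for x, t, y in ps_data) / n

print(naive_ate)   # 2.0 -- biased by confounding on x
print(ipw_ate)     # 1.0 -- recovers the true effect

# --- Instrumental variable (Wald estimator) -------------------------
# Units are (z, t, y): binary instrument z, treatment t, and an
# unmeasured confounder folded into y; the true effect of t is 2.
iv_data = [(0, 0, 0), (0, 1, 3), (1, 1, 2), (1, 1, 3)]

def mean(vals):
    return sum(vals) / len(vals)

ey1 = mean([y for z, _, y in iv_data if z == 1])
ey0 = mean([y for z, _, y in iv_data if z == 0])
et1 = mean([t for z, t, _ in iv_data if z == 1])
et0 = mean([t for z, t, _ in iv_data if z == 0])
wald = (ey1 - ey0) / (et1 - et0)
print(wald)        # 2.0 -- effect among instrument-compliers
```

Both estimators rely on assumptions discussed in the lectures and references below: weighting removes bias only from *measured* confounders, while the instrumental-variable estimate handles unmeasured confounding but requires a valid instrument that affects the outcome only through treatment.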

CER References:
  1. Benner, J.S., Morrison, M.R., Karnes, E.K., Kocot, S.L., & McClellan, M. (2010). An evaluation of recent federal spending on comparative effectiveness research: Priorities, gaps, and next steps. Health Affairs, 29(10), 1768-76.
  2. Brookhart, M.A., Rassen, J.A., & Schneeweiss, S. (2010). Instrumental variable methods in comparative safety and effectiveness research. Pharmacoepidemiology and Drug Safety, 19(6), 537-54. doi:10.1002/pds.1908
  3. Brooks, J.M., & Chrischilles, E.A. (2007). Heterogeneity and the interpretation of treatment effect estimates from risk adjustment and instrumental variable methods. Medical Care, 45(10 Suppl 2), S123-30. doi:10.1097/MLR.0b013e318070c069
  4. Chandra, A., Jena, A.B., & Skinner, J.S. (2011). The pragmatist’s guide to comparative effectiveness research. Journal of Economic Perspectives, 25(2), 27-46.
  5. Concato, J., Shah, N., & Horwitz, R.I. (2000). Randomized, controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine, 342(25), 1887-92.
  6. Cox, E., Martin, B.C., Van Staa, T., Garbe, E., Siebert, U., & Johnson, M.L. (2009). Good research practices for comparative effectiveness research: Approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: The International Society for Pharmacoeconomics and Outcomes Research good research practices for retrospective database analysis task force report–Part II. Value in Health, 12(8), 1053-61. doi:10.1111/j.1524-4733.2009.00601.x
  7. Helfand, M., Tunis, S., Whitlock, E.P., Pauker, S.G., Basu, A., Chilingerian, J., Harrell Jr., F.E., Meltzer, D.O., Montori, V.M., Shepard, D.S., Kent, D.M. and The Methods Work Group of the National CTSA Strategic Goal Committee on Comparative Effectiveness Research (2011). A CTSA agenda to advance methods for comparative effectiveness research. Clinical and Translational Science, 4(3), 188-98. doi:10.1111/j.1752-8062.2011.00282.x
  8. Johnson, M.L., Crown, W., Martin, B.C., Dormuth, C.R., & Siebert, U. (2009). Good research practices for comparative effectiveness research: Analytic methods to improve causal inference from nonrandomized studies of treatment effects using secondary data sources: The ISPOR good research practices for retrospective database analysis task force report–Part III. Value in Health, 12(8), 1062-73. doi:10.1111/j.1524-4733.2009.00602.x
  9. Lohr, K. (2007). Emerging methods in comparative effectiveness and safety: Symposium overview and summary. Medical Care, 45(10), S5-S8. doi:10.1097/MLR.0b013e31812714b6
  10. Rosenbaum, P.R., & Rubin, D.B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1), 41-55. doi:10.1093/biomet/70.1.41
  11. Schneeweiss, S. (2006). Sensitivity analysis and external adjustment for unmeasured confounders in epidemiologic database studies of therapeutics. Pharmacoepidemiology and Drug Safety, 15(5), 291-303. doi:10.1002/pds.1200
  12. Seeger, J.D., Kurth, T., & Walker, A.M. (2007). Use of propensity score technique to account for exposure-related covariates: An example and lesson. Medical Care, 45(10 Suppl 2), S143-8. doi:10.1097/MLR.0b013e318074ce79
  13. Stukel, T.A., Fisher, E.S., Wennberg, D.E., Alter, D.A., Gottlieb, D.J., & Vermeulen, M.J. (2007). Analysis of observational studies in the presence of treatment selection bias. JAMA: The Journal of the American Medical Association, 297(3), 278-85. doi:10.1001/jama.297.3.278
  14. Stürmer, T., Glynn, R.J., Rothman, K.J., Avorn, J., & Schneeweiss, S. (2007). Adjustments for unmeasured confounders in pharmacoepidemiologic database studies using external information. Medical Care, 45(10 Suppl 2), S158-65. doi:10.1097/MLR.0b013e318070c045
  15. Stürmer, T., Rothman, K.J., & Glynn, R.J. (2006). Insights into different results from different causal contrasts in the presence of effect-measure modification. Pharmacoepidemiology and Drug Safety, 15(10), 698-709. doi:10.1002/pds.1231
  16. Stürmer, T., Schneeweiss, S., Rothman, K.J., Avorn, J., & Glynn, R.J. (2007). Performance of propensity score calibration—a simulation study. American Journal of Epidemiology, 165(10), 1110-8. doi:10.1093/aje/kwm074