Abstract
As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft-neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders. Second, the posttest-only design is inherently stronger than the pretest–posttest design, provided the study is randomized and the sample is sufficiently large. Third, demonstrating the superiority of an educational intervention in comparison to no intervention does little to advance the art and science of education. Fourth, comparisons involving multifactorial interventions are hopelessly confounded, have limited application to new settings, and do little to advance our understanding of education. Fifth, single-group pretest–posttest studies are susceptible to numerous validity threats. Finally, educational interventions (including the comparison group) must be described in detail sufficient to allow replication.
Notes
For a recent discussion of whether medical education is a hard or soft science, see Gruppen (2008).
Randomization cannot control for mortality (loss to follow-up), but it can facilitate analyses seeking to explore the implications of high participant dropout.
When pretests are used, researchers should not compute pretest-to-posttest difference (change) scores and analyze those. Although this method is commonly used (indeed, we are guilty of having used it), it is inferior to the more appropriate use of the pretest as a covariate (along with treatment group and other relevant variables) in multivariate statistical models. See Cronbach and Furby (1970) and Norman and Streiner (2007) for detailed discussions.
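The covariate approach this note recommends can be sketched in a few lines. The example below simulates a hypothetical randomized two-group trial (all data and effect sizes are invented for illustration, not drawn from this article) and contrasts an ANCOVA-style regression of posttest on group and pretest with the change-score comparison the note advises against. It uses only the Python standard library.

```python
import random

random.seed(0)

# Hypothetical randomized trial: two groups, pretest and posttest scores.
# (All numbers are simulated for illustration.)
n = 200
group = [i % 2 for i in range(n)]             # 0 = control, 1 = intervention
pretest = [random.gauss(50, 10) for _ in range(n)]
# Assume a true intervention effect of +5 points; posttest tracks pretest.
posttest = [0.6 * p + 5 * g + random.gauss(20, 8)
            for p, g in zip(pretest, group)]

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    k = len(X[0])
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(k)]
           for a in range(k)]
    Xty = [sum(row[a] * yi for row, yi in zip(X, y)) for a in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(XtX[r][col]))
        XtX[col], XtX[piv] = XtX[piv], XtX[col]
        Xty[col], Xty[piv] = Xty[piv], Xty[col]
        for r in range(col + 1, k):
            f = XtX[r][col] / XtX[col][col]
            for c in range(col, k):
                XtX[r][c] -= f * XtX[col][c]
            Xty[r] -= f * Xty[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (Xty[r] - sum(XtX[r][c] * b[c]
                             for c in range(r + 1, k))) / XtX[r][r]
    return b

# ANCOVA-style model: posttest ~ intercept + group + pretest
X = [[1.0, float(g), p] for g, p in zip(group, pretest)]
coef = ols(X, posttest)
print(f"adjusted intervention effect: {coef[1]:.2f}")  # near the true +5

# Change-score comparison, shown for contrast (the discouraged approach)
change = [post - pre for post, pre in zip(posttest, pretest)]

def mean(xs):
    return sum(xs) / len(xs)

diff = (mean([c for c, g in zip(change, group) if g == 1])
        - mean([c for c, g in zip(change, group) if g == 0]))
print(f"change-score group difference: {diff:.2f}")
```

Both estimators recover the simulated effect here because the trial is randomized; the covariate model, however, yields a more precise estimate whenever posttest correlates with pretest, which is the practical reason the note prefers it.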
Pretests may also be useful in randomized trials comparing active interventions if no treatment effect is found, by providing evidence that the lack of effect is not due to similarly ineffective interventions or an insensitive measurement tool (an exploration of the absolute effects of the treatments rather than the relative effects between groups). However, this analysis parallels the single-group pretest–posttest study with all attendant limitations.
References
Baernstein, A., Liss, H. K., Carney, P. A., & Elmore, J. G. (2007). Trends in study methods used in undergraduate medical education research, 1969–2007. Journal of the American Medical Association, 298, 1038–1045.
Beckman, T. J., & Cook, D. A. (2004). Educational epidemiology. Journal of the American Medical Association, 292, 2969.
Benson, K., & Hartz, A. J. (2000). A comparison of observational studies and randomized, controlled trials. New England Journal of Medicine, 342, 1878–1886.
Bland, J. M., & Altman, D. G. (1994). Statistics notes: Regression towards the mean. British Medical Journal, 308, 1499.
Bordage, G. (2007). Moving the field forward: Going beyond quantitative–qualitative. Academic Medicine, 82(10 suppl), S126–S128.
Callahan, C. A., Hojat, M., & Gonnella, J. S. (2007). Volunteer bias in medical education research: An empirical study of over three decades of longitudinal data. Medical Education, 41, 746–753.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Carney, P. A., Nierenberg, D. W., Pipas, C. F., Brooks, W. B., Stukel, T. A., & Keller, A. M. (2004). Educational epidemiology: Applying population-based design and analytic approaches to study medical education. Journal of the American Medical Association, 292, 1044–1050.
Concato, J., Shah, N., & Horwitz, R. I. (2000). Randomized, controlled trials, observational studies, and the hierarchy of research designs. New England Journal of Medicine, 342, 1887–1892.
Cook, D. A. (2005). The research we still are not doing: An agenda for the study of computer-based learning. Academic Medicine, 80, 541–548.
Cook, D. A., Beckman, T. J., & Bordage, G. (2007). Quality of reporting of experimental studies in medical education: A systematic review. Medical Education, 41, 737–745.
Cook, D. A., Bordage, G., & Schmidt, H. G. (2008). Description, justification, and clarification: A framework for classifying the purposes of research in medical education. Medical Education, 42, 128–133.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin.
Cook, D. A., Thompson, W. G., Thomas, K. G., Thomas, M. R., & Pankratz, V. S. (2006). Impact of self-assessment questions and learning styles in web-based learning: A randomized, controlled, crossover trial. Academic Medicine, 81, 231–238.
Cronbach, L. J. (1982). Designing evaluations of educational and social problems. San Francisco: Jossey-Bass.
Cronbach, L. J., & Furby, L. (1970). How should we measure “change”—or should we? Psychological Bulletin, 74, 68–80.
Dauphinee, W. D., & Wood-Dauphinee, S. (2004). The need for evidence in medical education: The development of best evidence medical education as an opportunity to inform, guide, and sustain medical education research. Academic Medicine, 79, 925–930.
Des Jarlais, D. C., Lyles, C., & Crepaz, N. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361–366.
Education Group for Guidelines on Evaluation. (1999). Guidelines for evaluating papers on educational interventions. British Medical Journal, 318, 1265–1267.
Fraenkel, J. R., & Wallen, N. E. (2003). How to design and evaluate research in education. New York, NY: McGraw-Hill.
Gruppen, L. D. (2008). Is medical education research ‘hard’ or ‘soft’ research? Advances in Health Sciences Education: Theory and Practice, 13, 1–2.
Harden, R. M., Grant, J., Buckley, G., & Hart, I. R. (1999). BEME Guide No. 1: Best evidence medical education. Medical Teacher, 21, 553–562.
Harris, I. (2003). What does “the discovery of grounded theory” have to say to medical education? Advances in Health Sciences Education, 8, 49–61.
Hutchinson, L. (1999). Evaluating and researching the effectiveness of educational interventions. British Medical Journal, 318, 1267–1269.
Kennedy, T. J., & Lingard, L. A. (2007). Questioning competence: A discourse analysis of attending physicians’ use of questions to assess trainee competence. Academic Medicine, 82(10 suppl), S12–S15.
Kirkpatrick, D. (1996). Revisiting Kirkpatrick’s four-level model. Training and Development, 50(1), 54–59.
Norman, G. (2003). RCT = results confounded and trivial: The perils of grand educational experiments. Medical Education, 37, 582–584.
Norman, G. R., & Streiner, D. L. (2007). Biostatistics: The bare essentials (3rd ed.). Hamilton: BC Decker.
Papadakis, M. A., Teherani, A., Banach, M. A., Knettler, T. R., Rattner, S. L., Stern, D. T., et al. (2005). Disciplinary action by medical boards and prior behavior in medical school. New England Journal of Medicine, 353, 2673–2682.
Price, E. G., Beach, M. C., Gary, T. L., Robinson, K. A., Gozu, A., Palacio, A., et al. (2005). A systematic review of the methodological rigor of studies evaluating cultural competence training of health professionals. Academic Medicine, 80, 578–586.
Shea, J. A., Arnold, L., & Mann, K. V. (2004). A RIME perspective on the quality and relevance of current and future medical education research. Academic Medicine, 79, 931–938.
Tamblyn, R., Abrahamowicz, M., Dauphinee, D., Wenghofer, E., Jacques, A., Klass, D., et al. (2007). Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. Journal of the American Medical Association, 298, 993–1001.
Wilson, D. B., & Lipsey, M. W. (2001). The role of method in treatment effectiveness research: Evidence from meta-analysis. Psychological Methods, 6, 413–429.
Woods, N. N., Brooks, L. R., & Norman, G. R. (2005). The value of basic science in clinical diagnosis: Creating coherence among signs and symptoms. Medical Education, 39, 107–112.
Cook, D.A., Beckman, T.J. Reflections on experimental research in medical education. Adv in Health Sci Educ 15, 455–464 (2010). https://doi.org/10.1007/s10459-008-9117-3