Inv Ed Med 2021; 10 (40)
Item analysis of multiple choice question tests in undergraduate health programs of Universidad Mayor
Giaconi E, Bazán ME, Castillo M, Hurtado A, Rojas H, Giaconi V, Guiraldes E
Language: Spanish
References: 35
Page: 61-69
PDF size: 483.27 Kb.
ABSTRACT
Introduction: Multiple-choice question tests (MCTs) are
widely used to assess undergraduate students' learning
at the Health Sciences Schools (HSS) of our Universidad
Mayor. Over the last decade, these tests have been scored
by optical mark recognition, which made it possible for
this study to obtain a diagnostic view of the quality of
the process.
Objective: To analyze the difficulty, discrimination, reliability
and distractors of MCTs applied in seven HSS of
Universidad Mayor during the 2013-2017 period.
Method: In this quantitative, descriptive, non-experimental,
cross-sectional, retrospective study, 337 tests were
randomly drawn from the population of 2,640 MCTs by
stratified probabilistic sampling with systematic selection.
Psychometric indicators were estimated within the
framework of Classical Test Theory, and ANOVA tests
were used to compare programs.
Results: The means for item difficulty, item discrimination,
and the reliability coefficient were 68%, 0.23, and 0.50,
respectively. Significant differences among the participating
HSS were found only for item difficulty and discrimination.
Regarding distractors, on average 1.51 distractors per item
were functional (1.52 for four-option items and 1.49 for
five-option items).
Conclusions: These results reveal considerable room for
improvement in the use of MCTs to assess students' learning
in our HSS. An in-depth reflection between faculty and
university authorities should be undertaken to ensure the
validity of future inferences and the quality of MCTs.
REFERENCES
Scallon G. L'évaluation des apprentissages dans une approche par compétences. Brussels: De Boeck Université; 2004.
Epstein RM. Assessment in medical education. N Engl J Med. 2007;356(4):387-96.
Paniagua M, Swygert K. Constructing written test questions for the basic and clinical sciences. Philadelphia: National Board of Medical Examiners (US); 2016.
Wood T, Cole G, Lee C. Developing multiple choice questions for the RCPSC certification examinations. Ottawa Canada: Royal College of Physicians and Surgeons Canada; 2011.
Tarrant M, Ware J, Mohammed AM. An assessment of functioning and non-functioning distractors in multiple-choice questions: a descriptive analysis. BMC Med Educ. 2009;9(1):1-8.
Tarrant M, Ware J. A comparison of the psychometric properties of three-and four-option multiple-choice questions in nursing assessments. Nurse Educ Today. 2010;30(6):539-43.
Tavakol M, Dennick R. Post-examination analysis of objective tests. Med Teach. 2011;33(6):447-58.
Violato EM, Violato C. Multiple choice questions (MCQs) in a nutshell: Theory, practice, and post-exam item analysis. Acad Med. 2019;95(4):659.
McCoubrie P. Improving the fairness of multiple-choice questions: a literature review. Med Teach. 2004;26(8):709-12.
Haladyna TM, Downing SM, Rodriguez MC. A review of multiple-choice item-writing guidelines for classroom assessment. Appl Meas Educ. 2002;15(3):309-33.
Coughlin PA, Featherstone CR. How to write a high quality multiple choice question (MCQ): A guide for clinicians. Eur J Vasc Endovasc Surg. 2017;54(5):654-8.
Brame CJ. Writing good multiple choice test questions. [Internet]. Vanderbilt University Center for Teaching; 2013 [cited 2021 Mar 11]. Available from: https://bit.ly/3hYAObW
Jurado-Núñez A, Flores-Hernández F, Delgado-Maldonado L, Sommer-Cervantes H, Martínez-González A, Sánchez-Mendiola M. Distractores en preguntas de opción múltiple para estudiantes de medicina: ¿cuál es su comportamiento en un examen sumativo de altas consecuencias? Investig en Educ Médica. 2013;2(8):202-10.
Shultz KS, Whitney DJ, Zickar MJ. Measurement theory in action: Case studies and exercises. 2nd ed. New York: Routledge; 2013.
American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Estándares para Pruebas Educativas y Psicológicas (Original work published 2014). Lieve M, translator. Washington, DC: American Educational Research Association; 2018.
Abozaid H, Park YS, Tekian A. Peer review improves psychometric characteristics of multiple choice questions. Med Teach. 2017;39(sup1):S50-S54.
Mehta G, Mokhasi V. Item analysis of multiple choice questions: an assessment of the assessment tool. Int J Health Sci Res. 2014;4(7):197-202.
Mukherjee P, Lahiri SK. Analysis of multiple choice questions (MCQs): Item and test statistics from an assessment in a medical college of Kolkata, West Bengal. IOSR J Dent Med Sci. 2015;1:47-52.
Taib F, Yusoff MSB. Difficulty index, discrimination index, sensitivity and specificity of long case and multiple choice questions to predict medical students’ examination performance. J Taibah Univ Med Sci. 2014;9(2):110-114.
Hingorjo MR, Jaleel F. Analysis of one-best MCQs: the difficulty index, discrimination index and distractor efficiency. JPMA-Journal Pakistan Med Assoc. 2012;62(2):142-7.
Pérez Tapia JH, Acuña Aguilar N, Arratia Cuela ER. Nivel de dificultad y poder de discriminación del tercer y quinto examen parcial de la cátedra de cito-histología 2007 de la carrera de medicina de la UMSA. Cuad Hosp Clínicas. 2008;53(2):16-22.
Banta TW, Palomba CA. Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco, CA: John Wiley & Sons; 2014.
Tangianu F, Mazzone A, Berti F, Pinna G, Bortolotti I, Colombo F, et al. Are multiple-choice questions a good tool for the assessment of clinical competence in Internal Medicine? Ital J Med. 2018;12(2):88-96.
Aubin A-S, Young M, Eva K, St-Onge C. Examinee cohort size and item analysis guidelines for health professions education programs: A Monte Carlo simulation study. Acad Med. 2020;95(1):151-6.
Pugh D, De Champlain A, Gierl M, Lai H, Touchie C. Using cognitive models to develop quality multiple-choice questions. Med Teach. 2016;38(8):838-43.
Dory V, Allan K, Birnbaum L, Lubarsky S, Pickering J, Young M. Ensuring the quality of multiple-choice tests: An algorithm to facilitate decision making for difficult questions. Acad Med. 2019;94(5):740.
Meneses J, Barrios M, Bonillo A, Cosculluela A, Lozano LM, Turbany J, Valero S. Psicometría. Barcelona: UOC; 2014.
Hasty BN, Lau JN, Tekian A, Miller SE, Shipper ES, Merrell SB, et al. Validity evidence for a knowledge assessment tool for a mastery learning scrub training curriculum. Acad Med. 2020;95(1):129-35.
Kilgour JM, Tayyaba S. An investigation into the optimal number of distractors in single-best answer exams. Adv Health Sci Educ. 2016;21(3):571-85.
Testa S, Toscano A, Rosato R. Distractor efficiency in an item pool for a statistics classroom exam: assessing its relation with item cognitive level classified according to Bloom’s taxonomy. Front Psychol. 2018;9:1-12.
Tavakol M, Dennick R. Making sense of Cronbach's alpha. Int J Med Educ. 2011;2:53.
Haladyna TM, Rodriguez MC, Stevens C. Are multiple-choice items too fat? Appl Meas Educ. 2019;32(4):350-64.
McCarty T. How to build assessments for clinical learners. In: Weiss Roberts L, editor. Roberts Academic Medicine Handbook. 2nd ed. Cham: Springer; 2020. p. 83-90.
Aguayo-Albasini JL, Atucha N, García-Estañ J. Las unidades de educación médica en las facultades de Medicina y de Ciencias de la Salud en España, ¿son necesarias? Educación Médica. 2021;22:48-54.
World Federation for Medical Education (WFME). Basic medical education WFME global standards for quality improvement, the 2020 revision. [Internet]. [cited 2021 Jul 1]. Available from: https://bit.ly/3hZf2EV