
Variation in passing standards for graduation-level knowledge items at UK medical schools.

Accepted version
Peer-reviewed

Type

Article

Authors

Taylor, Celia A 
Melville, Colin R 
Kluth, David C 
Johnson, Neil 

Abstract

OBJECTIVES: Given the absence of a common passing standard for students at UK medical schools, this paper compares independently set standards for common 'one from five' single-best-answer (multiple-choice) items used in graduation-level applied knowledge examinations and explores potential reasons for any differences.

METHODS: A repeated cross-sectional study was conducted. Participating schools were sent a common set of graduation-level items (55 in 2013-2014; 60 in 2014-2015). Items were selected against a blueprint and subjected to a quality review process. Each school employed its own standard-setting process for the common items. The primary outcome was the passing standard for the common items, set by each medical school using the Angoff or Ebel method.

RESULTS: Of 31 invited medical schools, 22 (71%) participated in 2013-2014 and 30 (97%) in 2014-2015. Schools used a mean of 49 and 53 common items in 2013-2014 and 2014-2015, respectively, representing around one-third of the items in the examinations in which they were embedded. Data from 19 (61%) and 26 (84%) schools, respectively, met the inclusion criteria for comparison of standards. There were statistically significant differences in the passing standards set by schools in both years (effect sizes (f²): 0.041 in 2013-2014 and 0.218 in 2014-2015; both p < 0.001). The interquartile range of standards was 5.7 percentage points in 2013-2014 and 6.5 percentage points in 2014-2015. There was a positive correlation between the relative standards set by schools in the 2 years (Pearson's r = 0.57, n = 18, p = 0.014). Time allowed per item, method of standard setting and timing of the examination in the curriculum did not have a statistically significant impact on standards.

CONCLUSIONS: Independently set standards for common single-best-answer items used in graduation-level examinations vary across UK medical schools. Further work to examine standard-setting processes in more detail is needed to help explain this variability and to develop methods to reduce it.
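
For context, the Angoff method named in the abstract sets a cut score by asking each judge to estimate, for every item, the probability that a borderline (minimally competent) candidate would answer it correctly; the passing standard is the mean of these estimates across judges, aggregated over items. The Python sketch below illustrates that aggregation with hypothetical judge data; the function name and figures are illustrative only and are not taken from the paper, and each participating school's actual process (including any Ebel-based variant) differed in detail.

    # Illustrative sketch of an Angoff cut-score calculation.
    # All estimates below are hypothetical, not the study's data.

    def angoff_standard(estimates: list[list[float]]) -> float:
        """Passing standard (% of items) from per-judge, per-item estimates.

        estimates[j][i] = judge j's estimated probability that a borderline
        (minimally competent) candidate answers item i correctly.
        """
        n_judges = len(estimates)
        n_items = len(estimates[0])
        # Mean estimate per item across judges, summed over items,
        # then expressed as a percentage of the item count.
        total = sum(
            sum(judge[i] for judge in estimates) / n_judges
            for i in range(n_items)
        )
        return 100.0 * total / n_items

    # Three hypothetical judges rating five items:
    judges = [
        [0.70, 0.55, 0.80, 0.60, 0.65],
        [0.65, 0.50, 0.75, 0.70, 0.60],
        [0.75, 0.60, 0.85, 0.55, 0.70],
    ]
    print(f"Angoff cut score: {angoff_standard(judges):.1f}%")  # 66.3%

Differences between schools in such cut scores, computed over the same shared items, are what the study quantifies.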

Keywords

Clinical Competence, Cross-Sectional Studies, Curriculum, Education, Medical, Undergraduate, Educational Measurement, Humans, Professional Competence, Reference Standards, Schools, Medical, Students, Medical, United Kingdom

Journal Title

Med Educ

Journal ISSN

0308-0110 (print)
1365-2923 (online)

Volume

51

Publisher

Wiley