How effective is fast and automated feedback to examiners in tackling the size of marking errors?
Abstract
Reliability is important in national assessment systems, and there is consequently a good deal of research on examiners' marking reliability. However, some questions remain unanswered because of the changing context of e-marking, particularly the opportunity it provides for fast and automated feedback to examiners on their marking. Some of these questions are:
- will iterative feedback result in greater marking accuracy than only one feedback session?
- will encouraging examiners to be consistent (rather than more accurate) result in greater marking accuracy?
- will encouraging examiners to be more accurate (rather than more consistent) result in greater marking accuracy?
Thirty-three examiners were matched into four experimental groups based on the severity of their marking. All examiners marked the same 100 candidate responses within the same short timescale. Group 1 received one session of feedback about the accuracy of their marking. Group 2 received three iterative sessions of feedback about the accuracy of their marking. Group 3 received one session of feedback about the consistency of their marking. Group 4 received three iterative sessions of feedback about the consistency of their marking. Absolute differences between examiners' marks and a reference mark were analysed using a general linear model. The results of the present analysis pointed towards the answer to all three research questions being "no". The results presented in this article are not intended to be used to evaluate current marking practices. Rather, the article is intended to contribute to answering the research questions and to developing an evidence base for the principles that should be used to design and improve marking practices.
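To make the analysis concrete, the sketch below illustrates, in outline only, how absolute differences between examiner marks and a reference mark could be modelled with a general linear model. It is not the authors' code: the simulated data, the 0-20 mark scale, the column names, and the assignment of examiners to feedback groups are all assumptions introduced for illustration.

```python
# Minimal sketch (assumed, not the study's actual analysis) of fitting a
# general linear model to absolute examiner-reference mark differences.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

n_examiners, n_responses = 33, 100    # mirrors the study's 33 examiners and 100 responses
rows = []
for examiner in range(n_examiners):
    group = examiner % 4 + 1          # hypothetical assignment to the four feedback groups
    severity = rng.normal(0, 0.5)     # examiner-level severity offset (simulated)
    for response in range(n_responses):
        reference = rng.integers(0, 21)                # reference mark on an assumed 0-20 scale
        awarded = reference + severity + rng.normal(0, 1.5)
        rows.append({
            "examiner": examiner,
            "group": group,
            "response": response,
            "abs_diff": abs(awarded - reference),      # dependent variable: absolute difference
        })

df = pd.DataFrame(rows)

# General linear model: absolute difference modelled on feedback group,
# with group treated as a categorical factor.
model = smf.ols("abs_diff ~ C(group)", data=df).fit()
print(model.summary())
```

In such a model, a between-group difference in mean absolute deviation from the reference mark would show up in the coefficients for the group factor; the study's finding that the answers to the research questions were "no" corresponds to those group effects not indicating greater accuracy under iterative or consistency-focused feedback.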