
Research Matters 08


Recent Submissions

  • Item (Open Access, Published version, Peer-reviewed)
    Research Matters 8: June 2009
    (Research Division, Cambridge University Press & Assessment, 2008-06-01) Green, Sylvia
    Research Matters is a free biannual publication which allows Cambridge University Press & Assessment to share its assessment research, in a range of fields, with the wider assessment community. 
  • Item (Open Access, Published version, Peer-reviewed)
    Thinking about making the right mark: Using cognitive strategy research to explore examiner training
    (Research Division, Cambridge University Press & Assessment, 2008-06-01) Suto, Irenka; Greatorex, Jackie; Nadas, Rita
    In this article, we draw together research on examiner training and on the nature of the judgements entailed in the marking process. We report new analyses of data from two recent empirical studies, Greatorex and Bell (2008) and Suto and Nadas (2008a), exploring possible relationships between the efficacy of training and the complexity of the cognitive marking strategies apparently needed to mark the examination questions under consideration. In the first study reported in this article, we considered the benefits of three different training procedures for experienced examiners marking AS-level biology questions. In the second study reported here, we explored the effects of a single training procedure on experienced and inexperienced (graduate) examiners marking GCSE mathematics and physics questions. In both studies, it was found that: (i) marking accuracy was better after training than beforehand; and (ii) the effect of training on marking accuracy varied across individual questions. Our hypothesis that training would be more beneficial for questions requiring apparently more complex cognitive marking strategies than for questions requiring apparently simple strategies was upheld for both subjects in Study 2, but not in Study 1.
  • Item (Open Access, Published version, Peer-reviewed)
    Mark scheme features associated with different levels of marker agreement
    (Research Division, Cambridge University Press & Assessment, 2008-06-01) Bramley, Tom
    This research looked for features of question papers and mark schemes associated with higher and lower levels of marker agreement at the level of the item rather than the whole paper. First, it aimed to identify relatively coarse features of question papers and mark schemes that could apply across a wide range of subjects and be objectively coded by someone without particular subject expertise or examining experience. It then aimed to discover which features were most strongly related to marker agreement, to discuss any possible implications for question paper (QP) and mark scheme (MS) design, and to relate the findings to the theoretical framework summarised in Suto and Nadas (2007).
  • Item (Open Access, Published version, Peer-reviewed)
    Investigation into whether z-scores are more reliable at estimating missing marks than the current method
    (Research Division, Cambridge University Press & Assessment, 2008-06-01) Bird, Peter
    The awarding bodies in the UK use similar, but slightly different, methodologies for assessing missing marks (i.e., marks for candidates who are absent with good reason). In an attempt to standardise the process across awarding bodies, a z-score method of estimating missing marks (as used by other awarding bodies) was investigated to see if it was better than the current proportional estimation method being used by OCR.
  • Item (Open Access, Published version, Peer-reviewed)
    How effective is fast and automated feedback to examiners in tackling the size of marking errors?
    (Research Division, Cambridge University Press & Assessment, 2008-06-01) Sykes, Elizabeth; Novakovic, Nadezda; Greatorex, Jackie; Bell, John; Nadas, Rita; Gill, Tim
    Reliability is important in national assessment systems, and there is therefore a good deal of research on examiners' marking reliability. However, some questions remain unanswered in the changing context of e-marking, particularly given the opportunity for fast and automated feedback to examiners on their marking. Some of these questions are:
    - Will iterative feedback result in greater marking accuracy than a single feedback session?
    - Will encouraging examiners to be consistent (rather than more accurate) result in greater marking accuracy?
    - Will encouraging examiners to be more accurate (rather than more consistent) result in greater marking accuracy?
    Thirty-three examiners were matched into four experimental groups based on the severity of their marking. All examiners marked the same 100 candidate responses within the same short timescale. Group 1 received one session of feedback about their accuracy. Group 2 received three iterative sessions of feedback about the accuracy of their marking. Group 3 received one session of feedback about their consistency. Group 4 received three iterative sessions of feedback about the consistency of their marking. Absolute differences between examiners' marks and a reference mark were analysed using a general linear model. The results of the present analysis pointed towards the answer to all three research questions being "no". The results presented in this article are not intended to be used to evaluate current marking practices; rather, the article is intended to contribute to answering the research questions and to developing an evidence base for the principles that should be used to design and improve marking practices.
  • Item (Open Access, Published version, Peer-reviewed)
    'Happy Birthday to you'; but not if it's summertime
    (Research Division, Cambridge University Press & Assessment, 2008-06-01) Oates, Tim; Sykes, Elizabeth; Emery, Joanne; Bell, John; Vidal Rodeiro, Carmen
    For years, evidence of a birthdate effect has stared out of qualifications data for the United Kingdom; summer-born children appear to be strongly disadvantaged. Whilst those responsible for working on this data have tried to bring public attention to the issue, it has been neglected by agencies central to education and training policy. Researchers at Cambridge Assessment have had a long-standing interest in the birthdate effect because it is so readily observable in the assessment data with which they have worked. More recently, Cambridge Assessment decided to review the issue with the intention of advancing understanding of the extent and causes of the birthdate effect in the English education system. Although the review focuses on understanding the birthdate effect in England, it uses international comparisons as one means of throwing light on key factors. This article outlines the findings of the review.
  • Item (Open Access, Published version, Peer-reviewed)
    Capturing expert judgement in grading: an examiner's perspective
    (Research Division, Cambridge University Press & Assessment, 2008-06-01) King, Peter; Novakovic, Nadezda; Suto, Irenka
    There exist several methods of capturing expert judgement which have been used, or could potentially be used, in the process of determining grade boundaries for examinations. In a recent study, we sought to explore the judgements entailed in three such methods: (i) rank ordering, (ii) traditional awarding, and (iii) Thurstone pairs. A key aim was to identify the features of candidates' scripts that affect the judgements made in each of the three methods. To achieve this, sixty experienced examiners participated in the study. Each made judgements about overall script quality, using each method on a different batch of scripts. Additionally, each examiner completed a research task in which he or she was asked to rate a fourth batch of scripts for a series of features, using rating scales devised by the researchers. Subsequent data analysis entailed relating the judgemental data on script quality to the script feature data. Immediately after taking part in the study, one examiner recorded and offered the Research Division his views and experiences of participation. His perspective is the focus of this article.
  • Item (Open Access, Published version, Peer-reviewed)
    An investigation into marker reliability and some qualitative aspects of on-screen marking
    (Research Division, Cambridge University Press & Assessment, 2008-06-01) Johnson, Martin; Nadas, Rita
    There is a growing body of research literature that considers how the mode of assessment, either computer- or paper-based, might affect candidates' performances (Paek, 2005). Despite this, there is a fairly narrow literature that shifts the focus of attention to those making assessment judgements and which considers issues of assessor consistency when dealing with extended textual answers in different modes. This study involved 12 examiners marking 90 GCSE English Literature essays on paper and on screen and considered 6 questions:
    1. Does mode affect marker reliability?
    2. Construct validity: do examiners consider different features of the essays when marking in different modes?
    3. Is mental workload greater for marking on screen?
    4. Is spatial encoding influenced by mode?
    5. Is navigation influenced by mode?
    6. Is 'active reading' influenced by mode?
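The abstract of "Investigation into whether z-scores are more reliable at estimating missing marks than the current method" names the z-score method but does not detail it. A minimal sketch of one common variant is given below, assuming the missing component mark is estimated by standardising the candidate's marks on the components they did sit against the cohort, averaging those z-scores, and mapping the average back onto the missing component's distribution. This is an illustrative assumption for the general technique, not OCR's or any awarding body's documented procedure, and all names in the code are hypothetical.

```python
from statistics import mean, stdev


def zscore_estimate(candidate_marks, cohort_marks, missing_component):
    """Sketch of a z-score estimate for a missing component mark.

    candidate_marks: dict of component name -> this candidate's mark
                     (only the components the candidate actually sat).
    cohort_marks: dict of component name -> list of the whole cohort's marks.
    missing_component: name of the component the candidate missed.
    """
    # Standardise each sat mark against the cohort for that component.
    zs = []
    for comp, mark in candidate_marks.items():
        mu = mean(cohort_marks[comp])
        sigma = stdev(cohort_marks[comp])  # sample standard deviation
        zs.append((mark - mu) / sigma)

    # The average z-score summarises the candidate's relative standing.
    z_bar = mean(zs)

    # Map that standing onto the missing component's mark distribution.
    mu_m = mean(cohort_marks[missing_component])
    sigma_m = stdev(cohort_marks[missing_component])
    return mu_m + z_bar * sigma_m


# Example: a candidate one standard deviation above the mean on the sat
# component is estimated one standard deviation above the mean on the
# missing one.
cohort = {"Paper1": [40, 50, 60], "Paper2": [20, 30, 40]}
estimate = zscore_estimate({"Paper1": 60}, cohort, "Paper2")
```

The proportional method the abstract contrasts this with would instead scale the candidate's marks by the ratio of component means; the z-score variant additionally accounts for differences in spread between components.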