Research Matters 04

Recent Submissions

  • Item · Open Access · Published version · Peer-reviewed
    Research Matters 4: June 2007
    (Research Division, Cambridge University Press & Assessment, 2007-06-01) Green, Sylvia
    Research Matters is a free biannual publication through which Cambridge University Press & Assessment shares its assessment research, across a range of fields, with the wider assessment community.
  • Item · Open Access · Published version · Peer-reviewed
    The ‘Marking Expertise’ projects: Empirical investigations of some popular assumptions
    (Research Division, Cambridge University Press & Assessment, 2007-06-01) Suto, Irenka; Nadas, Rita
    Recent transformations in professional marking practice, including moves to mark some examination papers on screen, have raised important questions about the demands and expertise that the marking process entails. What makes some questions harder to mark accurately than others, and how much does marking accuracy vary among individuals with different backgrounds and experiences? We are conducting a series of interrelated studies, exploring variations in accuracy and expertise in GCSE examination marking. In our first two linked studies, collectively known as Marking Expertise Project 1, we investigated marking on selected GCSE maths and physics questions from OCR's June 2005 examination papers. Our next two linked studies, which comprise Marking Expertise Project 2, are currently underway and involve both CIE and OCR examinations. This time we are focussing on International (I) GCSE biology questions from November 2005 and GCSE business studies questions from June 2006. All four studies sit within a conceptual framework in which we have proposed a number of factors that might contribute to accurate marking. For any particular GCSE examination question, accuracy can be maximised through increasing the marker's personal expertise and/or through decreasing the demands of the marking task, and most relevant factors can be grouped according to which of these two routes they contribute to. In this article, we present a summary of some key aspects and findings of the two studies comprising our first project. We end by looking ahead to our second project on marking expertise, which is currently in progress.
  • Item · Open Access · Published version · Peer-reviewed
    Researching the judgement processes involved in A-level marking
    (Research Division, Cambridge University Press & Assessment, 2007-06-01) Crisp, Vicki
    The marking of examination scripts by examiners is a key part of many assessment systems. Despite this, there has been relatively little work investigating the process of marking at a cognitive and socially-framed level. An improved understanding of the judgement processes underlying current assessment systems would also leave us better prepared to anticipate the likely effects of innovations in examining systems, such as moves to on-screen marking. An AS level and an A2 level geography exam paper were selected. Six experienced examiners who usually mark at least one of the two papers participated in the research. Examiners marked fifty scripts from each exam at home, with the first ten scripts for each paper reviewed by the relevant Principal Examiner; this reflected normal marking procedures as far as possible. Examiners later attended individual meetings, where for each exam they marked four or five scripts in silence and four to six scripts whilst thinking aloud, and were also interviewed. The findings of this research support the view that assessment involves actively constructing meaning from texts as well as cognitive processes. The idea of examining as a practice occurring within a social framework is supported by evidence of social, personal and affective responses. Aspects of markers' social histories as examiners and teachers were evident in the comparisons they made, and perhaps more implicitly in their evaluations. The overlap between these findings and various previous findings helps to validate both current and previous research, thus aiding the continued development of an improved understanding of the judgement processes involved in marking.
  • Item · Open Access · Published version · Peer-reviewed
    Quantifying marker agreement: terminology, statistics and issues
    (Research Division, Cambridge University Press & Assessment, 2007-06-01) Bramley, Tom
    One challenge facing assessment agencies is choosing the appropriate statistical indicators of marker agreement for communicating to different audiences. This task is not made easier by the wide variety of terminology in use, and by differences in how the same terms are sometimes used. The purpose of this article is to provide a brief overview of: i) the different terminology used to describe indicators of marker agreement; ii) some of the different statistics which are used; and iii) the issues involved in choosing an appropriate indicator and its associated statistic. It is hoped that this will clarify some ambiguities which are often encountered, and contribute to a more consistent approach in reporting research in this area.
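To make the distinction between agreement indicators concrete, the sketch below computes two commonly reported statistics for a pair of markers: exact agreement (the proportion of identical marks) and Cohen's kappa (agreement corrected for chance). The marks and marker names are invented for illustration and are not drawn from the article.

```python
from collections import Counter

def exact_agreement(m1, m2):
    """Proportion of responses on which two markers award the identical mark."""
    assert len(m1) == len(m2)
    return sum(a == b for a, b in zip(m1, m2)) / len(m1)

def cohens_kappa(m1, m2):
    """Chance-corrected agreement between two markers over categorical marks."""
    n = len(m1)
    p_observed = exact_agreement(m1, m2)
    c1, c2 = Counter(m1), Counter(m2)
    # Expected agreement if each marker awarded marks independently
    # according to their own marginal distributions.
    p_expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical item-level marks from two markers on eight responses.
marker_a = [2, 3, 3, 1, 0, 2, 3, 1]
marker_b = [2, 3, 2, 1, 0, 2, 3, 2]
print(exact_agreement(marker_a, marker_b))
print(cohens_kappa(marker_a, marker_b))
```

Note that exact agreement alone can look flattering when one mark dominates the distribution, which is one reason chance-corrected indicators such as kappa are often reported alongside it.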
  • Item · Open Access · Published version · Peer-reviewed
    Quality control of examination marking
    (Research Division, Cambridge University Press & Assessment, 2007-06-01) Bell, John; Bramley, Tom; Claessen, Mark; Raikes, Nick
    As markers trade their pens for computers, new opportunities for monitoring and controlling marking quality are created. Item-level marks may be collected and analysed throughout marking. The results can be used to alert marking supervisors to possible quality issues earlier than is currently possible, enabling investigations and interventions to be made in a more timely and efficient way. Such a quality control system requires a mathematical model that is robust enough to provide useful information with initially relatively sparse data, yet simple enough to be easily understood, easily implemented in software and computationally efficient - this last is important given the very large numbers of candidates assessed by Cambridge Assessment and the need for rapid analysis during marking. In the present article we describe the models we have considered and give the results of an investigation into their utility using simulated data.
  • Item · Open Access · Published version · Peer-reviewed
    Item level examiner agreement
    (Research Division, Cambridge University Press & Assessment, 2007-06-01) Raikes, Nick; Massey, Alf
    Studies of inter-examiner reliability in GCSE and A-level examinations have been reported in the literature, but typically these focused on paper totals, rather than item marks. See, for example, Newton (1996). Advances in technology, however, mean that increasingly candidates' scripts are being split by item for marking, and the item-level marks are routinely collected. In these circumstances there is increased interest in investigating the extent to which different examiners agree at item level, and the extent to which this varies according to the nature of the item. Here we report and comment on intraclass correlations between examiners marking sample items taken from GCE A-level and IGCSE examinations in a range of subjects. The article is based on a paper presented at the 2006 Annual Conference of the British Educational Research Association.
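The intraclass correlations reported in the article are not reproduced here, but the statistic itself can be sketched as follows: a one-way random-effects ICC computed from a small invented table of marks (one row per item, one column per examiner), using the standard between- and within-item mean squares.

```python
def icc_oneway(scores):
    """One-way random-effects intraclass correlation, ICC(1,1).

    `scores` is a list of rows, one per marked item (script or question),
    each row holding the marks awarded by k examiners.
    """
    n = len(scores)      # number of items
    k = len(scores[0])   # examiners per item
    grand_mean = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    # Between-item mean square: how much item means vary around the grand mean.
    ms_between = k * sum((m - grand_mean) ** 2 for m in row_means) / (n - 1)
    # Within-item mean square: how much examiners disagree on the same item.
    ms_within = sum((x - m) ** 2
                    for row, m in zip(scores, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Invented toy data: marks from three examiners on four items.
scores = [[4, 4, 5], [2, 3, 2], [5, 5, 5], [1, 2, 1]]
print(icc_oneway(scores))
```

The ICC rises towards 1 as disagreement between examiners on the same item shrinks relative to the spread of item difficulty, which is what makes it a natural item-level agreement measure.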
  • Item · Open Access · Published version · Peer-reviewed
    Fostering communities of practice in examining
    (Research Division, Cambridge University Press & Assessment, 2007-06-01) Watts, Andrew
    This is a shortened version of a paper given at the International Association for Educational Assessment (IAEA) conference in May 2006.
  • Item · Open Access · Published version · Peer-reviewed
    Did examiners' marking strategies change as they marked more scripts?
    (Research Division, Cambridge University Press & Assessment, 2007-06-01) Greatorex, Jackie
    Prior research used cognitive psychological theories to predict that examiners might begin marking a question using particular cognitive strategies but adopt different strategies later in the marking session. Specifically, it was predicted that once examiners are familiar with the question paper, mark scheme and candidates' responses, they use less 'evaluating' and 'scrutinising' and more 'matching'. This research tests these predictions. All Principal Examiners (n=5), Team Leaders (n=5) and Assistant Examiners (n=59) who marked in the winter 2005 session were sent a questionnaire asking about different occasions in the marking session. It was found that examiners' marking strategies sometimes changed as they marked more scripts, and when there were considerable changes in cognitive strategies these were mostly in the predicted direction.
  • Item · Open Access · Published version · Peer-reviewed
    Agreement between outcomes from different double marking models
    (Research Division, Cambridge University Press & Assessment, 2007-06-01) Vidal Rodeiro, Carmen
    In the context of marking examinations, double marking is a means of enhancing reliability. However, deciding whether it is worthwhile involves a dilemma: intuitively it increases the reliability of the assessment and demonstrates fairness in marking, but its benefit must be demonstrated in order to justify the additional time and effort it takes. One factor affecting the re-marking is whether or not the second marker is aware of the marks awarded by the first. Higher agreement is observed between two examiners when the second knows how, and perhaps why, the first marked an exam. This may suggest that the second examiner took advantage of the available annotations when judging the best mark for a candidate; an alternative perspective is that the second examiner was influenced by the first examiner's marks. The purpose of this research is to evaluate the extent to which examiners agree when using different double marking models, in particular blind and annotated double marking. The impact of examiner experience is also investigated.