Research Matters 26: Autumn 2018 (Research Division, Cambridge University Press & Assessment, 2018-10-01). Bramley, Tom. Published version, Open Access.
Research Matters is a free biannual publication which allows Cambridge University Press & Assessment to share its assessment research, in a range of fields, with the wider assessment community.

To "Click" or to "Choose"? Investigating the language used in on-screen assessment (Research Division, Cambridge University Press & Assessment, 2018-10-01). Khan, Rushda; Shaw, Stuart. Published version, Open Access.
In this article we consider the extent to which the language used in on-screen examination questions ought to differ from that of paper-based exam questions. We argue that the assessment language in screen-based questions should be independent of the mode of delivery and should focus on the relevant and expected test-taker cognitive processing required by the task rather than on the format of the response. We contend that "medium-independent" language improves how well a question measures the knowledge, understanding and/or skills of interest by allowing learners to focus on its content rather than on extraneous, potentially contaminating factors such as technological literacy and mode familiarity. These factors may constitute sources of construct-irrelevant variance and therefore pose a threat to how scores awarded to a performance on a question are interpreted and used. To illustrate the arguments, examples from the Cambridge online Progression Tests are used.

Is comparative judgement just a quick form of multiple marking? (Research Division, Cambridge University Press & Assessment, 2018-10-01). Benton, Tom; Gallacher, Tom. Published version, Open Access.
This article describes an analysis of GCSE English essays that have been both scored using comparative judgement and marked multiple times.
The different methods of scoring are compared in terms of the accuracy with which the resulting scores predict achievement on a separate set of assessments. The results show that the predictive value of marking increases if multiple marking is used and (perhaps more interestingly) if statistical scaling is applied to the marks. More importantly, the evidence in this article suggests that any advantage of comparative judgement over traditional marking can be explained by the number of judgements made for each essay and by the use of a complex statistical model to combine them. In other words, it is the quantity of data collected about each essay, and how those data are analysed, that matters. The physical act of placing two essays side by side and deciding which is better does not appear to produce judgements that are in themselves any more valid than those obtained from having the same individual simply mark a set of essays.

How have students and schools performed on the Progress 8 performance measure? (Research Division, Cambridge University Press & Assessment, 2018-10-01). Gill, Tim. Published version, Open Access.
The new league table measures (Attainment 8 and Progress 8) are based on performance in a student's best eight subjects at GCSE (or equivalent). One criticism of the previous measures was that they penalised schools with a low-attaining intake. As Progress 8 is a value-added measure, it already accounts for the prior attainment of the student and should, in theory, no longer penalise these schools. The purpose of this research was to delve deeper into the relationship between Progress 8 scores and various student- and school-level factors. In particular, multilevel regression modelling was undertaken to infer which factors were most important in determining scores at student level.
The results showed that various groups of students were predicted to achieve higher Progress 8 scores, including girls, less deprived students, students without SEN, and students in schools with a higher-performing intake. At the school level, higher Progress 8 scores were found amongst schools with higher-performing intakes. This suggests that one of the main aims of the new measures (levelling the playing field) has not been completely achieved.

Characteristics, uses and rationales of mark-based and grade-based assessment (Research Division, Cambridge University Press & Assessment, 2018-10-01). Williamson, Joanna. Published version, Open Access.
Mark-based assessment requires assessors to assign numerical marks to candidates' work, assisted by a mark scheme. In grade-based approaches, assessors evaluate candidates' work against grading criteria to decide upon a grade, avoiding marks altogether. This article outlines the characteristics, uses and rationales of the two approaches, focusing particularly on their suitability for assessment in vocational and technical qualifications.

Articulation Work: How do senior examiners construct feedback to encourage both examiner alignment and examiner development? (Research Division, Cambridge University Press & Assessment, 2018-10-01). Johnson, Martin. Published version, Open Access.
This is a study of the marking feedback given to a group of examiners by their Team Leaders (more senior examiners who oversee and monitor the quality of examiner marking in their team). This feedback has an important quality assurance function but also has a developmental dimension, allowing less senior examiners to gain insights into the thinking of more senior examiners. Viewed from this perspective, marking feedback supports a form of examiner professional learning. This study set out to examine this area of examiner practice in detail.
To do this, I captured and analysed a set of feedback interactions involving 30 examiners across three Advanced level General Certificate of Education subjects. For my analysis, I used a mixture of learning theory and sociological theory to explore how the feedback was being used and how it achieved its dual goals of examiner monitoring and examiner development.