Research Matters 07
Recent Submissions
Open Access | Published version | Peer-reviewed
Research Matters 7: January 2009 (Research Division, Cambridge University Press & Assessment, 2009-01-01)
Green, Sylvia

Research Matters is a free biannual publication which allows Cambridge University Press & Assessment to share its assessment research, across a range of fields, with the wider assessment community.

Open Access | Published version | Peer-reviewed
Using ‘thinking aloud’ to investigate judgements about A-level standards: Does verbalising thoughts result in different decisions? (Research Division, Cambridge University Press & Assessment, 2009-01-01)
Greatorex, Jackie; Nadas, Rita

The 'think aloud' method entails people verbalising their thoughts while they do tasks, resulting in 'verbal protocols'. Researchers analyse these verbal protocols to identify cognitive strategies and processes, as well as the factors that affect decision making. Verbal protocols have been widely used to study decisions in educational assessment. The main methodological concern about using verbal protocols is whether thinking aloud compromises ecological validity (the authenticity of the thought processes) and thus the decision outcomes. Researchers have investigated the extent to which verbalising affects the thinking processes under investigation in a variety of settings. The research literature is generally inconclusive; most results show only longer performance times, with no change in task outcomes. Previous research on marking collected decision outcomes from two conditions: (1) marking silently and (2) marking whilst thinking aloud. The mark/re-mark differences were the same in the two conditions. However, it is important to confirm whether verbalising affects decisions about grading standards. Our main aim was therefore to compare the outcomes of senior examiners making decisions about grading standards silently as opposed to whilst thinking aloud. Our article draws on a wider project taking three approaches to grading. In experimental conditions, senior examiners used these three approaches to make decisions about A-level grading standards for a science examination, both silently and whilst thinking aloud. All scripts included in the research had achieved a grade A or B in the live examination. The decisions from the silent and verbalising conditions were statistically compared. Our interim findings suggest that verbalising made little difference to the participants' decisions, in line with previous research in other contexts. The findings reassure us that verbal protocols are a useful method for research on decision making in both marking and grading.
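
The abstract does not specify which statistical test was used to compare the two conditions. As a purely illustrative sketch, with hypothetical data and a paired t-test chosen as one reasonable option rather than the authors' actual analysis, such a comparison might look like this in Python:

    # Illustrative only: hypothetical paired boundary judgements made by the
    # same examiners under the two conditions; the article does not name the
    # test used, so a paired t-test is shown as one reasonable choice.
    from scipy.stats import ttest_rel

    silent     = [54, 56, 55, 57, 54, 58, 55]   # boundary mark, silent
    verbalised = [55, 57, 54, 57, 56, 56, 54]   # boundary mark, thinking aloud

    t, p = ttest_rel(silent, verbalised)        # test on the paired differences
    print(f"t = {t:.2f}, p = {p:.3f}")
    # A non-significant p-value would be consistent with the finding that
    # verbalising made little difference to the decisions.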

Open Access | Published version | Peer-reviewed
Grading examinations using expert judgements from a diverse pool of judges (Research Division, Cambridge University Press & Assessment, 2009-01-01)
Raikes, Nick; Scorey, Sara; Shiell, Hannah

In normal procedures for grading GCE Advanced level and GCSE examinations, an Awarding Committee of senior examiners recommends grade boundary marks based on their judgement of the quality of scripts, informed by technical and statistical evidence. The aim of our research was to investigate whether an adapted Thurstone Pairs methodology (see Bramley and Black, 2008; Bramley, Gill and Black, 2008) could enable a more diverse range of judges to take part. The key advantage of the Thurstone method for our purposes is that it enables two examinations to be equated via judges making direct comparisons of scripts from both examinations, and does not depend on the judges' internal conceptions of the standard required for any grade. A General Certificate of Education (GCE) Advanced Subsidiary (AS) unit in biology provided the context for the study reported here. The June 2007 and January 2008 examinations from this unit were equated using paired comparison data from four groups of judges: members of the existing Awarding Committee; other examiners who had marked the scripts operationally; teachers who had taught candidates for the examinations but not marked them; and university lecturers who teach biology to first-year undergraduates. We found very high levels of intra-group and inter-group reliability for the scales and measures estimated from all four groups' judgements. When boundary marks for January 2008 were estimated from the equated June 2007 boundaries, there was considerable agreement between the estimates made from each group's data. Indeed, for four of the boundaries (grades B, C, D and E), the estimates from the Awarders', examiners' and lecturers' data were no more than one mark apart, and none of the estimates were more than three marks apart. We concluded that the examiners, teachers, lecturers and members of the current Awarding Committee made very similar judgements, and that members of all four groups could take part in a paired comparison exercise for setting grade boundaries without compromising reliability.
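
For readers unfamiliar with the method, the sketch below shows the kind of latent-trait model that underlies Thurstone pairs equating: each comparison records which of two scripts a judge considered better, and a quality estimate for every script is then fitted from the wins and losses. The data are hypothetical and the simple Bradley-Terry-style fit is illustrative only, not the authors' implementation (for which see Bramley and Black, 2008):

    # Sketch of a Bradley-Terry-style fit to paired script comparisons.
    # Data are hypothetical (winner, loser) pairs; IDs mark the session.
    import math
    from collections import defaultdict

    comparisons = [("J07_a", "J08_b"), ("J08_b", "J07_c"), ("J07_c", "J08_d"),
                   ("J08_d", "J07_a"), ("J07_a", "J08_d"), ("J08_b", "J07_a"),
                   ("J07_c", "J08_b"), ("J08_d", "J07_c"), ("J07_a", "J07_c")]

    scripts = {s for pair in comparisons for s in pair}
    theta = {s: 0.0 for s in scripts}       # latent quality, logit scale

    for _ in range(500):                    # plain gradient ascent on the
        grad = defaultdict(float)           # Bradley-Terry log-likelihood
        for w, l in comparisons:
            p_win = 1 / (1 + math.exp(theta[l] - theta[w]))
            grad[w] += 1 - p_win
            grad[l] -= 1 - p_win
        for s in scripts:
            theta[s] += 0.1 * grad[s]
        mean = sum(theta.values()) / len(theta)
        theta = {s: v - mean for s, v in theta.items()}   # fix the origin

    # Scripts from both sessions now sit on one common scale, so a boundary
    # set in one session can be read across to the other: the equating step.
    for s, v in sorted(theta.items(), key=lambda kv: -kv[1]):
        print(f"{s}: {v:+.2f}")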

Open Access | Published version | Peer-reviewed
De-mystifying the role of the uniform mark in assessment practice: concepts, confusions and challenges (Research Division, Cambridge University Press & Assessment, 2009-01-01)
Gray, Elizabeth; Shaw, Stuart

The search for an adequate conceptualisation of the Uniform Mark Scale (UMS) is a challenging one, and it is clear that there is a need to broaden current discussions of the issues involved. This article is an attempt to demystify the UMS: its conception and operation. Although the article assumes a basic appreciation of the terminology and processes associated with the examination system, it explicates, through a number of case study scenarios, the contexts in which it is appropriate to employ UMS, describes the necessary computations arising from different specifications and assessment scenarios, and addresses some of the potential challenges posed by the calculation of grades for unitised specifications. A specification here refers to a comprehensive description of a qualification, including both obligatory and optional features: content, and any performance requirements. If a specification is unitised, the constituent units can be separately delivered, assessed and certificated. A clear and well-articulated position on the underlying theory of UMS is necessary to demonstrate transparency in the estimation of aggregate performance on unitised assessments and to support any claims we wish to make about the reporting process. It is hoped that the issues addressed here will make a positive contribution to the widening UMS debate (both within and beyond Cambridge Assessment) in general, and to the understanding, operation and employment of UMS in particular.
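
By way of illustration, the sketch below shows a simplified raw-to-uniform-mark conversion of the kind the article discusses: session-specific raw grade boundaries are mapped onto fixed uniform boundaries, with linear interpolation in between. All boundary values here are hypothetical, and real UMS rules include further conventions (for example, above the top grade boundary) that this sketch omits:

    # Simplified sketch of a raw-to-UMS conversion: session-specific raw
    # grade boundaries map onto fixed uniform-mark boundaries, with linear
    # interpolation between adjacent boundary pairs. Boundary values below
    # are hypothetical.
    def raw_to_ums(raw, raw_bounds, ums_bounds):
        """Boundary lists run from the lowest grade up to the raw/UMS maximum."""
        if raw <= raw_bounds[0]:                     # below the lowest grade:
            return round(raw * ums_bounds[0] / raw_bounds[0])   # scale from 0
        for (r0, r1), (u0, u1) in zip(zip(raw_bounds, raw_bounds[1:]),
                                      zip(ums_bounds, ums_bounds[1:])):
            if raw <= r1:                            # interpolate in segment
                return round(u0 + (raw - r0) * (u1 - u0) / (r1 - r0))
        return ums_bounds[-1]                        # cap at full marks

    raw_bounds = [32, 39, 46, 53, 60, 90]    # E, D, C, B, A boundaries + max
    ums_bounds = [40, 50, 60, 70, 80, 100]   # fixed uniform equivalents

    print(raw_to_ums(56, raw_bounds, ums_bounds))    # 74: between B and A

Because the uniform boundaries stay fixed while the raw boundaries move from session to session, uniform marks from different units and sessions can be aggregated on a common scale.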

Open Access | Published version | Peer-reviewed
Can emotional and social abilities predict differences in attainment at secondary school? (Research Division, Cambridge University Press & Assessment, 2009-01-01)
Vidal Rodeiro, Carmen; Bell, John; Emery, Joanne

Trait emotional intelligence (trait EI) covers a wide range of self-perceived skills and personality dispositions, such as motivation, confidence, optimism, peer relations and coping with stress. In recent years there has been a growing awareness that social and emotional factors play an important part in students' academic success, and it has been claimed that students with high scores on a trait EI measure perform better. This research investigated whether scores on a questionnaire measure of trait EI were related to school performance in a sample of British pupils. Trait EI was measured with the Trait Emotional Intelligence Questionnaire. Participants completed the questionnaire prior to the June 2007 examination session, and their responses were matched to their Key Stage 3 and GCSE results. The research showed that some aspects of trait EI (motivation and low impulsivity), as well as total trait EI, were significant predictors of attainment in GCSE subjects after controlling for prior attainment at school.

Open Access | Published version | Peer-reviewed
Assessment instruments over time (Research Division, Cambridge University Press & Assessment, 2009-01-01)
Elliott, Gill; Curcin, Milja; Bramley, Tom; Ireland, Jo; Gill, Tim; Black, Beth

As Cambridge Assessment celebrated its 150th anniversary in 2008, members of the Evaluation and Psychometrics Team looked back at question papers over the years. Details of the question papers and examples of questions were used to illustrate the development of seven subjects: Mathematics, Physics, Geography, Art, French, Cookery and English Literature. Two clear themes emerged across most subjects: an increasing emphasis on real-world contexts in more recent years, and an increasing choice of topic areas and question/component options available to candidates.

Open Access | Published version | Peer-reviewed
All the right letters – just not necessarily in the right order. Spelling errors in a sample of GCSE English scripts (Research Division, Cambridge University Press & Assessment, 2009-01-01)
Elliott, Gill; Johnson, Nat

For the past ten years, Cambridge Assessment has been running a series of investigations into features of GCSE English candidates' writing: the Aspects of Writing study (Massey et al., 1996; Massey et al., 2005). The studies have sampled a fragment of writing taken from the narrative writing of 30 boys and 30 girls at every grade at GCSE. Features investigated have included the correct and incorrect use of various forms of punctuation, sophistication of vocabulary, non-standard English, sentence types and the frequency of spelling errors. This paper provides a more detailed analysis of the nature of the spelling errors identified in the sample of work obtained for the Aspects of Writing project from unit 3 (Literary Heritage and Imaginative Writing) of the 2004 OCR GCSE examination in English. Are there certain types of spelling error which occur more frequently than others? Do particular words crop up over and over again? How many errors relate to well-known spelling rules, such as "I before E except after C"? The study identified 345 spelling errors in the 11,730 words written; these were reported in Massey et al. (2005), with a comparison by grade with samples of writing from 1980, 1993 and 1994. It was shown that a considerable decline in spelling in the early 1990s (compared with 1980) had been halted, and that at the lower grades spelling had improved. Since then, we have conducted a detailed analysis of the 345 misspelled words to see if there is evidence of particular types of error. Each misspelling has been categorised, and five broad types of error identified: sound-based errors; rules-based errors; errors of commission, omission and transposition; writing errors; and multiple errors. This paper presents a detailed examination of the misspellings and the process of developing the categorisation system used. A number of words - woman, were, where, watch(ing), too and the homophones there/their and knew/new - are identified as the most frequently misspelled. Implications of the findings for teaching and literacy policy are discussed.
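
As a small illustration of the kind of tally such a categorisation supports, the sketch below counts hypothetical, hand-labelled misspellings against the five broad error types named above and reports the overall error rate implied by the study's figures. The example words and their labels are invented for illustration, not taken from the study's data:

    # Illustrative only: tallies hypothetical hand-categorised misspellings
    # into the five broad error types named in the abstract, then reports
    # the overall error rate implied by the study's published figures.
    from collections import Counter

    errors = [("woman", "sound-based"),
              ("recieve", "rules-based"),
              ("wich", "commission/omission/transposition"),
              ("teh", "commission/omission/transposition"),
              ("there", "sound-based"),
              ("watcing", "writing")]

    counts = Counter(category for _, category in errors)
    for category, n in counts.most_common():
        print(f"{category}: {n}")

    # 345 errors in 11,730 words, as reported above.
    print(f"overall error rate: {345 / 11_730:.1%}")   # roughly 2.9%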