Research Matters 24


Recent Submissions

  • Research Matters 24: Autumn 2017
    (Research Division, Cambridge University Press & Assessment, 2017-10-01) Bramley, Tom
    Research Matters is a free biannual publication which allows Cambridge University Press & Assessment to share its assessment research, in a range of fields, with the wider assessment community. 
  • Utilising technology in the assessment of collaboration: A critique of PISA’s collaborative problem-solving tasks
    (Research Division, Cambridge University Press & Assessment, 2017-10-01) Shaw, Stuart; Child, Simon
    This article presents the outcomes of an exercise which we conducted to map the assessment approach of PISA 2015 to pertinent facets of the collaborative process, and recent theoretical developments related to engenderment of collaboration within assessment tasks. PISA's assessment of collaborative problem-solving was mapped onto six facets of collaboration identified in a recent review of the literature (Child & Shaw, 2016) and five elements of task design that were identified in the previous review as contributing to the optimal engenderment of collaborative activity. The mapping approach afforded the opportunity to investigate in detail the advantages and disadvantages of PISA's approach to the use of technology in their assessment of collaboration. The present article's critique of PISA could lead to future work that analyses the elements of the process of collaboration that have been targeted effectively, and areas for future improvement. This will be of interest to awarding organisations and others that are looking to develop qualifications in this important twenty-first century skill.
  • Undergraduate Mathematics students’ views of their pre-university mathematical preparation
    (Research Division, Cambridge University Press & Assessment, 2017-10-01) Darlington, Ellie; Bowyer, Jess
    In response to planned reforms to A levels, undergraduate mathematicians who had taken AS or A level Further Mathematics were surveyed. A total of 928 mathematics undergraduates at 42 British universities responded to an online questionnaire regarding their experiences of A level Mathematics and Further Mathematics, and their mathematical preparedness for undergraduate study. Students' responses suggest that Further Mathematics is a worthwhile qualification for undergraduate Mathematics applicants to take, in terms of the mathematical background with which it provides students. Participants described Further Pure Mathematics most favourably of all of the optional strands of study within Further Mathematics, with Statistics and Mechanics receiving positive feedback and Decision Mathematics receiving negative feedback. Participants who were not required to have taken Further Mathematics in order to be accepted onto their university course were generally more enthusiastic about their experience of it, and perceived it as more useful, than those who were required to have taken it. This suggests that it would be beneficial for prospective undergraduate mathematicians to study A level Further Mathematics, regardless of whether or not the universities they apply to require it for entry.
  • Question selection and volatility in schools’ Mathematics GCSE results
    (Research Division, Cambridge University Press & Assessment, 2017-10-01) Crawford, Cara
    This research estimated the extent to which volatility in schools' scores may be attributable to changes in the selection of questions in exam papers. This question was addressed by comparing candidates' performance on two halves of the same assessment. Once student grades were calculated for each half-test, these were aggregated within each school to form school-level outcomes for each half-test (e.g., percentage of students with a grade C or above). Comparing the variation in schools' outcomes for their students' performance on two parts of a single test should give us some idea of the amount of variation in actual year-to-year results that could be due to changes in test questions. This process was applied to a Mathematics GCSE and the results suggest that the exact choice of questions in an exam has only a small impact on school-level results.
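    The split-half comparison described in the abstract can be sketched as follows. This is an illustration only, not the study's actual code: the marks, number of schools and the pass threshold are all invented for the example.

    ```python
    # Sketch of a split-half volatility check: compute a school-level outcome
    # (percentage of candidates at or above a pass mark) separately for two
    # halves of the same exam, then look at the spread of between-half
    # differences across schools. All data here are simulated.
    import random
    import statistics

    random.seed(1)

    # Hypothetical candidate marks on the two half-tests, grouped by school.
    schools = {
        f"school_{i}": [(random.gauss(50, 10), random.gauss(50, 10)) for _ in range(30)]
        for i in range(20)
    }

    PASS_MARK = 50  # stand-in for a grade C threshold

    def pct_passing(marks):
        """Percentage of candidates at or above the pass mark."""
        return 100 * sum(m >= PASS_MARK for m in marks) / len(marks)

    # School-level outcome on each half-test.
    half_a = [pct_passing([a for a, _ in cands]) for cands in schools.values()]
    half_b = [pct_passing([b for _, b in cands]) for cands in schools.values()]

    # The spread of between-half differences indicates how much volatility
    # could be attributed to question selection alone.
    diffs = [a - b for a, b in zip(half_a, half_b)]
    print(statistics.stdev(diffs))
    ```

    Since both halves are taken by the same candidates on the same day, any between-half variation cannot be due to teaching or cohort changes, which is what makes the comparison informative about question selection.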
  • Partial absences in GCSE and AS/A level examinations
    (Research Division, Cambridge University Press & Assessment, 2017-10-01) Vidal Rodeiro, Carmen
    There are certain situations in which a candidate does not have a mark for a component/unit in a GCSE or AS/A level examination: for example, if they were ill on the day of the exam, if their paper was lost, or if their controlled assessment was invalidated as a result of individual or centre malpractice. Subject to certain rules, the awarding body can calculate an estimated mark for the component/unit with the missing mark to enable the candidate to certificate, rather than having to wait for the next assessment opportunity. This article explores the use of statistical methods for handling missing data, specifically regression imputation, to estimate the mark for a missing unit/component in GCSE and AS/A level qualifications. The marks (and grades) obtained in this way are compared with the marks (and grades) obtained applying two different methods currently used by some of the awarding boards in England: the z-score method and the percentile (cum% position) method.
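    The z-score method mentioned in the abstract can be illustrated roughly: a candidate's standardised position on a component they did sit is mapped onto the mark distribution of the component they missed. The cohort marks and variable names below are invented for illustration and the exact rules awarding bodies apply will differ.

    ```python
    # Hypothetical sketch of a z-score estimate for a missing component mark.
    # The candidate's z-score on a completed component is applied to the
    # mean and standard deviation of the missing component's distribution.
    import statistics

    # Invented cohort marks on the completed component and the missing one.
    completed_cohort = [42, 55, 61, 48, 70, 66, 53, 59]
    missing_cohort = [38, 50, 57, 44, 65, 60, 49, 54]

    candidate_completed_mark = 61

    # Candidate's standardised position on the component they did sit...
    z = (candidate_completed_mark - statistics.mean(completed_cohort)) \
        / statistics.stdev(completed_cohort)

    # ...applied to the distribution of the absent component.
    estimated = statistics.mean(missing_cohort) + z * statistics.stdev(missing_cohort)
    print(round(estimated, 1))
    ```

    Regression imputation, which the article compares against this, would instead predict the missing mark from a model fitted over candidates who sat both components.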
  • On the reliability of applying educational taxonomies
    (Research Division, Cambridge University Press & Assessment, 2017-10-01) Coleman, Victoria
    Educational taxonomies are classification schemes that organise thinking skills according to their level of complexity, providing a unifying framework and common terminology. They can be used to analyse and design educational materials, analyse students' levels of thinking and analyse and ensure alignment between learning objectives and corresponding assessment materials. Numerous educational taxonomies have been created, and this article reviews studies that have examined their reliability; among these, Bloom's taxonomy was frequently used. It was found that there were very few studies specifically examining the reliability of educational taxonomies. Furthermore, where reliability was measured, this was primarily inter-rater reliability with very few studies discussing intra-rater reliability. Many of the studies reviewed provided only limited information about how reliability was calculated and the type of reliability measure used varied greatly between studies. Finally, this article also highlights factors that influence reliability and that therefore offer potential avenues for improving reliability when using educational taxonomies, including training and practice, the use of expert raters, and the number of categories in a taxonomy. Overall it was not possible to draw conclusions about the reliability of specific educational taxonomies and it seems that the field would benefit from further targeted studies about their reliability.
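    To make the inter-rater reliability idea concrete, here is a small worked example using Cohen's kappa, one common chance-corrected agreement statistic (the article itself does not single out a particular measure, and the ratings below are invented).

    ```python
    # Illustration: two hypothetical raters assign taxonomy categories to ten
    # exam questions; Cohen's kappa corrects raw agreement for the agreement
    # expected by chance from each rater's category frequencies.
    from collections import Counter

    rater_a = ["recall", "apply", "analyse", "recall", "apply",
               "analyse", "recall", "apply", "recall", "analyse"]
    rater_b = ["recall", "apply", "recall", "recall", "apply",
               "analyse", "apply", "apply", "recall", "analyse"]

    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2

    kappa = (observed - expected) / (1 - expected)
    print(round(kappa, 2))
    ```

    Intra-rater reliability, which the review found was rarely reported, would apply the same statistic to one rater's classifications of the same items on two occasions.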
  • How much do I need to write to get top marks?
    (Research Division, Cambridge University Press & Assessment, 2017-10-01) Benton, Tom
    This article looks at the relationship between how much candidates write and the grade they are awarded in an English Literature GCSE examination. Although such analyses are common within computer-based testing, far less has been written about this relationship for traditional exams taken with a pen and paper. This article briefly describes how we estimated word counts based on images of exam scripts, validates the method against a short answer question from a Biology examination, and then uses the method to examine how the length of candidates' English Literature essays in an exam relates to the grade they were awarded. It shows that candidates awarded a grade A* wrote around 700 words on average in a 45-minute exam - an average rate of 15 words per minute across the period. In contrast, grade E candidates produced around 450 words - an average rate of 10 words per minute. Whilst it cannot be emphasised strongly enough that performance in GCSEs is judged by what students write and not how much, the results of this research may help students facing examinations have a reasonable idea of the kind of length that is generally expected.
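    The per-minute rates quoted in the abstract follow directly from the reported averages divided by the exam length; a quick check:

    ```python
    # Verify the writing rates implied by the reported average word counts
    # over a 45-minute exam.
    EXAM_MINUTES = 45
    avg_words = {"A*": 700, "E": 450}  # averages reported in the article

    rates = {grade: words / EXAM_MINUTES for grade, words in avg_words.items()}
    print(rates)  # A* comes to roughly 15.6 words/minute, E to exactly 10
    ```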