Research Matters 23

Recent Submissions

  • Open Access, Published version, Peer-reviewed
    Research Matters 23: Spring 2017
    (Research Division, Cambridge University Press & Assessment, 2017-03-01) Green, Sylvia
    Research Matters is a free biannual publication that allows Cambridge University Press & Assessment to share its assessment research in a range of fields with the wider assessment community.
  • Open Access, Published version, Peer-reviewed
    Tweeting about exams: Investigating the use of social media over the summer 2016 session
    (Research Division, Cambridge University Press & Assessment, 2017-03-01) Sutch, Tom; Klir, Nicole
    In recent years, social media discussion of particular GCSE and A level exams and questions has led to coverage in the national media. Using exam-related tweets collected from Twitter in real time, we investigated the extent of this phenomenon, the topics being discussed and the sentiments being expressed. We quantified sentiment by monitoring the occurrence of popularly used emoji within the tweets. We found that the overall volume of tweets followed weekly and daily patterns, with activity peaking in the periods just before and after exams. Discussion of particular subjects was concentrated on days when relevant exams took place. When we focused on the Mathematics GCSE papers sat on a particular day, we were able to identify several distinct phases based on the words and emoji used in tweets: discussion switched from revision to wishing others luck before the exam, then to reflecting on performance and discussing individual questions afterwards.
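
    A minimal Python sketch of the emoji-counting approach described above. The emoji lexicon, sentiment labels and example tweets are invented for illustration, since the abstract does not specify which emoji were tracked or how sentiments were categorised.

        from collections import Counter

        # Hypothetical mini-lexicon mapping emoji to a sentiment label
        # (the article's actual emoji list is not given in the abstract).
        EMOJI_SENTIMENT = {
            "😭": "negative", "😱": "negative",
            "🙏": "hopeful", "🍀": "hopeful",
            "🎉": "positive", "😊": "positive",
        }

        def emoji_sentiment_counts(tweets):
            """Count sentiment-bearing emoji occurrences across tweets."""
            counts = Counter()
            for tweet in tweets:
                for char in tweet:  # these emoji are single code points
                    if char in EMOJI_SENTIMENT:
                        counts[EMOJI_SENTIMENT[char]] += 1
            return counts

        tweets = [
            "good luck everyone with maths GCSE tomorrow 🙏🍀",
            "that last question 😭😭 what even was that",
            "it's over!! 🎉😊",
        ]
        print(emoji_sentiment_counts(tweets))
        # Counter({'hopeful': 2, 'negative': 2, 'positive': 2})
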
  • Open Access, Published version, Peer-reviewed
    The clue in the dot of the ‘i’: Experiments in quick methods for verifying identity via handwriting
    (Research Division, Cambridge University Press & Assessment, 2017-03-01) Benton, Tom
    This article demonstrates some simple and quick techniques for comparing the style of handwriting between two exams. This could be a useful way of checking that the same person has taken all of the different components leading to a qualification, and could form one part of the effort to ensure qualifications are only awarded to those candidates who have personally completed the necessary assessments. The advantage of this form of identity checking is that it is based upon data (in the form of images) that is already routinely stored as part of the process of on-screen marking. This article shows that some simple metrics can quickly identify candidates whose handwriting shows a suspicious degree of change between occasions. However, close scrutiny of some of these scripts provides some reasons for caution in assuming that all cases of changing handwriting represent the presence of imposters. Some cases of apparently different handwriting also include aspects that indicate they may come from the same author. In other cases, the style of handwriting may change even within the same examination response.
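
    A rough Python sketch of the kind of quick, image-based comparison the abstract describes. The two features used here (ink coverage and a horizontal projection profile) are invented stand-ins, not the article's actual metrics, which the abstract does not name.

        import numpy as np

        def style_features(ink):
            """Crude style summary of a binarised script image (True = ink)."""
            density = ink.mean()            # overall ink coverage
            rows = ink.mean(axis=1)         # horizontal projection profile
            profile = rows / rows.sum() if rows.sum() else rows
            return density, profile

        def style_distance(ink_a, ink_b):
            """Distance between two scripts; large values flag manual review."""
            d_a, p_a = style_features(ink_a)
            d_b, p_b = style_features(ink_b)
            n = min(len(p_a), len(p_b))
            return abs(d_a - d_b) + np.abs(p_a[:n] - p_b[:n]).sum()

        rng = np.random.default_rng(0)
        script1 = rng.random((200, 150)) < 0.10  # sparse, light handwriting
        script2 = rng.random((200, 150)) < 0.30  # much denser handwriting
        print(style_distance(script1, script1.copy()))  # 0.0: identical scripts
        print(style_distance(script1, script2))         # larger: flag for scrutiny
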
  • Open Access, Published version, Peer-reviewed
    Evaluating blended learning: Bringing the elements together
    (Research Division, Cambridge University Press & Assessment, 2017-03-01) Bowyer, Jess; Chambers, Lucy
    This article provides a brief introduction to blended learning, its benefits, and factors to consider when implementing a blended learning programme. It then concentrates on how to evaluate a blended learning programme and describes a number of published evaluation frameworks. There are numerous frameworks and instruments for evaluating blended learning, although no particular one seems to be favoured in the literature. This is partly due to the diversity of reasons for evaluating blended learning systems, as well as the many intended audiences and perspectives for these evaluations. The article concludes by introducing a new framework which brings together many of the constructs from existing frameworks whilst adding new elements. Its aim is to encompass all aspects of the blended learning situation, permitting researchers and evaluators to easily identify the relationships between the different elements whilst still enabling focussed and situated evaluation.
  • Open Access, Published version, Peer-reviewed
    An analysis of the effect of taking the EPQ on performance in other Level 3 qualifications
    (Research Division, Cambridge University Press & Assessment, 2017-03-01) Gill, Tim
    The Extended Project Qualification (EPQ) is a stand-alone qualification taken by sixth form students. It involves undertaking a substantial project, where the outcome can range from writing a dissertation or report to putting on a performance. It is possible that some of the skills learnt by students whilst undertaking their project (e.g. independent research, problem-solving) could help them in other qualifications taken at the same time. Two separate investigations were undertaken: firstly, the performance of individual students was analysed, using a multilevel regression model to compare EPQ and non-EPQ students. The results showed a small but statistically significant effect, with those taking the EPQ achieving better results on average in their A levels. The second investigation analysed performance at school level, using regression to model the effect of increasing the percentage of students in a school taking the EPQ. The results showed a significant and positive effect of increasing this percentage. However, the effect was very small.
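
    A minimal sketch of the first investigation's kind of analysis: a multilevel (mixed-effects) regression in Python with statsmodels, fitted to synthetic data. The variable names, the prior-attainment control and the simulated effect size are assumptions for the example, not the study's actual specification.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic stand-in for the candidate records: students nested
        # within schools, with prior attainment as a control variable.
        rng = np.random.default_rng(42)
        n_schools, per_school = 50, 40
        school = np.repeat(np.arange(n_schools), per_school)
        school_effect = rng.normal(0, 0.5, n_schools)[school]
        prior = rng.normal(0, 1, n_schools * per_school)
        epq = rng.integers(0, 2, n_schools * per_school)  # 1 = took the EPQ
        # Build in a small positive EPQ effect, as the abstract reports.
        a_level = (0.8 * prior + 0.1 * epq + school_effect
                   + rng.normal(0, 1, n_schools * per_school))

        df = pd.DataFrame({"school": school, "prior": prior,
                           "epq": epq, "a_level": a_level})

        # Random intercept per school, so the EPQ coefficient is
        # estimated net of between-school differences.
        model = smf.mixedlm("a_level ~ epq + prior", df, groups=df["school"])
        print(model.fit().summary())  # the 'epq' row is the adjusted effect
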
  • Open Access, Published version, Peer-reviewed
    A review of instruments for assessing complex vocational competence
    (Research Division, Cambridge University Press & Assessment, 2017-03-01) Greatorex, Jackie; Johnson, Martin; Coleman, Victoria
    The aim of the research was to explore the measurement qualities of checklists and Global Rating Scales (GRS) in the context of assessing complex competence. Firstly, we reviewed the literature about the affordances of human judgement and the mechanical combination of human judgements. Secondly, we reviewed examples of checklists and GRS which are used to assess complex competence in highly regarded professions. These examples served to contextualise and elucidate assessment matters. Thirdly, we compiled research evidence from the outcomes of systematic reviews which compared the advantages and disadvantages of checklists and GRS. Together, the evidence provides a nuanced and firm basis for conclusions. Overall, the literature shows that mechanical combination can outperform the human integration of evidence when assessing complex competence, and that therefore a good use of human judgements is in making decisions about individual traits, which are then mechanically combined. The weight of evidence suggests that GRS generally achieve better reliability and validity than checklists, but that a high-quality checklist is better than a poor-quality GRS. The review is a reminder that involving assessors in the design of assessment instruments can help to maximise manageability.
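
    A toy Python illustration of the "mechanical combination" idea above: assessors rate individual traits, and a fixed rule, rather than a holistic human judgement, produces the overall result. The trait names and weights are invented for the example.

        # Human judgements are confined to the per-trait ratings;
        # combining them into an overall score is purely mechanical.
        TRAIT_WEIGHTS = {
            "communication": 0.3,
            "technique": 0.4,
            "safety": 0.3,
        }

        def combine_mechanically(ratings):
            """Weighted average of per-trait ratings (each on a 1-5 scale)."""
            return sum(TRAIT_WEIGHTS[t] * r for t, r in ratings.items())

        candidate = {"communication": 4, "technique": 3, "safety": 5}
        print(round(combine_mechanically(candidate), 2))  # 3.9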