Research Matters 10

Recent Submissions

  • Published version, Open Access
    Research Matters 10: June 2010
    (Research Division, Cambridge University Press & Assessment, 2010-06-01) Green, Sylvia
    Research Matters is a free biannual publication which allows Cambridge University Press & Assessment to share its assessment research, in a range of fields, with the wider assessment community. 
  • Published version, Open Access
    Why use computer-based assessment in education? A literature review
    (Research Division, Cambridge University Press & Assessment, 2010-06-01) Haigh, Matt
    The aim of this literature review is to examine the evidence around the claims made for the shift towards computer-based assessment (CBA) in educational settings. In this examination of the literature, a number of unevidenced areas are uncovered, and the resulting discussion provides the basis for suggested further research alongside practical considerations for the application of CBA.
  • Published version, Open Access
    Towards an understanding of the impact of annotations on returned exam scripts
    (Research Division, Cambridge University Press & Assessment, 2010-06-01) Johnson, Martin; Shaw, Stuart
    There is little empirical study of practices around scripts returned to centres. Returned scripts often include information from examiners about the performance being assessed: as well as the total score given for the performance, additional information is carried in the annotations left on the script by the marking examiner. Examiners' annotations have been the subject of a number of research studies (Crisp and Johnson, 2007; Johnson and Shaw, 2008; Johnson and Nadas, 2009) but, as far as we know, there has been no research into how this information is used by centres or candidates, or whether it has any influence on future teaching and learning. This study set out to look at how teachers and students interact with examiners' annotations on scripts, using survey and interview methods to explore: 1. How do teachers and centres use annotations? 2. What is the scale of such use? 3. What importance is attached to the annotations? 4. What factors might influence the interpretation of the annotations?
  • Published version, Open Access
    Response to Cambridge Assessment's seminar on Critical Thinking, February 2010
    (Research Division, Cambridge University Press & Assessment, 2010-06-01) Chislett, Joe
    In this article an experienced teacher of Critical Thinking discusses whether or not Critical Thinking could, or should, be 'embedded' into other subjects, rather than taught and assessed as a standalone subject in its own right.
  • Published version, Open Access
    Must examiners meet in order to standardise their marking? An experiment with new and experienced examiners of GCE AS Psychology
    (Research Division, Cambridge University Press & Assessment, 2010-06-01) Raikes, Nick; Fidler, Jane; Gill, Tim
    When high-stakes examinations are marked by a panel of examiners, the examiners must be standardised so that candidates are not advantaged or disadvantaged according to which examiner marks their work. It is common practice for Awarding Bodies' standardisation processes to include a "Standardisation" or "Co-ordination" meeting, where all examiners meet to be briefed by the Principal Examiner and to discuss the application of the mark scheme in relation to specific examples of candidates' work. However, research into the effectiveness of standardisation meetings has cast doubt on their usefulness, at least for experienced examiners. In the present study we addressed the following research questions: 1. What is the effect on marking accuracy of including a face-to-face meeting as part of an examiner standardisation process? 2. How does the effect on marking accuracy of a face-to-face meeting vary with the type of question being marked (short-answer or essay) and the level of experience of the examiners? 3. To what extent do examiners carry forward standardisation on one set of questions to a different but very similar set of questions?
  • Published version, Open Access
    Is CRAS a suitable tool for comparing specification demands from vocational qualifications?
    (Research Division, Cambridge University Press & Assessment, 2010-06-01) Greatorex, Jackie; Rushton, Nicky
    The aim of the research was to ascertain whether a framework of cognitive demands, known as CRAS, is a suitable tool for comparing the demands of vocational qualifications. CRAS was developed for use with academic examinations and may not tap into the variety of demands which vocational qualifications place on candidates. Data were taken from a series of comparability studies by awarding bodies and the national regulator. The data were the frameworks (often questionnaires) used to compare qualifications in these studies. All frameworks were mapped to CRAS. It was found that most aspects of the various frameworks mapped to an aspect of CRAS. However, there were demands which did not map to CRAS; these were mostly affective and interpersonal demands, such as working in a team. Affective and interpersonal domains are significant in vocational qualifications; therefore, using only CRAS to compare vocational qualifications is likely to omit key demands from the comparison.
  • Published version, Open Access
    Developing and piloting a framework for the validation of A levels
    (Research Division, Cambridge University Press & Assessment, 2010-06-01) Shaw, Stuart; Crisp, Vicki
    Validity is a key principle of assessment, a central aspect of which relates to whether the interpretations and uses of test scores are appropriate and meaningful (Kane, 2006). For this to be the case, various criteria must be achieved, such as good representation of intended constructs and avoidance of construct-irrelevant variance. Additionally, some conceptualisations of validity include consideration of the consequences that may result from the assessment, such as effects on classroom practice. The kinds of evidence needed may vary depending on the intended uses of assessment outcomes. For example, if assessment results are designed to be used to inform decisions about future study or employment, it is important to ascertain that the qualification acts as suitable preparation for this study or employment, and to some extent predicts likely success. This article reports briefly on the development, piloting and revision of a framework and methodology for validating general academic qualifications such as A levels. The development drew on previously proposed frameworks for validation from the literature, and the resulting framework and set of methods were piloted with International A level Geography. This led to revisions to the framework, which was then used with A level Physics.
  • Published version, Open Access
    A tricky task for teachers: assessing pre-university students' research reports
    (Research Division, Cambridge University Press & Assessment, 2010-06-01) Suto, Irenka; Shaw, Stuart
    In the UK and internationally, many students preparing for university are given the challenge of conducting independent research and writing up a report of around 4,000 or 5,000 words. Such research activities provide students with opportunities to investigate a specialist area of study in greater depth, to cross boundaries with an inter-disciplinary enquiry, or to explore a novel non-school subject such as archaeology, cosmology or anthropology. In this study, we explored the feasibility of applying a single mark scheme to research reports covering diverse topics in order to reward generic research skills. Our aim was to investigate the reliability with which teachers can mark diverse research reports, using four different generic assessment objectives. We also investigated teachers' views on applying generic mark schemes, particularly when marking reports on unfamiliar topics. Our analyses indicated that marking reliability was good though, as for almost all qualifications, imperfect. Possible explanations for marking difficulty related to subject knowledge, the clarity of student thought, and the overall level of student performance.
  • Published version, Open Access
    A review of literature on item-level marker agreement: implications for on-screen marking monitoring research and practice
    (Research Division, Cambridge University Press & Assessment, 2010-06-01) Curcin, Milja
    This review article focuses mainly on the literature relevant to the inter-marker agreement aspect of marking reliability in the context of on-screen marking. The increasing use of on-screen in place of paper-based marking presents new possibilities for monitoring marking and ensuring higher agreement levels, but also raises questions about the most efficient and beneficial use of the marker agreement information that is routinely collected in this process, both in monitoring practice and in research.
  • Published version, Open Access
    "It's not like teaching other subjects" - the challenges of introducing Critical Thinking AS level in England
    (Research Division, Cambridge University Press & Assessment, 2010-06-01) Black, Beth
    This article focuses on the introduction of Critical Thinking AS level into schools in England. In 2001, 130 schools entered just over 2,000 candidates in total for the AS level. By 2009, this had increased to over 1,000 schools entering over 22,000 candidates. However, candidate 'success' at Critical Thinking (in terms of the proportion of grade As and passes) remained relatively low. The article explores three potential explanations for this.