Research Matters Special Issue 2

Recent Submissions

  • Item (Open Access, Published version, Peer-reviewed)
    Research Matters Special Issue 2: Comparability
    (Research Division, Cambridge University Press & Assessment, 2011-10-01) Bramley, Tom
    In this Special Issue of Research Matters we present some of Cambridge Assessment’s recent thinking about comparability.
  • Item (Open Access, Published version, Peer-reviewed)
    The pitfalls and positives of pop comparability
    (Research Division, Cambridge University Press & Assessment, 2011-10-01) Rushton, Nicky; Haigh, Matt; Elliott, Gill
    The media debate about standards in public examinations has become an August ritual. The debate tends to be polarised, with reports of 'slipping standards' at odds with those claiming that educational prowess has increased. Some organisations have taken matters into their own hands and have carried out their own studies investigating the question. Some of these are similar to academic papers; others are closer in nature to a media campaign. Just as 'pop psychology' is a term used to describe psychological concepts which attain popularity amongst the wider public, so 'pop comparability' can be used to describe the evolution of a lay-person's view of comparability. Studies, articles or programmes which influence this wider view fall into this category and are often accessed by a much larger audience than academic papers. In this article, five of these studies are considered: Series 1 of the televised social experiment "That'll Teach 'em"; the Royal Society of Chemistry's Five-Decade Challenge; the Guardian's and the Times' journalists (re)sitting examinations to experience their difficulty; a feature by the BBC Radio 4 programme 'Today' (2009), in which students discussed exam papers from 1936; and a book of O level past papers and an associated newspaper article which described students' experiences of sitting the O level exams.
  • Item (Open Access, Published version, Peer-reviewed)
    The challenges for ensuring year-on-year comparability when moving from linear to unitised schemes at GCSE
    (Research Division, Cambridge University Press & Assessment, 2011-10-01) Forster, Mike
    In September 2009 new unitised specifications were introduced in England. These specifications were to be assessed in a modular way, throughout the course of study, rather than in a linear way (at the end of the course). At that time, OCR had a number of specifications that had been run in a unitised way for several years, and so was able to use this information to investigate the impact of unitisation. This meant we were able to look at the impact of resits; the terminal requirement (where 40% of the course had to be assessed at the end of the course); the trade-off between maturity and the bite-size (and hence smaller, more spread-out) nature of the assessments; the variation of unit and subject grades; and the impact of introducing a uniform mark scale, so that marks from different assessment series could be combined fairly. Furthermore, we 'unitised' a number of existing linear specifications to look at the impact unitisation might have on the stability of outcomes. This paper summarises the outcomes of these investigations.
  • Item (Open Access, Published version, Peer-reviewed)
    Subject difficulty - the analogy with question difficulty
    (Research Division, Cambridge University Press & Assessment, 2011-10-01) Bramley, Tom
    This article explores in depth one particular way of defining and measuring subject difficulty - the 'IRT approach'. First the IRT approach is briefly described. Then the analogy of using the IRT approach when the 'items' are examination subjects is explored. Next the task of defining difficulty from first principles is considered, starting from the simplest case of comparing two dichotomous items within a test. Finally, an alternative to the IRT approach, based on producing visual representations of differences in difficulty among just a few (three or four) examinations, is offered as an idea for future exploration.
  • Item (Open Access, Published version, Peer-reviewed)
    Linking assessments to international frameworks of language proficiency: the Common European Framework of Reference
    (Research Division, Cambridge University Press & Assessment, 2011-10-01) Jones, Neil
    Cambridge ESOL, the exam board within Cambridge Assessment which provides English language proficiency tests to 3.5 million candidates a year worldwide, uses the Common European Framework of Reference for Languages (CEFR) as an essential element of how it defines and interprets exam levels. Many in the UK who are familiar with UK language qualifications may still be unfamiliar with the CEFR, because most of these qualifications pay little attention to proficiency - how well a GCSE grade C candidate can actually communicate in French, for example, or whether this is comparable with the same grade in German. The issues of comparability which the CEFR addresses are thus effectively different in kind from those that arise for school exams in the UK, even if the comparisons made - over time, or across subjects - sound on the face of it similar. This article offers a brief introduction to the CEFR for those unfamiliar with it.
  • Item (Open Access, Published version, Peer-reviewed)
    Comparing different types of qualifications: an alternative comparator
    (Research Division, Cambridge University Press & Assessment, 2011-10-01) Greatorex, Jackie
    Returns to qualifications is a statistical measure of how much more is earned on average by people with a particular qualification compared to people with similar demographic characteristics who do not have the qualification. Awarding bodies and the national regulator do not generally use this research method in comparability studies, although it is prominent in government reviews of qualifications. This article considers what returns to qualifications comparability research can offer awarding bodies. This comparator enables researchers to make comparisons which cannot be achieved by other methods, for instance, comparisons between different types of qualifications, occupations, sectors and progression routes. It has the advantage that it is more independent than the customary comparators used in many comparability studies. As with all research approaches, returns to qualifications has strengths and weaknesses, but it provides some robust comparability evidence. The strongest comparability evidence arises when there is a clear pattern in the results of several studies using different established research methods and independent data sets. Therefore, results from returns to qualifications research, combined with results from the customary comparators, would provide a strong research evidence base. (An illustrative formalisation of the measure is sketched after this list.)
  • Item (Open Access, Published version, Peer-reviewed)
    A level pass rates and the enduring myth of norm-referencing
    (Research Division, Cambridge University Press & Assessment, 2011-10-01) Newton, Paul
    This article defines norm-referencing (judging the level of attainment of a particular student in relation to the level of attainment of all other students who sat the same examination); criterion-referencing (identifying exactly what students can and cannot do in each sub-domain of the subject being examined); and attainment-referencing (judging students on the basis of their overall level of attainment in the curriculum area being examined). It argues that A levels have never been norm-referenced or criterion-referenced but have always been attainment-referenced. This runs counter to the mythology of A level examining, according to which standards were norm-referenced from the 1960s to the mid-1980s, after which they became criterion-referenced.
  • Item (Open Access, Published version, Peer-reviewed)
    A guide to comparability terminology and methods
    (Research Division, Cambridge University Press & Assessment, 2011-10-01) Elliott, Gill
    Comparability is a complex and challenging area for educational researchers, particularly those who have little experience of it. This article seeks to provide a short and accessible introduction to the area. As such, it includes discussion of the holism of the topic, guidance on distinguishing between definitions and methods, and a glossary of key terms. Core to the article is a list of different methods which have been used to investigate comparability issues in the educational assessment literature. Each method is briefly described, with examples of the contexts and definitions which have been applied. The article also includes a short summary of some of the key themes in the literature and discussion of how these themes relate to one another. The key aim of this paper is to help researchers come to a better shared understanding of the interwoven web of concepts and issues which characterises comparability.
  • Item (Open Access, Published version, Peer-reviewed)
    100 years of controversy over standards: an enduring problem
    (Research Division, Cambridge University Press & Assessment, 2011-10-01) Elliott, Gill
    This article looks back at the history of comparability in the English assessment system by examining, in detail, the findings of some of the key reports held in Cambridge Assessment's Group Archive. Of especial interest were the 1911 Consultative Committee report upon Examinations in Secondary Schools and the 1943 Norwood report Curriculum and Examinations in Secondary Schools. When considered alongside other, more recent literature, the insights from these reports provided a window through which to explore the ways in which theories of comparability have developed and different viewpoints have emerged. Key themes explored within the article include the changing, and confusing, use of terminology; the role that the purpose of the qualifications plays in determining comparability issues; and the issue of qualifications evolving and subsequently producing new comparability challenges. Some brief, but fascinating, facts and figures about very early comparability studies are also included.
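
A note on the 'returns to qualifications' measure described in "Comparing different types of qualifications: an alternative comparator" above: the following is a minimal illustrative formalisation under assumed notation of our own (Y for earnings, q = 1 for holding the qualification of interest, X for the demographic characteristics used to match people), not notation taken from the article itself. The return to qualification q can be read as the average earnings gap between demographically similar holders and non-holders:

\[
  R_q \;=\; \mathbb{E}\left[\, Y \mid q = 1,\, X \,\right] \;-\; \mathbb{E}\left[\, Y \mid q = 0,\, X \,\right].
\]

In applied work a quantity of this kind is commonly estimated as the coefficient \( \beta \) on the qualification indicator in an earnings regression with demographic controls, for example \( \ln Y_i = \alpha + \beta\, q_i + \gamma^{\top} X_i + \varepsilon_i \); whether the studies the article draws on use this particular specification is not described in the abstract above.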