Research Matters 01


Recent Submissions

  • Published version, Open Access
    Research Matters 1: September 2005
    (Research Division, Cambridge University Press & Assessment, 2005-09-01) Green, Sylvia
    Research Matters is a free biannual publication that allows Cambridge University Press & Assessment to share its assessment research, across a range of fields, with the wider assessment community.
  • Published version, Open Access
    Gold standards and silver bullets: assessing high attainment
    (Research Division, Cambridge University Press & Assessment, 2005-09-01) Bell, John
    One of the challenges facing those involved in the assessment and selection of high attainers is the fact that so many students get the same high grades (in measurement theory this is referred to as a lack of discrimination). This article discusses some of the issues and methods used in identifying (in order to select) high-attaining candidates.
  • Published version, Open Access
    Comparability of national tests over time: a project and its impact
    (Research Division, Cambridge University Press & Assessment, 2005-09-01) Massey, Alf
    This article summarises the findings and discusses the impact of The Comparability Over Time (CoT) Project, which was commissioned by the Qualifications and Curriculum Authority (QCA) in 1999 and published in 2003. The project investigated the stability of national test standards at all key stages and in all subjects. National test standards are of considerable public interest, not least because of the political prominence these tests have been accorded, including government claims that the huge improvements in results since tests were introduced in the mid-1990s stem from the plethora of recent educational policy initiatives.
  • Published version, Open Access
    Can a picture ruin a thousand words? The effects of visual resources and layout in examination questions
    (Research Division, Cambridge University Press & Assessment, 2005-09-01) Crisp, Vicki; Sweiry, Ezekiel
    Visual resources, such as pictures, diagrams and photographs, can sometimes influence students' understanding of an examination question and their responses (Fisher-Hoch, Hughes and Bramley, 1997). If visual resources have a disproportionately large influence on the development of mental models, this has implications for examinations, where students' ability to process material effectively is already compromised by test anxiety (Sarason, 1988). Students need to understand questions in the way intended in order to have a fair opportunity to display their knowledge and skills. This research explored the effects of visual resources in a number of exam questions. 525 students, aged 16 years, sat an experimental science test under examination conditions. The test included six questions involving graphical or layout elements. For most of the questions, two versions were constructed in order to investigate the effects of changes to visual resources on processing and responses. Some of the students were interviewed after they had taken the test. The analysis of the example questions in this study, along with others the authors have studied, suggests that two variables in particular play a decisive role in the effect of visual resources on the way examination questions are processed and answered. The first is the relative salience, or prominence, of the key elements; the second is whether the student believes the element is relevant to the answer. One factor in determining the latter is past test experience, which provides expectations about the circumstances in which visual resources are relevant.
  • Published version, Open Access
    Automatic marking of short, free text responses
    (Research Division, Cambridge University Press & Assessment, 2005-09-01) Sukkarieh, Jana; Pulman, Stephen; Raikes, Nick
    Many of UCLES' academic examinations make extensive use of questions that require candidates to write one or two sentences. With increasing penetration of computers into schools and homes, a system that could partially or wholly automate valid marking of short, free text answers typed into a computer would be valuable, but would seem to pre-suppose a currently unattainable level of performance in automated natural language understanding. However, recent developments in the use of so-called 'shallow processing' techniques in computational linguistics have opened up the possibility of being able to automate the marking of free text without having to create systems that fully understand the answers. With this in mind, UCLES funded a three-year study at Oxford University. Work began in summer 2002, and in this paper we introduce the project and the information extraction techniques used. A further paper in a forthcoming issue of Research Matters will contain the results of our evaluation of the automatic marks produced by the final system.
  • Published version, Open Access
    Accessibility, easiness and standards
    (Research Division, Cambridge University Press & Assessment, 2005-09-01) Bramley, Tom
    This article is a summary of an article published in Educational Research in 2005. Discussions about whether one year’s test is easier or more difficult than the previous year’s test can often get bogged down when the spectre of ‘accessibility’ raises its head. Is a ‘more accessible’ test the same as an ‘easier’ test? Are there any implications for where the cut-scores should be set if a test is deemed to be more accessible, as opposed to more easy? Is there any way to identify questions which are ‘inaccessible’? The main purpose of the article was to use a psychometric approach to attempt to answer these questions.
  • Published version, Open Access
    A review of research about writing and using grade descriptors in GCSEs and A levels
    (Research Division, Cambridge University Press & Assessment, 2005-09-01) Greatorex, Jackie
    This article describes current awarding practice and reviews literature about writing and using grade descriptors for GCSEs and A levels. Grade descriptors are descriptions of the qualities anticipated at various levels of a candidate's performance in an assessment. It is concluded that it is good practice to write grade descriptors based on empirical evidence. Grade descriptors for different domains and types of questions can be written by: 1. identifying questions where there is a statistically significant difference between the performance of students who achieve adjacent grades (e.g. A and B); 2. using Kelly's Repertory Grid to interview examiners about the qualities which distinguish performance at these grades; 3. including these distinguishing qualities in grade descriptors. Furthermore, there is little research about how grade descriptors are used, or could be used, in preparing pupils for assessments, and there is room for further research in this area.
  • Published version, Open Access
    A rank-ordering method for equating tests by expert judgement
    (Research Division, Cambridge University Press & Assessment, 2005-09-01) Bramley, Tom
    This article is a summary of an article published in the Journal of Applied Measurement in 2005. It builds on much research carried out at UCLES over the past ten years on the use of judgements in scale construction. It introduces an extension of Thurstone's paired comparison method to rankings of more than two objects, in the context of mapping a cut-score from one test to another.
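The 'shallow processing' idea described in "Automatic marking of short, free text responses" can be illustrated with a minimal pattern-matching sketch. Everything below (the mark scheme, the regular expressions and the sample answers) is invented for illustration; it is not the project's actual system or mark schemes.

```python
import re

# Illustrative mark scheme for a short free-text answer: each key point
# is a list of shallow lexical patterns, any one of which earns the mark.
# The patterns and answers are hypothetical.
MARK_SCHEME = [
    [r"\bevaporat\w*", r"\bturns? (in)?to (a )?gas\b"],  # key point: evaporation
    [r"\bheat(ed|ing)?\b", r"\btemperature\b"],          # key point: heating
]

def mark_answer(answer: str, scheme=MARK_SCHEME) -> int:
    """Award one mark per key point matched by any of its patterns."""
    text = answer.lower()
    return sum(
        1 for patterns in scheme
        if any(re.search(p, text) for p in patterns)
    )

print(mark_answer("The water is heated and turns into a gas"))  # prints 2
```

The appeal of this style of approach, as the abstract notes, is that no deep understanding of the answer is attempted: credit is awarded from surface evidence alone, so the marking rules can be written and audited directly.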
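Step 1 of the grade-descriptor procedure in "A review of research about writing and using grade descriptors in GCSEs and A levels" (identifying questions that separate candidates at adjacent grades) might be sketched as follows, using Welch's t statistic on hypothetical per-question score data. A real analysis would use full candidate data and a formal significance test.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical scores on one question for candidates awarded adjacent
# grades (A and B); these numbers are invented for illustration.
grade_a_scores = [3, 4, 4, 5, 3, 4, 5, 4]
grade_b_scores = [2, 3, 2, 3, 3, 2, 4, 2]

def welch_t(x, y):
    """Welch's t statistic for the difference in mean question score."""
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / sqrt(vx / len(x) + vy / len(y))

t = welch_t(grade_a_scores, grade_b_scores)
# A large |t| flags this question as one that discriminates between the
# adjacent grades, making it a candidate for the examiner interviews in
# step 2 of the procedure.
print(round(t, 2))  # prints 3.67
```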
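The core move in "A rank-ordering method for equating tests by expert judgement" (unpacking judges' rankings into paired comparisons and scaling them) can be sketched as follows. The rankings are invented, and the scaling here is a Bradley-Terry fit via Zermelo's iterative algorithm, a close relative of the Thurstone model rather than Bramley's actual procedure.

```python
from collections import defaultdict
from itertools import combinations

# Invented judge rankings of four scripts, best first. Each ranking is
# unpacked into pairwise "wins", extending the paired comparison method
# to rankings of more than two objects.
rankings = [
    ["s1", "s2", "s3", "s4"],
    ["s2", "s1", "s3", "s4"],
    ["s1", "s3", "s2", "s4"],
]

wins = defaultdict(int)  # wins[(a, b)]: times a was ranked above b
for ranking in rankings:
    for i, j in combinations(range(len(ranking)), 2):
        wins[(ranking[i], ranking[j])] += 1

scripts = sorted({s for r in rankings for s in r})
strength = {s: 1.0 for s in scripts}

# Bradley-Terry fit by iterative scaling (Zermelo's algorithm): each
# script's strength is its win total divided by a sum weighted by the
# strengths of the scripts it was compared against.
for _ in range(100):
    for s in scripts:
        total_wins = sum(wins[(s, t)] for t in scripts if t != s)
        denom = sum(
            (wins[(s, t)] + wins[(t, s)]) / (strength[s] + strength[t])
            for t in scripts if t != s
        )
        strength[s] = total_wins / denom
    norm = sum(strength.values())
    strength = {s: v / norm for s, v in strength.items()}

order = sorted(scripts, key=strength.get, reverse=True)
print(order)  # scripts from strongest to weakest on the fitted scale
```

The fitted strengths put the scripts on a common latent scale, which is what allows a cut-score located on one test's scripts to be mapped onto another's.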