Item response theory, computer adaptive testing and the risk of self-deception

Published version
Peer-reviewed

Authors

Benton, Tom 

Abstract

Computer adaptive testing is intended to make assessment more reliable by tailoring the difficulty of the questions each student answers to their level of ability. Most commonly, this benefit is used to justify shortening tests whilst retaining the reliability of a longer, non-adaptive test.
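
As a hypothetical illustration of the tailoring described above (this is not code from the article; the Rasch/1PL model choice, the item bank and all function names are assumptions of mine), a minimal adaptive test might repeatedly administer the unused item whose difficulty is closest to the current ability estimate, re-estimating ability after each response:

```python
import numpy as np

def rasch_prob(theta, b):
    """P(correct) under the Rasch (1PL) model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def select_item(theta_hat, difficulties, administered):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate: under 1PL this is the most informative item."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta_hat))

def update_theta(theta_hat, responses, items, difficulties, steps=10):
    """Newton-Raphson maximum-likelihood update of the ability estimate,
    clamped to [-4, 4] so all-correct/all-wrong runs do not diverge."""
    for _ in range(steps):
        p = np.array([rasch_prob(theta_hat, difficulties[i]) for i in items])
        grad = np.sum(np.array(responses) - p)   # d log-likelihood / d theta
        info = np.sum(p * (1.0 - p))             # Fisher information
        theta_hat = float(np.clip(theta_hat + grad / info, -4.0, 4.0))
    return theta_hat

def run_cat(true_theta, difficulties, test_length, rng):
    """Administer a short adaptive test to one simulated student."""
    theta_hat, items, responses = 0.0, [], []
    for _ in range(test_length):
        i = select_item(theta_hat, difficulties, items)
        items.append(i)
        responses.append(int(rng.random() < rasch_prob(true_theta, difficulties[i])))
        theta_hat = update_theta(theta_hat, responses, items, difficulties)
    return theta_hat

rng = np.random.default_rng(1)
bank = rng.uniform(-3, 3, size=200)  # hypothetical item bank difficulties
print(run_cat(true_theta=1.2, difficulties=bank, test_length=15, rng=rng))
```

Matching item difficulty to estimated ability maximises the information each response carries under 1PL, which is the usual argument for why a shorter adaptive test can match the reliability of a longer fixed one.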

Improvements due to adaptive testing are often estimated using reliability coefficients based on item response theory (IRT). However, these coefficients assume that the underlying IRT model completely fits the data. This article takes a different approach, comparing the predictive value of shortened versions of real assessments constructed using adaptive and non-adaptive approaches. The results show that, when explored in this way, the benefits from adaptive testing may not always be quite as large as hoped.
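
To make the caveat concrete: a model-based (marginal) reliability coefficient can be computed entirely from model-implied standard errors, so it is only as trustworthy as the model's fit. A minimal sketch, assuming a Rasch model, ability distributed N(0, 1), and hypothetical function names of my own (not the article's method):

```python
import numpy as np

def test_information(thetas, difficulties):
    """Fisher information of a fixed Rasch test at each ability in `thetas`."""
    b = np.asarray(difficulties)
    p = 1.0 / (1.0 + np.exp(-(np.asarray(thetas)[:, None] - b[None, :])))
    return (p * (1.0 - p)).sum(axis=1)

def marginal_reliability(difficulties, n=100_000, seed=0):
    """Model-based marginal reliability: true-score variance over total
    variance, with ability ~ N(0, 1) and the 1PL model taken as exactly true."""
    rng = np.random.default_rng(seed)
    thetas = rng.standard_normal(n)
    se2 = 1.0 / test_information(thetas, difficulties)  # squared standard errors
    return 1.0 / (1.0 + se2.mean())                     # var(theta) = 1

print(marginal_reliability(np.linspace(-2, 2, 40)))  # hypothetical 40-item test
```

Nothing in this calculation can detect misfit: if the model is wrong, the coefficient is simply wrong with it, which is why comparing the predictive value of real shortened assessments offers a useful external check.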

Keywords

Psychology of assessment, Computer-based assessment

Journal Title

Research Matters

Publisher

Research Division, Cambridge University Press & Assessment
