Evaluating an automated number series item generator using linear logistic test models

Published version
Peer-reviewed

Type

Article

Change log

Authors

Loe, BS 
Simonfy, F 
Doebler, P 

Abstract

This study investigates the item properties of a newly developed Automatic Number Series Item Generator (ANSIG). The ANSIG is founded on five hypothesised cognitive operators. Thirteen item models were developed using the numGen R package, and eleven were evaluated in this study. The 16-item ICAR (International Cognitive Ability Resource) short-form ability test was used to evaluate construct validity. The Rasch Model and two Linear Logistic Test Models (LLTM) were employed to estimate and predict the item parameters. Results indicate that a single factor determines performance on tests composed of items generated by the ANSIG. Under the LLTM approach, all the cognitive operators were significant predictors of item difficulty. Moderate to high correlations were evident between the number series items and the ICAR test scores, with a high correlation found for the ICAR Letter-Numeric-Series type items, suggesting adequate nomothetic span. Extended cognitive research is, nevertheless, essential for the automatic generation of an item pool with predictable psychometric properties.

Description

Keywords

Linear Logistic Test Models, Rasch model, automatic item generation, cognitive models, number series

Journal Title

Journal of Intelligence

Conference Name

Journal ISSN

2079-3200

Volume Title

6

Publisher

MDPI

Sponsorship

Economic and Social Research Council (ES/L016591/1)