Show simple item record

dc.contributor.author: Loe, Bao Sheng
dc.date.accessioned: 2019-02-26T14:35:07Z
dc.date.available: 2019-02-26T14:35:07Z
dc.date.issued: 2019-05-18
dc.date.submitted: 2018-07-31
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/289990
dc.description.abstract: Research has shown that the increased use of computer-based testing has brought about new challenges. With the ease of online test administration, a large number of items is necessary to maintain the item bank and minimise the exposure rate. However, the traditional item development process is time-consuming and costly, so alternative ways of creating items are needed. Automatic Item Generation (AIG) is an effective method for generating items rapidly and efficiently. AIG uses algorithms to create questions for testing purposes. However, many of these generators are closed-source, available only to a select few. There is a lack of open-source, publicly available generators that researchers can use to study AIG in greater depth and to generate items for their research. Furthermore, research has indicated that AIG is far from fully understood, and more research into its methodology and the psychometric properties of the items created by the generators is needed for it to be used effectively. The studies conducted in this thesis achieved the following: 1) Five open-source item generators were created, and the generated items were evaluated and validated. 2) Empirical evidence showed that using a weak theory approach to develop item generators was just as credible as using a strong theory approach, even though the two are theoretically distinct. 3) The psychometric properties of the generated items were estimated using various IRT models to assess the impact of the template features used to create the items. 4) Joint modelling of responses and response times was employed to provide insights into cognitive processes beyond those obtained from typical IRT models.
This thesis suggests that AIG provides a tangible solution for improving the item development process for content generation and for reducing the procedural cost of generating a large number of items, with the possibility of a unified approach to test administration (i.e. adaptive item generation). Nonetheless, this thesis focused on rule-based algorithms. The application of other forms of item generation and the potential for measuring the intelligence of artificial general intelligence (AGI) are discussed in the final chapter, which proposes that AIG techniques create new opportunities as well as challenges for researchers, and will redefine the assessment of intelligence.
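The template-feature analysis mentioned in the abstract is typically carried out with the Linear Logistic Test Model (LLTM), which decomposes an item's Rasch difficulty into a weighted sum of the design features used to generate it. A minimal sketch of that decomposition, using hypothetical feature loadings and weights (none of these values come from the thesis):

```python
import math

def lltm_difficulty(q_features, eta_weights):
    """LLTM: item difficulty b_i = sum_k q_ik * eta_k, where q_ik is the
    loading of item i on template feature k and eta_k is that feature's
    estimated difficulty contribution."""
    return sum(q * eta for q, eta in zip(q_features, eta_weights))

def rasch_probability(theta, b):
    """Rasch model: probability of a correct response for ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical item built from two template features
# (e.g. number of rules applied, distractor similarity).
b = lltm_difficulty([1, 2], [0.5, 0.25])  # b = 0.5 + 0.5 = 1.0
p = rasch_probability(theta=1.0, b=b)     # ability == difficulty -> p = 0.5
```

In this framing, estimating the eta weights from response data shows how much each generation-template feature drives item difficulty, which is how the impact of template features can be assessed without calibrating every generated item individually.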
dc.language.iso: en
dc.rights: All rights reserved
dc.subject: Intelligence
dc.subject: Item Response Theory
dc.subject: Automatic Item Generation
dc.subject: Executive Functioning
dc.subject: Factor Analysis
dc.subject: Linear Logistic Test Models
dc.title: The effectiveness of automatic item generation for the development of cognitive ability tests
dc.type: Thesis
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: Doctor of Philosophy (PhD)
dc.publisher.institution: University of Cambridge
dc.publisher.department: Psychology
dc.date.updated: 2019-02-26T11:47:30Z
dc.identifier.doi: 10.17863/CAM.37218
dc.contributor.orcid: Loe, Bao Sheng [0000-0001-6310-1608]
dc.publisher.college: Selwyn College
dc.type.qualificationtitle: PhD in Psychology
cam.supervisor: Rust, John
cam.supervisor.orcid: Rust, John [0000-0003-2598-3253]
cam.thesis.funding: false
rioxxterms.freetoread.startdate: 2400-01-01

