The hidden cost of using Amazon Mechanical Turk for research

Published version
Peer-reviewed

Type

Conference Object

Authors

Saravanos, A 
Zervoudakis, S 
Zheng, D 
Stott, N 
Hawryluk, B 

Abstract

In this study, we investigate the attentiveness exhibited by participants sourced through Amazon Mechanical Turk (MTurk), discovering a significant level of inattentiveness amongst the platform’s top crowd workers (those classified as ‘Master’, with an ‘Approval Rate’ of 98% or more and a ‘Number of HITs Approved’ value of 1,000 or more). A total of 564 individuals from the United States participated in our experiment. They were asked to read a vignette outlining one of four hypothetical technology products and then complete a related survey. Three forms of attention check (logic, honesty, and time) were used to assess attentiveness. Through this experiment we determined that 126 (22.3%) participants failed at least one of the three attention checks, with most (94) failing the honesty check, followed by the logic check (31) and the time check (27). Thus, we established that significant levels of inattentiveness exist even among the most elite MTurk workers. The study concludes by reaffirming the need for multiple forms of carefully crafted attention checks, irrespective of whether participant quality is presumed to be high according to MTurk criteria such as ‘Master’, ‘Approval Rate’, and ‘Number of HITs Approved’. Furthermore, we recommend that researchers adjust their proposals to account for the effort and costs required to address participant inattentiveness.

Description

Keywords

Journal Title

HCI International 2021 - Late Breaking Papers: Design and User Experience. HCII 2021. Lecture Notes in Computer Science

Conference Name

International Conference on Human-Computer Interaction

Journal ISSN

0302-9743 (print)
1611-3349 (electronic)

Volume Title

13094

Publisher

Springer International Publishing