
Optimising virtual reality training in industry using crowdsourcing

Published version
Peer-reviewed

Change log

Abstract

The ability of Immersive Virtual Reality (IVR) to simulate any training scenario in a safe and scalable manner makes it a particularly interesting technology for virtual learning factories. However, empirically testing and optimising virtual environments presents both an opportunity and a challenge. Conducting scientifically robust in-person experiments is often not feasible using traditional approaches, given the limited resources of training providers and the near-limitless possibilities for designing virtual training environments. Distributed crowdsourcing techniques using Desktop Virtual Reality (DVR) on a PC may offer an alternative and more scalable approach to experimentally testing and optimising virtual environments. An interesting question is therefore whether such DVR-based approaches are a suitable alternative to current experimental designs for enabling large-scale optimisation in contexts such as virtual learning factories. While crowdsourcing has been validated for its suitability in several research applications and domains, there is limited research available on training and, to the best of our knowledge, no previous research has evaluated the suitability of crowdsourcing for optimising immersive training in industrial or learning factory contexts. With our paper we contribute the first experiment to address this research gap. Our hypothesis is that crowdsourcing is a suitable technique for IVR training optimisation if it yields results equivalent to traditional experimentation at every training optimisation level. To test this hypothesis, we designed an industrial learning experiment to evaluate key performance and affective indicators of IVR training at three levels of optimisation. The experiment was conducted using both traditional and crowdsourcing techniques.
The results show that crowdsourcing can be a suitable alternative to traditional optimisation techniques depending on: (1) the desired operative mental state of the participants, (2) the investigated key performance indicators, and (3) the kind of optimisation performed. We contribute new data yielding important insights, as well as an integrated training evaluation concept that can be applied when conducting crowdsourcing studies.

Description

Keywords

Journal Title

Proceedings of the 12th Conference on Learning Factories (CLF 2022)

Conference Name

Conference on Learning Factories

Journal ISSN

Volume Title

Publisher

Publisher DOI

Publisher URL

Rights and licensing

Except where otherwise noted, this item's license is described as Attribution 4.0 International
Sponsorship
EPSRC (via University Of Lincoln) (EP/S023917/1)