Learning Socially Appropriate Robo-waiter Behaviours through Real-time User Feedback
Accepted version
Peer-reviewed
Abstract
Current Humanoid Service Robot (HSR) behaviours rely mainly on static models that cannot adapt dynamically to individual customer attitudes and preferences. In this work, we focus on empowering HSRs with adaptive feedback mechanisms driven either by implicit reward, estimated from facial affect, or by explicit reward, derived from the verbal responses of the human ‘customer’. To achieve this, we first create a custom dataset, annotated with crowd-sourced labels, for learning appropriate approach (positioning and movement) behaviours for a robo-waiter. This dataset is used to pre-train a Reinforcement Learning (RL) agent on behaviours deemed socially appropriate for the robo-waiter. The model is then extended with separate implicit and explicit reward mechanisms to allow interactive learning and adaptation from the user's social feedback. We present a within-subjects Human-Robot Interaction (HRI) study with 21 participants, in which the robo-waiter interacts with human customers under each of the above-mentioned model variations. Our results show that with both the explicit and implicit adaptation mechanisms, the robo-waiter was rated as more enjoyable and sociable, and its positioning relative to participants as more appropriate, than with the pre-trained model or a randomised control implementation.
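As a rough illustration of the adaptation mechanism described in the abstract, the minimal Python sketch below shows a tabular Q-learning agent whose pre-trained values are updated online from either an explicit (verbal) or implicit (facial-affect) reward signal. All names, state/action sizes, hyperparameters, and the keyword-based verbal mapping are illustrative assumptions and are not taken from the authors' implementation.

import numpy as np

# Hypothetical discrete state/action spaces for approach behaviour (assumed sizes)
N_STATES, N_ACTIONS = 12, 4

rng = np.random.default_rng(0)
# Stand-in for the Q-table pre-trained on the crowd-sourced dataset
Q = rng.normal(scale=0.1, size=(N_STATES, N_ACTIONS))

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning hyperparameters

def implicit_reward(valence: float) -> float:
    """Map an estimated facial-affect valence in [-1, 1] to a scalar reward."""
    return float(valence)

def explicit_reward(verbal_response: str) -> float:
    """Map a verbal customer response to a scalar reward (toy keyword rule)."""
    positive = {"yes", "closer", "good", "fine"}
    negative = {"no", "too close", "back off", "bad"}
    text = verbal_response.lower()
    if any(p in text for p in positive):
        return 1.0
    if any(n in text for n in negative):
        return -1.0
    return 0.0

def select_action(state: int) -> int:
    """Epsilon-greedy action selection over the (pre-trained) Q-table."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def online_update(state: int, action: int, reward: float, next_state: int) -> None:
    """One tabular Q-learning step driven by user feedback."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

# Example interaction step: use explicit feedback when present, otherwise fall back
# to the implicit affect estimate.
state = 3
action = select_action(state)
verbal = "that is a bit too close"
valence = -0.4  # e.g. output of a facial-affect estimator
r = explicit_reward(verbal)
if r == 0.0:
    r = implicit_reward(valence)
online_update(state, action, r, next_state=5)

In this sketch, the explicit (verbal) signal takes precedence over the implicit (affect-based) one whenever it is non-zero; this prioritisation is a design choice made for illustration only.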
Sponsorship
Engineering and Physical Sciences Research Council (EP/R030782/1)