Learning Socially Appropriate Robo-waiter Behaviours through Real-time User Feedback

Conference Object
McQuillin, Emily 

Current Humanoid Service Robot (HSR) behaviours mainly rely on static models that cannot adapt dynamically to individual customer attitudes and preferences. In this work, we focus on empowering HSRs with adaptive feedback mechanisms driven either by implicit reward, estimated from facial affect, or by explicit reward, derived from the verbal responses of the human ‘customer’. To achieve this, we first create a custom dataset, annotated with crowd-sourced labels, for learning appropriate approach (positioning and movement) behaviours for a robo-waiter. This dataset is used to pre-train a Reinforcement Learning (RL) agent to produce behaviours deemed socially appropriate. The model is later extended with separate implicit and explicit reward mechanisms that allow interactive learning and adaptation from user social feedback. We present a within-subjects Human-Robot Interaction (HRI) study with 21 participants, in which the robo-waiter interacts with human customers under each of the above model variations. Our results show that both the explicit and implicit adaptation mechanisms enabled the adaptive robo-waiter to be rated as more enjoyable and sociable, and its positioning relative to participants as more appropriate, compared with the pre-trained model or a randomised control implementation.
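To make the adaptation idea in the abstract concrete, the following is a minimal sketch of a tabular Q-learning agent whose reward at interaction time comes either from an implicit signal (estimated facial valence) or an explicit signal (verbal feedback). The state and action sets, the preference for explicit over implicit feedback, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: Q-learning with implicit/explicit social reward.
# All names, state/action sets, and hyperparameters are assumptions.

STATES = ["far", "near", "at_table"]       # coarse robot positions
ACTIONS = ["approach", "retreat", "hold"]  # approach behaviours

ALPHA, GAMMA = 0.1, 0.9                    # learning rate, discount


def make_q_table():
    return {(s, a): 0.0 for s in STATES for a in ACTIONS}


def reward(implicit_valence=None, explicit_feedback=None):
    """Prefer explicit verbal feedback when given; fall back to affect."""
    if explicit_feedback is not None:
        return float(explicit_feedback)    # e.g. -1 ("too close"), +1 ("good")
    return float(implicit_valence or 0.0)  # estimated facial valence in [-1, 1]


def update(q, s, a, r, s_next):
    """Standard one-step Q-learning update."""
    best_next = max(q[(s_next, b)] for b in ACTIONS)
    q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])


q = make_q_table()
# One simulated step: the customer verbally approves the approach.
update(q, "far", "approach", reward(explicit_feedback=+1), "near")
print(round(q[("far", "approach")], 3))  # 0.1
```

In this sketch the two feedback channels simply share one reward function; the paper instead evaluates separate implicit and explicit reward mechanisms as distinct model variations.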

Reinforcement Learning, Humanoid Robo-waiter, Explicit Feedback, Implicit Feedback, Facial Affect
Conference Name
ACM/IEEE International Conference on Human-Robot Interaction
EPSRC (2107412)
Engineering and Physical Sciences Research Council (EP/R030782/1)
E. McQuillin was supported by the 2020/21 DeepMind Cambridge Scholarship. N. Churamani is funded by the EPSRC grant EP/R513180/1 (ref. 2107412). H. Gunes is supported by the EPSRC project ARoEQ under grant ref. EP/R030782/1.