Automatic analysis of facilitated taste-liking

Accepted version
Peer-reviewed

Type

Conference Object

Authors

Chen, Y 
Jie, Z 

Abstract

This paper focuses on: (i) automatic recognition of taste-liking from facial videos, by comparatively training and evaluating models based on engineered features and state-of-the-art deep learning architectures; and (ii) analysis of the classification results with respect to facilitator type and the gender, ethnicity, and personality of the participants. To this end, a new beverage tasting dataset acquired under different conditions (human vs. robot facilitator and priming vs. non-priming facilitation) is utilised. The experimental results show that: (i) the deep spatiotemporal architectures provide better classification results than the engineered feature models; (ii) the classification results for all three classes of liking, neutral, and disliking reach F1 scores in the range of 71%-91%; (iii) the personality-aware network that fuses participants' personality information with facial reaction features provides improved classification performance; and (iv) classification results vary across participant gender, but not across facilitator type or participant ethnicity.

Keywords

Taste-liking, facial reactions, affective computing, engineered features, deep spatiotemporal networks, personality

Journal Title

ICMI 2020 Companion - Companion Publication of the 2020 International Conference on Multimodal Interaction

Conference Name

ICMI '20: International Conference on Multimodal Interaction

Publisher

ACM

Rights

All rights reserved

Sponsorship

Engineering and Physical Sciences Research Council (EP/R030782/1)
EPSRC