
REACT2023: The First Multiple Appropriate Facial Reaction Generation Challenge

Accepted version
Peer-reviewed


Abstract

The Multiple Appropriate Facial Reaction Generation Challenge (REACT2023) is the first competition event focused on evaluating multimedia processing and machine learning techniques for generating human-appropriate facial reactions in various dyadic interaction scenarios, with all participants competing strictly under the same conditions. The goal of the challenge is to provide the first benchmark test set for multi-modal information processing and to foster collaboration among the audio, visual, and audio-visual behaviour analysis and behaviour generation (a.k.a. generative AI) communities, in order to compare the relative merits of approaches to automatic appropriate facial reaction generation under different spontaneous dyadic interaction conditions. This paper presents: (i) the novelties, contributions and guidelines of the REACT2023 challenge; (ii) the dataset utilized in the challenge; and (iii) the performance of the baseline systems on the two proposed sub-challenges: Offline Multiple Appropriate Facial Reaction Generation and Online Multiple Appropriate Facial Reaction Generation. The challenge baseline code is publicly available at https://github.com/reactmultimodalchallenge/baseline_react2023.
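The two sub-challenges differ mainly in what speaker behaviour a generation model may condition on: broadly, the offline setting can use the speaker's full clip, whereas the online setting must emit listener reaction frames incrementally from the behaviour observed so far. As a rough illustration only (the function names, feature dimensions, and placeholder outputs below are hypothetical and are not taken from the official baseline code), the distinction can be sketched as:

```python
# Hypothetical sketch of the offline vs. online generation settings.
# Not the official REACT2023 baseline interface.
import numpy as np

def generate_offline(speaker_clip: np.ndarray, n_frames: int) -> np.ndarray:
    """Offline setting: the full speaker behaviour clip is available
    before any listener reaction frames are produced."""
    # speaker_clip: (T, D) speaker audio-visual features for the whole clip.
    # Placeholder output: (n_frames, 25) facial-attribute channels per frame.
    return np.zeros((n_frames, 25))

def generate_online(speaker_frames):
    """Online setting: reaction frames are emitted step by step,
    using only the speaker behaviour observed so far."""
    history = []
    for frame in speaker_frames:   # frame: (D,) features at the current step
        history.append(frame)
        yield np.zeros(25)         # placeholder reaction for this time step

# Toy usage with random features (e.g. a 30 s clip at 25 fps, 128-D features).
speaker = np.random.rand(750, 128)
offline_reaction = generate_offline(speaker, n_frames=750)
online_reaction = np.stack(list(generate_online(speaker)))
```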

Journal Title

Proceedings of the 31st ACM International Conference on Multimedia

Conference Name

Proceedings of the 31st ACM International Conference on Multimedia

Publisher

Association for Computing Machinery (ACM)

Rights and licensing

Except where otherwise noted, this item's license is described as Attribution 4.0 International
Sponsorship
Engineering and Physical Sciences Research Council (EP/R030782/1)
Spanish project PID2019-105093GB-I00 and ICREA under the ICREA Academia programme