Show simple item record

dc.contributor.author	Churamani, Nikhil
dc.contributor.author	Kalkan, Sinan
dc.contributor.author	Gunes, Hatice
dc.date.accessioned	2021-11-16T00:31:02Z
dc.date.available	2021-11-16T00:31:02Z
dc.date.issued	2021
dc.identifier.issn	2326-5396
dc.identifier.uri	https://www.repository.cam.ac.uk/handle/1810/330668
dc.description.abstract	Most state-of-the-art approaches for Facial Action Unit (AU) detection rely on evaluating static frames, encoding a snapshot of heightened facial activity. In real-world interactions, however, facial expressions are more subtle and evolve over time, requiring AU detection models to learn spatial as well as temporal information. In this work, we focus on both spatial and spatio-temporal features encoding the temporal evolution of facial AU activation. We propose the Action Unit Lifecycle-Aware Capsule Network (AULA-Caps) for AU detection using both frame-level and sequence-level features. At the frame level, the capsule layers of AULA-Caps learn spatial feature primitives to determine AU activations; at the sequence level, they learn temporal dependencies between contiguous frames by focusing on relevant spatio-temporal segments in the sequence. The learnt feature capsules are routed together such that the model learns to selectively focus on spatial or spatio-temporal information depending upon the AU lifecycle. The proposed model is evaluated on the popular BP4D and GFT benchmark datasets, obtaining state-of-the-art results on both.
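
The abstract describes an architecture that combines frame-level spatial capsules with sequence-level spatio-temporal capsules and routes them so the model can emphasise one or the other depending on the AU lifecycle. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea, not the authors' AULA-Caps implementation: every module, parameter, and hyper-parameter name here (SpatialCapsuleBranch, TemporalCapsuleBranch, CapsuleAUDetectorSketch, num_aus, caps_dim, num_caps, the gating head) is an assumption made for illustration, and the paper's lifecycle-aware routing is approximated by a simple learned gate between the two branches.

```python
# Minimal conceptual sketch of a capsule-based AU detector that fuses
# frame-level (spatial) and sequence-level (spatio-temporal) capsules.
# This is NOT the authors' AULA-Caps code; all names and hyper-parameters
# are illustrative assumptions, and lifecycle-aware routing is approximated
# here by a simple learned gate between the two branches.
import torch
import torch.nn as nn


def squash(x, dim=-1, eps=1e-8):
    """Capsule squashing non-linearity: preserves direction, bounds length in [0, 1)."""
    sq_norm = (x ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * x / torch.sqrt(sq_norm + eps)


class SpatialCapsuleBranch(nn.Module):
    """Frame-level branch: 2D convolutions over a single frame, reshaped into capsules."""
    def __init__(self, caps_dim=8, num_caps=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, caps_dim * num_caps, 5, stride=2, padding=2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.caps_dim, self.num_caps = caps_dim, num_caps

    def forward(self, frame):                        # frame: (B, 3, H, W)
        feats = self.conv(frame).flatten(1)          # (B, caps_dim * num_caps)
        return squash(feats.view(-1, self.num_caps, self.caps_dim))


class TemporalCapsuleBranch(nn.Module):
    """Sequence-level branch: 3D convolutions over a short clip, reshaped into capsules."""
    def __init__(self, caps_dim=8, num_caps=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(3, 32, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)), nn.ReLU(),
            nn.Conv3d(32, caps_dim * num_caps, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.AdaptiveAvgPool3d(1),
        )
        self.caps_dim, self.num_caps = caps_dim, num_caps

    def forward(self, clip):                         # clip: (B, 3, T, H, W)
        feats = self.conv(clip).flatten(1)
        return squash(feats.view(-1, self.num_caps, self.caps_dim))


class CapsuleAUDetectorSketch(nn.Module):
    """Gates between spatial and spatio-temporal capsules per AU before multi-label prediction."""
    def __init__(self, num_aus=12, caps_dim=8, num_caps=16):
        super().__init__()
        self.spatial = SpatialCapsuleBranch(caps_dim, num_caps)
        self.temporal = TemporalCapsuleBranch(caps_dim, num_caps)
        feat_dim = caps_dim * num_caps
        self.gate = nn.Linear(2 * feat_dim, num_aus)   # per-AU emphasis on temporal vs. spatial cues
        self.head_s = nn.Linear(feat_dim, num_aus)     # AU logits from spatial capsules
        self.head_t = nn.Linear(feat_dim, num_aus)     # AU logits from spatio-temporal capsules

    def forward(self, clip):
        mid_frame = clip[:, :, clip.shape[2] // 2]     # centre frame stands in for the "peak" frame
        s = self.spatial(mid_frame).flatten(1)
        t = self.temporal(clip).flatten(1)
        alpha = torch.sigmoid(self.gate(torch.cat([s, t], dim=1)))
        logits = alpha * self.head_t(t) + (1.0 - alpha) * self.head_s(s)
        return torch.sigmoid(logits)                   # per-AU activation probabilities


if __name__ == "__main__":
    model = CapsuleAUDetectorSketch()
    clip = torch.randn(2, 3, 8, 64, 64)                # 2 clips of 8 RGB frames at 64x64
    print(model(clip).shape)                           # torch.Size([2, 12])
```

Under these assumptions, the gate plays the role the abstract assigns to routing: for AUs whose evidence is mostly visible in a single frame the gate can lean on the spatial head, while AUs whose onset and offset matter can lean on the spatio-temporal head.
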
dc.description.sponsorship	EPSRC grant EP/R513180/1 (ref. 2107412). EPSRC project ARoEQ under grant ref. EP/R030782/1. European Union’s Horizon 2020 research and innovation programme WorkingAge project under grant agreement No. 826232.
dc.publisher	IEEE
dc.rights	All rights reserved
dc.rights.uri	http://www.rioxx.net/licenses/all-rights-reserved
dc.subject	Affective Computing
dc.subject	Facial Action Units
dc.subject	Capsule Networks
dc.subject	Computer Vision
dc.subject	Machine Learning
dc.subject	Neural Networks
dc.title	AULA-Caps: Lifecycle-Aware Capsule Networks for Spatio-Temporal Analysis of Facial Actions
dc.type	Conference Object
prism.publicationName	2021 16TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2021)
dc.identifier.doi	10.17863/CAM.78113
dcterms.dateAccepted	2021-10-29
rioxxterms.versionofrecord	10.17863/CAM.78113
rioxxterms.version	AM
rioxxterms.licenseref.uri	http://www.rioxx.net/licenses/all-rights-reserved
rioxxterms.licenseref.startdate	2021-10-29
dc.contributor.orcid	Churamani, Nikhil [0000-0001-5926-0091]
dc.contributor.orcid	Gunes, Hatice [0000-0003-2407-3012]
rioxxterms.type	Conference Paper/Proceeding/Abstract
pubs.funder-project-id	EPSRC (2107412)
pubs.funder-project-id	Engineering and Physical Sciences Research Council (EP/R030782/1)
pubs.funder-project-id	European Commission Horizon 2020 (H2020) Societal Challenges (826232)
pubs.conference-name	IEEE International Conference on Automatic Face and Gesture Recognition (FG) 2021
pubs.conference-start-date	2021-12-15
cam.orpheus.success	Wed May 25 11:13:05 BST 2022 - Embargo updated*
cam.orpheus.counter	7
pubs.conference-finish-date	2021-12-18
rioxxterms.freetoread.startdate	2022-12-31

