
dc.contributor.author: Mou, Wen
dc.contributor.author: Gunes, Hatice
dc.contributor.author: Patras, I
dc.date.accessioned: 2017-01-09T11:47:13Z
dc.date.available: 2017-01-09T11:47:13Z
dc.date.issued: 2016-10-19
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/261776
dc.description.abstract: Automatic affect analysis and understanding has become a well-established research area in the last two decades. Recent works have started moving from individual to group scenarios. However, little attention has been paid to comparing the affect expressed in individual and group settings. This paper presents a framework to investigate the differences in affect recognition models along the arousal and valence dimensions in individual and group settings. We analyse how a model trained on data collected in an individual setting performs on test data collected in a group setting, and vice versa. A third model combining data from both individual and group settings is also investigated. A set of experiments is conducted to predict the affective states along both arousal and valence dimensions on two newly collected databases that contain sixteen participants watching affective movie stimuli in individual and group settings, respectively. The experimental results show that (1) the affect model trained with group data performs better on individual test data than the model trained with individual data tested on group data, indicating that facial behaviours expressed in a group setting capture more variation than in an individual setting; and (2) the combined model does not perform better than the affect models trained with a specific type of data (i.e., individual or group), but offers a good compromise. These results indicate that in settings where multiple affect models trained with different types of data are not available, using the affect model trained with group data is a viable solution.
dc.language.iso: en
dc.publisher: Association for Computing Machinery
dc.title: Alone versus In-a-group: A Comparative Analysis of Facial Affect Recognition
dc.type: Conference Object
prism.endingPage: 525
prism.publicationDate: 2016
prism.publicationName: Proceedings of the 2016 ACM Multimedia Conference
prism.startingPage: 521
dc.identifier.doi: 10.17863/CAM.6991
dcterms.dateAccepted: 2016-06-30
rioxxterms.versionofrecord: 10.1145/2964284.2967276
rioxxterms.version: AM
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved
rioxxterms.licenseref.startdate: 2016-10-19
dc.contributor.orcid: Gunes, Hatice [0000-0003-2407-3012]
rioxxterms.type: Conference Paper/Proceeding/Abstract
pubs.funder-project-id: EPSRC (via University of Exeter) (EP/L00416X/1)
pubs.conference-name: ACM Multimedia Conference 2016
pubs.conference-start-date: 2016-10-15
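
The abstract above describes a cross-setting evaluation protocol: train an affect model on data from one setting (individual or group), test it on the other, and compare against a model trained on pooled data. The following is a minimal, illustrative sketch of that protocol, not the authors' code; the ridge regressor, feature dimensionality, and synthetic arrays are assumptions standing in for the real facial features and arousal/valence annotations.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two databases: rows are facial-feature
# vectors, targets are continuous arousal (or valence) annotations.
X_indiv, y_indiv = rng.normal(size=(400, 64)), rng.uniform(-1, 1, 400)
X_group, y_group = rng.normal(size=(400, 64)), rng.uniform(-1, 1, 400)

# Hold out a test partition within each setting.
Xi_tr, Xi_te, yi_tr, yi_te = train_test_split(X_indiv, y_indiv, test_size=0.3, random_state=0)
Xg_tr, Xg_te, yg_tr, yg_te = train_test_split(X_group, y_group, test_size=0.3, random_state=0)

def evaluate(X_tr, y_tr, X_te, y_te, label):
    """Fit an affect regressor on one setting and report error on another."""
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print(f"{label}: MSE = {mean_squared_error(y_te, model.predict(X_te)):.3f}")

# (1) Individual-trained model tested on group data, and vice versa.
evaluate(Xi_tr, yi_tr, Xg_te, yg_te, "individual -> group")
evaluate(Xg_tr, yg_tr, Xi_te, yi_te, "group -> individual")

# (2) Combined model: pool both training partitions, test on each setting.
X_comb, y_comb = np.vstack([Xi_tr, Xg_tr]), np.concatenate([yi_tr, yg_tr])
evaluate(X_comb, y_comb, Xi_te, yi_te, "combined -> individual")
evaluate(X_comb, y_comb, Xg_te, yg_te, "combined -> group")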

