Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability
IEEE Transactions on Affective Computing
Celiktutan, O., & Gunes, H. (2017). Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability. IEEE Transactions on Affective Computing, 8 (1), 29-42. https://doi.org/10.1109/TAFFC.2015.2513401
In this paper, we propose a novel multimodal framework for automatically predicting the impressions of extroversion, agreeableness, conscientiousness, neuroticism, openness, attractiveness and likeability continuously in time and across varying situational contexts. Unlike existing works, we obtain visual-only and audio-only annotations continuously in time for the same set of subjects, for the first time in the literature, and compare them to their audio-visual annotations. We propose a time-continuous prediction approach that learns the temporal relationships rather than treating each time instant separately. Our experiments show that the best prediction results are obtained when regression models are learned from audio-visual annotations and visual cues, and from audio-visual annotations and visual cues combined with audio cues at the decision level. Continuously generated annotations have the potential to provide insight into which impressions can be formed and predicted more dynamically, varying with situational context, and which ones appear to be more static and stable over time.
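The decision-level fusion the abstract refers to can be illustrated with a minimal sketch: train one regressor per modality and combine their per-frame trait predictions with a weighted average. This is an assumed toy setup (synthetic features, least-squares regressors, made-up fusion weights), not the authors' actual pipeline.

```python
import numpy as np

# Synthetic stand-ins for per-frame visual and audio descriptors (assumption:
# the paper's real features are extracted from video and speech).
rng = np.random.default_rng(0)
n_frames = 200
visual = rng.normal(size=(n_frames, 10))
audio = rng.normal(size=(n_frames, 6))
# Hypothetical continuous trait annotation, mostly driven by the visual cue.
trait = 0.8 * visual[:, 0] + 0.2 * audio[:, 0] + rng.normal(scale=0.1, size=n_frames)

# One linear regressor per modality (least squares as a simple stand-in).
vis_w, *_ = np.linalg.lstsq(visual, trait, rcond=None)
aud_w, *_ = np.linalg.lstsq(audio, trait, rcond=None)

# Decision-level fusion: weighted average of the unimodal predictions.
w_vis, w_aud = 0.6, 0.4  # assumed fusion weights
fused = w_vis * (visual @ vis_w) + w_aud * (audio @ aud_w)
print(fused.shape)
```

In this toy example the fused prediction tracks the annotation more closely than the audio-only regressor, mirroring the paper's finding that visual cues carry most of the predictive power, with audio contributing at the decision level.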
This research work was supported by the EPSRC MAPTRAITS Project (Grant Ref: EP/K017500/1) and the EPSRC HARPS Project under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1).
EPSRC (via University of Exeter) (EP/L00416X/1)
External DOI: https://doi.org/10.1109/TAFFC.2015.2513401
This record's URL: https://www.repository.cam.ac.uk/handle/1810/253735