
Automatic Prediction of Impressions in Time and across Varying Context: Personality, Attractiveness and Likeability

Published version

Celiktutan, O 


© 2010-2012 IEEE. In this paper, we propose a novel multimodal framework for automatically predicting the impressions of extroversion, agreeableness, conscientiousness, neuroticism, openness, attractiveness and likeability continuously in time and across varying situational contexts. Unlike existing work, we obtain visual-only and audio-only annotations continuously in time for the same set of subjects, for the first time in the literature, and compare them with the corresponding audio-visual annotations. We propose a time-continuous prediction approach that learns temporal relationships rather than treating each time instant separately. Our experiments show that the best prediction results are obtained when regression models are learned from audio-visual annotations and visual cues, and from audio-visual annotations and visual cues combined with audio cues at the decision level. Continuously generated annotations have the potential to provide insight into which impressions can be formed and predicted more dynamically, varying with situational context, and which appear to be more static and stable over time.
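The decision-level fusion mentioned above can be sketched as follows. This is a hypothetical illustration, not the paper's exact pipeline: two unimodal regressors are trained separately on visual and audio features, and their time-continuous predictions are combined by a weighted average (the fusion weight `w` is an assumption, normally tuned on held-out data).

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Placeholder data standing in for per-frame features and annotations.
T, d_vis, d_aud = 200, 8, 4              # frames, visual / audio feature dims
X_visual = rng.normal(size=(T, d_vis))
X_audio = rng.normal(size=(T, d_aud))
y = rng.normal(size=T)                   # continuous impression annotations

# Train one regressor per modality.
vis_model = Ridge().fit(X_visual, y)
aud_model = Ridge().fit(X_audio, y)

# Decision-level fusion: weighted average of the two prediction streams.
w = 0.7                                  # assumed fusion weight
y_fused = w * vis_model.predict(X_visual) + (1 - w) * aud_model.predict(X_audio)
print(y_fused.shape)                     # one fused prediction per time instant
```

A temporal model (e.g. a recurrent regressor) would additionally learn the relationships between adjacent time instants, which this per-frame sketch does not capture.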



Interpersonal perception, personality, attractiveness, likeability, time-continuous prediction

Journal Title

IEEE Transactions on Affective Computing


Publisher

Institute of Electrical and Electronics Engineers (IEEE)
Engineering and Physical Sciences Research Council (EP/L00416X/1)
Engineering and Physical Sciences Research Council (EP/K017500/1)
This research work was supported by the EPSRC MAPTRAITS Project (Grant Ref: EP/K017500/1) and the EPSRC HARPS Project under its IDEAS Factory Sandpits call on Digital Personhood (Grant Ref: EP/L00416X/1).