Passive mobile sensing and psychological traits for large scale mood prediction
ACM International Conference Proceeding Series
Spathis, D., Servia-Rodriguez, S., Farrahi, K., Mascolo, C., & Rentfrow, J. (2019). Passive mobile sensing and psychological traits for large scale mood prediction. ACM International Conference Proceeding Series, 272-281. https://doi.org/10.1145/3329189.3329213
Experience sampling has long been the established method for sampling people’s mood in order to assess their mental state. Smartphones have begun to be used as experience sampling tools for mental health because they accompany individuals throughout the day and can therefore gather in-the-moment data. However, the granularity of the data must be traded off against the level of interruption these tools impose on users’ activities; as a consequence, the data collected with this technique is often sparse. Passive sensing has been used alongside mood reports to mitigate this sparsity, but it introduces additional noise. In this paper we show that psychological traits collected through one-off questionnaires, combined with passively collected sensing data (movement from the accelerometer and noise levels from the microphone), can be used to detect individuals whose general mood deviates from the relaxed state characteristic of the general population. Using the reported mood as a classification target, we show how to design models that depend only on passive sensors and one-off questionnaires, without burdening users with tedious experience sampling. We validate our approach on a large dataset of mood reports and passive sensing data collected in the wild from tens of thousands of participants, finding that the combination of these modalities yields the best classification performance, and that passive sensing contributes a +5% boost in accuracy. We also show that sensor data collected over the course of a week performs better for this task than data collected on single days. We discuss feature extraction techniques and appropriate classifiers for this kind of multimodal data, as well as the overfitting shortcomings of using deep learning to handle static and dynamic features together. We believe these findings have significant implications for mobile health applications that can benefit from correct modeling of passive sensing alongside additional user metadata.
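The modeling setup summarized above (static one-off questionnaire scores concatenated with aggregates of week-long passive sensor streams, fed to a classifier) can be sketched as follows. This is a minimal illustration only: the trait scores, the choice of mean/standard-deviation aggregates, and all variable names are assumptions for the example, not the paper's actual feature set.

```python
from statistics import mean, stdev

def build_feature_vector(trait_scores, accel_week, noise_week):
    """Combine static and dynamic modalities into one feature vector.

    trait_scores: static psychological-trait scores from a one-off
                  questionnaire (collected once per user).
    accel_week:   a week of accelerometer movement samples.
    noise_week:   a week of microphone noise-level samples.

    The dynamic part summarizes each week-long stream with simple
    aggregates (mean and standard deviation) -- an illustrative choice,
    not necessarily the features used in the paper.
    """
    dynamic = [
        mean(accel_week), stdev(accel_week),
        mean(noise_week), stdev(noise_week),
    ]
    return list(trait_scores) + dynamic

# Hypothetical example: five trait scores plus a week of hourly samples.
traits = [3.2, 4.1, 2.8, 3.9, 3.0]
accel = [0.1 * (i % 24) for i in range(7 * 24)]   # 7 days x 24 hours
noise = [40 + (i % 12) for i in range(7 * 24)]
vec = build_feature_vector(traits, accel, noise)
print(len(vec))  # 5 static + 4 dynamic features = 9
```

A vector built this way can be passed to any off-the-shelf classifier with the reported mood label as the target; the paper's finding that week-long windows beat single days corresponds here to aggregating over all 168 samples rather than a 24-sample slice.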
This work was partially funded by the Embiricos Trust Scholarship of Jesus College, Cambridge and the EPSRC Doctoral Training Partnership (grant reference EP/N509620/1).
University of Southampton (FB EPSRC) (EP/I032673/1)
External DOI: https://doi.org/10.1145/3329189.3329213
This record's URL: https://www.repository.cam.ac.uk/handle/1810/291528
All rights reserved