IAN: Interpretable attention network for churn prediction in LBSNs
Abstract
With the rise of Location-Based Social Networks (LBSNs) and their heavy reliance on User-Generated Content, attracting and retaining users has become essential, which makes churn prediction an important problem. Recent research tackles the task with complex neural networks. However, due to the black-box nature of these deep learning models, it remains challenging for LBSN managers to interpret the prediction results and design strategies to prevent churning behavior. In this paper, we therefore perform the first investigation into the interpretability of churn prediction in LBSNs. We propose a novel attention-based deep learning network, the Interpretable Attention Network (IAN), which achieves high performance while ensuring interpretability. The network can process the complex temporal, multivariate, multidimensional user data found in LBSN datasets (i.e., Yelp and Foursquare) and provides meaningful explanations of its predictions. We also apply several visualization techniques to interpret the prediction results: by analyzing the attention output, researchers can intuitively gain insights into which features dominate the model's prediction of churning users. Finally, we expect our model to become a robust and powerful tool that helps LBSN applications understand and analyze user churning behavior and, in turn, retain users.
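To make the interpretability idea concrete, the following is a minimal sketch of an attention layer over temporal multivariate user features whose weights can be read out directly for analysis. It is not the authors' actual IAN architecture; the class name FeatureAttention, the GRU encoder, and all dimensions are illustrative assumptions.

    # Minimal sketch: feature-level attention with inspectable weights.
    # Assumed names (FeatureAttention, n_features, hidden_dim) are
    # illustrative, not taken from the paper.
    import torch
    import torch.nn as nn

    class FeatureAttention(nn.Module):
        """Scores each input feature at every time step and returns both
        the churn logit and the attention weights for interpretation."""
        def __init__(self, n_features: int, hidden_dim: int):
            super().__init__()
            self.score = nn.Linear(n_features, n_features)   # one score per feature
            self.encoder = nn.GRU(n_features, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, 1)       # churn logit

        def forward(self, x):
            # x: (batch, time, n_features) -- temporal multivariate user data
            weights = torch.softmax(self.score(x), dim=-1)   # (batch, time, n_features)
            attended = weights * x                           # re-weight the features
            _, h = self.encoder(attended)                    # summarize the sequence
            logit = self.classifier(h[-1])                   # (batch, 1)
            return logit, weights                            # expose weights for analysis

    # Usage: averaging the weights over users and time steps gives a
    # per-feature importance profile that can be visualized directly.
    model = FeatureAttention(n_features=8, hidden_dim=32)
    x = torch.randn(4, 30, 8)                                # 4 users, 30 days, 8 features
    logit, weights = model(x)
    feature_importance = weights.mean(dim=(0, 1))            # (n_features,)

Because the attention weights are produced explicitly rather than buried inside the network, they can be plotted (e.g., as heatmaps over features and time) to show which behavioral signals drive a churn prediction, which is the kind of analysis the abstract describes.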