In platforms we trust: misinformation on social networks in the presence of social mistrust
Preprint
Abstract
We examine the effect of social mistrust on the propagation of misinformation in a social network. Agents communicate with each other and observe information sources, changing their opinion with a probability determined by their social trust, which is either low or high. Low social trust agents are less likely to be convinced out of their opinion by their peers and, in line with recent empirical literature, are more likely to observe information sources that spread misinformation. A platform facilitates the creation of a homophilic network in which users are more likely to connect with agents of the same level of social trust and the same social characteristics. Networks in which worldview is relatively important in determining network structure have more pronounced echo chambers, reducing the extent to which high and low social trust agents interact. Because these interactions are asymmetric, echo chambers then decrease the probability that agents believe misinformation. At the same time, they increase polarisation, as disagreeing agents interact less frequently, leading to a trade-off with implications for the optimal intervention of a platform that wishes to reduce misinformation. We characterise this intervention by delineating the most effective change in the platform's algorithm, which for peer-to-peer connections involves reducing the extent to which relatively isolated high and low social trust agents interact with one another.
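To make the dynamics concrete, the sketch below simulates a minimal agent-based version of the model summarised above. All parameter values, and details such as how sources and neighbours are sampled, are illustrative assumptions, not values or procedures taken from the paper.

```python
import random

# Hypothetical parameters -- chosen for illustration only.
N = 200                  # number of agents
P_LOW_TRUST = 0.5        # share of low social trust agents
P_HOMOPHILY = 0.8        # probability a candidate edge between same-trust agents is kept
P_CONVINCE = {"high": 0.6, "low": 0.2}        # listener adopts a disagreeing peer's opinion
P_MISINFO_SRC = {"high": 0.1, "low": 0.4}     # observed source is misinformative
STEPS = 5000

random.seed(0)

# Each agent has a trust type and a binary opinion (True = believes misinformation).
trust = ["low" if random.random() < P_LOW_TRUST else "high" for _ in range(N)]
opinion = [False] * N

# Homophilic network: sample candidate edges, keeping same-trust pairs more often.
edges = set()
while len(edges) < 3 * N:
    i, j = random.sample(range(N), 2)
    keep_prob = P_HOMOPHILY if trust[i] == trust[j] else 1 - P_HOMOPHILY
    if random.random() < keep_prob:
        edges.add((min(i, j), max(i, j)))

neighbours = {i: [] for i in range(N)}
for a, b in edges:
    neighbours[a].append(b)
    neighbours[b].append(a)

for _ in range(STEPS):
    i = random.randrange(N)
    # Observe an information source: low-trust agents see misinformation more often.
    if random.random() < P_MISINFO_SRC[trust[i]]:
        opinion[i] = True
    # Communicate with a random neighbour: adoption depends on the listener's trust,
    # so low-trust agents are harder to convince out of their current opinion.
    if neighbours[i]:
        j = random.choice(neighbours[i])
        if opinion[j] != opinion[i] and random.random() < P_CONVINCE[trust[i]]:
            opinion[i] = opinion[j]

print(f"share believing misinformation after {STEPS} steps: {sum(opinion) / N:.2f}")
```

Raising `P_HOMOPHILY` strengthens the echo chambers described in the abstract: high and low social trust agents interact less, which under this toy parameterisation reduces how often misinformation crosses between the two groups.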