Old Strategies, New Environments: Reinforcement Learning on Social Media.
Accepted version
Peer-reviewed
Abstract
The rise of social media has profoundly altered the social world, introducing new behaviors that can satisfy our social needs. However, it is not yet known whether human social strategies, which are well adapted to the offline world in which we developed, operate as effectively within this new social environment. Here, we describe how the computational framework of reinforcement learning (RL) can help us frame this problem precisely and diagnose where behavior-environment mismatches emerge. The RL framework describes a process by which an agent can learn to maximize their long-term reward. RL, which has proven successful in characterizing human social behavior, consists of three stages: updating expected reward, valuating expected reward by integrating subjective costs such as effort, and selecting an action. Specific social media affordances, such as the quantifiability of social feedback, may interact with the RL process at each of these stages. In some cases, affordances can exploit RL biases that are beneficial offline by violating the environmental conditions under which such biases are optimal, such as when algorithmic personalization of content interacts with confirmation bias. Characterizing the impact of specific aspects of social media through this lens can improve our understanding of how digital environments shape human behavior. Ultimately, this formal framework could help address pressing open questions about social media use, including its changing role across human development and its impact on outcomes such as mental health.
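The three RL stages named in the abstract can be sketched as a simple simulation. This is an illustrative toy model, not the authors' implementation: the two-action setup, parameter values (learning rate, effort costs, inverse temperature), and the "post vs. browse" framing are all assumptions chosen for the example.

```python
import math
import random

def softmax(values, beta):
    """Stage 3 helper: convert valuations into choice probabilities."""
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def simulate(n_trials=100, alpha=0.3, effort_cost=(0.0, 0.2), beta=5.0,
             reward_prob=(0.7, 0.7), seed=0):
    """Toy simulation of the three RL stages from the abstract:
    (1) update expected reward via a prediction error,
    (2) valuate by integrating a subjective effort cost,
    (3) select an action (softmax).
    All parameter values here are hypothetical, for illustration only."""
    rng = random.Random(seed)
    q = [0.5, 0.5]  # expected reward for two actions (e.g., post vs. browse)
    choices = []
    for _ in range(n_trials):
        # Stage 2: valuation = expected reward minus subjective effort cost
        v = [q[a] - effort_cost[a] for a in range(2)]
        # Stage 3: action selection via softmax
        p = softmax(v, beta)
        a = 0 if rng.random() < p[0] else 1
        # Social feedback (e.g., likes received), then
        # Stage 1: prediction-error update of expected reward
        r = 1.0 if rng.random() < reward_prob[a] else 0.0
        q[a] += alpha * (r - q[a])
        choices.append(a)
    return q, choices
```

With equal reward probabilities but a higher effort cost on the second action, the agent comes to prefer the first: the valuation stage, not the reward contingencies alone, drives choice.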
Journal ISSN
1873-2402
Sponsorship
MRC (MC_UU_00030/13)