'Hypernudging': a threat to our moral autonomy?

Accepted version

Richards, Isabel 


It is well recognised that cognitive irrationalities can be exploited to influence behaviour. ‘Hypernudging’ was coined by Karen Yeung to describe a powerful version of this phenomenon in digital systems that use large quantities of user data and machine learning to guide decision-making in highly personalised ways. Authors have worried about the societal impacts of deploying these capabilities at scale in commercial systems, but have only begun to articulate those impacts concretely. In this paper I aim to elucidate one such concern by focusing specifically on the employment of these techniques within social media and considering how it threatens our autonomy in forming moral judgments, that is, our judgments of someone’s actions or character as good or bad. A threat to this autonomy is of real concern because moral judgments and their associated beliefs provide a critical backdrop for what is deemed acceptable in society, both individually and collectively, and therefore for which futures are possible and probable.

In the first two sections I introduce a psychological model that describes how humans reach moral judgments and the conditions under which this process can and cannot be considered autonomous. In the third section I describe how hypernudging within a social media context creates these problematic conditions and so constitutes a threat to our autonomy in forming moral judgments. In the fourth section I explore some practical measures that could be taken to protect moral autonomy. I conclude with some indicative evidence that this threat is not experienced uniformly across all societies, pointing to interesting areas for future research.



Keywords

autonomy, hypernudging, identity-protective cognition, moral judgment, social media

Journal Title

AI and Ethics
