Word vector embeddings hold social ontological relations capable of reflecting meaningful fairness assessments

Published version
Peer-reviewed

Abstract

Programming artificial intelligence (AI) to make fairness assessments of texts through top-down rules, bottom-up training, or hybrid approaches has presented the challenge of defining cross-cultural fairness. This paper presents a simple method that uses word vectors to determine whether a verb is unfair (e.g., slur, insult) or fair (e.g., thank, appreciate). It draws on the relational social ontologies already inherent in word embeddings and thus requires no training. The plausibility of the approach rests on two premises: first, that individuals consider acts fair when they would be willing to accept those acts if done to themselves; second, that this construal is ontologically reflected in word embeddings, by virtue of their ability to capture the dimensions of such a perception, namely responsibility vs. irresponsibility, gain vs. loss, reward vs. sanction, and joy vs. pain, combined into a single vector (FairVec). The paper finds that a verb can be quantified and qualified as fair or unfair by calculating the cosine similarity between the verb's embedding vector and FairVec, which represents the above dimensions. We apply this to GloVe and Word2Vec embeddings. Testing on a list of verbs produces an F1 score of 95.7, which is improved to 97.0. Lastly, we demonstrate the method's applicability to measuring sentences.
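The core computation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy 4-dimensional vectors and the construction of FairVec as the mean of the four "fair minus unfair" difference vectors are assumptions for demonstration; real GloVe/Word2Vec embeddings have hundreds of dimensions and would be loaded from pretrained files.

```python
import numpy as np

# Toy embeddings standing in for GloVe/Word2Vec vectors
# (hypothetical values chosen for illustration only).
emb = {
    "responsibility":   np.array([ 1.0,  0.2,  0.1,  0.3]),
    "irresponsibility": np.array([-1.0, -0.1,  0.0, -0.2]),
    "gain":             np.array([ 0.2,  1.0,  0.1,  0.1]),
    "loss":             np.array([-0.1, -1.0,  0.0, -0.1]),
    "reward":           np.array([ 0.1,  0.2,  1.0,  0.2]),
    "sanction":         np.array([ 0.0, -0.2, -1.0, -0.1]),
    "joy":              np.array([ 0.3,  0.1,  0.2,  1.0]),
    "pain":             np.array([-0.2,  0.0, -0.1, -1.0]),
    "thank":            np.array([ 0.5,  0.4,  0.5,  0.6]),
    "insult":           np.array([-0.4, -0.3, -0.5, -0.6]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# One assumed construction of FairVec: average the difference vectors
# of the four fair-vs-unfair dimension pairs named in the abstract.
pairs = [("responsibility", "irresponsibility"), ("gain", "loss"),
         ("reward", "sanction"), ("joy", "pain")]
fairvec = np.mean([emb[fair] - emb[unfair] for fair, unfair in pairs], axis=0)

def fairness_score(verb):
    """Cosine similarity of a verb's embedding to FairVec; > 0 suggests fair."""
    return cosine(emb[verb], fairvec)

print(fairness_score("thank") > 0)   # True: a fair verb aligns with FairVec
print(fairness_score("insult") < 0)  # True: an unfair verb opposes it
```

With real pretrained embeddings, `emb` would typically be replaced by a lookup into a loaded GloVe or Word2Vec model, and the same sign test applied to each candidate verb.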

Keywords

Fairness, Meaning, Morality, Legal Philosophy, Responsibility, Policy

Journal Title

AI and Society: the journal of human-centered systems and machine intelligence

Journal ISSN

0951-5666
1435-5655

Publisher

Springer

Sponsorship

This research was funded by the European Union’s Horizon 2020 research and innovation programme under the Next Generation Internet TRUST grant agreement no. 825618.