The bias dilemma: the ethics of algorithmic bias in natural-language processing

Oisín Deery, Katherine Bailey

Research output: Contribution to journal › Article › peer-review


Abstract

Addressing biases in natural-language processing (NLP) systems presents an underappreciated ethical dilemma, which we think underlies recent debates about bias in NLP models. In brief, even if we could eliminate bias from language models or their outputs, we would thereby often withhold descriptively or ethically useful information, despite avoiding perpetuating or amplifying bias. Yet if we do not debias, we can perpetuate or amplify bias, even if we retain relevant descriptively or ethically useful information. Understanding this dilemma provides a useful way of rethinking the ethics of algorithmic bias in NLP.
Original language: English
Pages (from-to): 1-28
Number of pages: 28
Journal: Feminist Philosophy Quarterly
Volume: 8
Issue number: 3/4
DOIs
Publication status: Published - 21 Dec 2022

Bibliographical note

Copyright the Author(s) 2022. Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.

Keywords

  • artificial intelligence
  • algorithms
  • bias
