Making moral machines

why we need artificial moral agents

Paul Formosa*, Malcolm Ryan

*Corresponding author for this work

Research output: Contribution to journal › Article

Abstract

As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis of the relevant arguments for and against creating AMAs, and we argue that, all things considered, we have strong reasons to continue to responsibly develop AMAs. The key contributions of this paper are threefold. First, to provide the first comprehensive response to the important arguments made against AMAs by van Wynsberghe and Robbins (in "Critiquing the Reasons for Making Artificial Moral Agents", Science and Engineering Ethics 25, 2019) and to introduce several novel lines of argument in the process. Second, to collate and thematise for the first time the key arguments for and against AMAs in a single paper. Third, to recast the debate away from blanket arguments for or against AMAs in general, to a more nuanced discussion about what sort of AMAs, in what sort of contexts, and for what sort of purposes, it is morally appropriate to use.

Original language: English
Number of pages: 13
Journal: AI and Society
Publication status: E-pub ahead of print, 3 Nov 2020

Keywords

  • Artificial intelligence (AI)
  • Artificial moral agents (AMA)
  • Autonomous vehicle ethics
  • Machine ethics
  • Moral machines
  • Robot ethics
