The relationship between human values and the ethical design and acceptability of relational agents

Ravi Vythilingam, Deborah Richards, Paul Formosa

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

Abstract

Relational agents are artificially intelligent (AI) virtual characters that seek to support humans by playing roles typically played by humans. Research has focussed on the believability and utility of relational agents, but little research has explored the ethical acceptability of using AI technology in these roles. We have created four scenarios (three text-based and one using a relational agent), each designed to explore a relational agent in different relational roles and contexts that encompass the five AI4People ethical principles (beneficence, non-maleficence, autonomy, justice, and explicability), and used them to capture participants' agreement with the scenarios. To model whether participants' responses are related to their values, we capture participants' values using Schwartz's Theory of Basic Human Values. The motivation and design of our study and preliminary results are presented in this paper.
Original language: English
Title of host publication: Proceedings of the 9th Conference of the Australasian Institute of Computer Ethics (AiCE 2020)
Subtitle of host publication: Computer Ethics in the New Normal
Publisher: Australasian Institute of Computer Ethics (AiCE)
Pages: 1-10
Number of pages: 10
ISBN (Electronic): 9780646831077
Publication status: Published - 2020
Event: Conference of the Australasian Institute of Computer Ethics (9th : 2020) - Virtual, Australia
Duration: 28 Nov 2020 - 10 Dec 2020

Conference

Conference: Conference of the Australasian Institute of Computer Ethics (9th : 2020)
Abbreviated title: AiCE 2020
Country/Territory: Australia
Period: 28/11/20 - 10/12/20
