Abstract
Relational agents are artificially intelligent (AI) virtual characters that seek to support humans by playing roles typically played by humans. Research has focussed on the believability and utility of relational agents, but little research has explored the ethical acceptability of using AI technology in these roles. We have created four scenarios (three text-based and one using a relational agent), each designed to explore a relational agent in a different relational role and context that encompasses the five AI4People ethical principles (beneficence, non-maleficence, autonomy, justice, and explicability), and to capture participants’ agreement with the scenarios. To model whether participants’ responses are related to their values, we also capture their values using Schwartz’s Theory of Basic Human Values. This paper presents the motivation and design of our study, along with preliminary results.
Original language | English |
---|---|
Title of host publication | Proceedings of the 9th Conference of the Australasian Institute of Computer Ethics (AiCE 2020) |
Subtitle of host publication | Computer Ethics in the New Normal |
Publisher | Australasian Institute of Computer Ethics (AiCE) |
Pages | 1-10 |
Number of pages | 10 |
ISBN (Electronic) | 9780646831077 |
Publication status | Published - 2020 |
Event | Conference of the Australasian Institute of Computer Ethics (9th : 2020) - Virtual, Australia |
Duration | 28 Nov 2020 → 10 Dec 2020 |
Conference
Conference | Conference of the Australasian Institute of Computer Ethics (9th : 2020) |
---|---|
Abbreviated title | AiCE 2020 |
Country/Territory | Australia |
Period | 28/11/20 → 10/12/20 |