This project aims to develop philosophically and scientifically informed criteria for deciding whether artificial agents can be responsible for their behaviour. Its significance lies in the fact that artificial agents are becoming increasingly prevalent in contemporary society while raising moral problems that remain unresolved. Expected outcomes include influencing how artificially intelligent agents (especially moral ones) are built, and clarifying who is legally liable or responsible for harms such systems may cause. The anticipated benefit is a comprehensive account of agency that can guide the development of artificial agents and inform our dealings with such agents in society and in the law.
Effective start/end date: 1/07/20 → 30/06/23