As artificial intelligence technologies are increasingly deployed by government and commercial entities to generate automated and semi-automated decisions, the right to an explanation for such decisions has become a critical legal issue. Because the internal logic of machine learning algorithms is typically opaque, the absence of a right to explanation can weaken an individual’s ability to challenge such decisions. This article considers the merits of enacting a statutory right to explanation for automated decisions. To this end, it begins with a theoretical justification for a right to explanation, examining consequentialist and deontological approaches to protection, and then considers the appropriate ambit of such a right, comparing absolute transparency with partial transparency and counterfactual explanations. The article then analyses insights provided by the European Union’s General Data Protection Regulation before concluding with a recommended option for reform to protect the legitimate interests of individuals affected by automated decisions.
- Number of pages: 24
- Journal: International Journal of Law and Information Technology
- Publication status: Published - 2020
- Artificial intelligence regulation
- Statutory right to explanation for decisions generated using artificial intelligence
- Algorithmic bias
- Machine learning governance
- Artificial intelligence
- Algorithmic transparency
- Automated decision-making
- Right to explanation
- General Data Protection Regulation