A statutory right to explanation for decisions generated using artificial intelligence

Joshua Gacutan, Niloufer Selvadurai

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)


As artificial intelligence technologies are increasingly deployed by government and commercial entities to generate automated and semi-automated decisions, the right to an explanation for such decisions has become a critical legal issue. As the internal logic of machine learning algorithms is typically opaque, the absence of a right to explanation can weaken an individual’s ability to challenge such decisions. This article considers the merits of enacting a statutory right to explanation for automated decisions. To this end, the article begins with a theoretical justification for a right to explanation, examining consequentialist and deontological approaches to protection, and then considers the appropriate ambit of such a right, comparing absolute transparency with partial transparency and counterfactual explanations. The article then analyses insights provided by the European Union’s General Data Protection Regulation before concluding with a recommended option for reform to protect the legitimate interests of individuals affected by automated decisions.
Original language: English
Pages (from-to): 193-216
Number of pages: 24
Journal: International Journal of Law and Information Technology
Issue number: 3
Publication status: Published - 2020


  • Artificial intelligence
  • Artificial intelligence regulation
  • Algorithmic bias
  • Algorithmic transparency
  • Automated decision-making
  • General Data Protection Regulation
  • Machine learning governance
  • Privacy
  • Right to explanation

