Designing explainable artificial intelligence with active inference: a framework for transparent introspection and decision-making

Mahault Albarracin*, Inês Hipólito, Safae Essafi Tremblay, Jason G. Fox, Gabriel René, Karl Friston, Maxwell J. D. Ramstead

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

5 Citations (Scopus)

Abstract

This paper investigates the prospect of developing human-interpretable, explainable artificial intelligence (AI) systems based on active inference and the free energy principle. We first provide a brief overview of active inference, and in particular, of how it applies to the modeling of decision-making and introspection, as well as to the generation of overt and covert actions. We then discuss how active inference can be leveraged to design explainable AI systems, namely, by allowing us to model core features of “introspective” processes and by generating useful, human-interpretable models of the processes involved in decision-making. We propose an architecture for explainable AI systems using active inference. This architecture foregrounds the role of an explicit hierarchical generative model, the operation of which enables the AI system to track and explain the factors that contribute to its own decisions, and whose structure is designed to be interpretable and auditable by human users. We outline how this architecture can integrate diverse sources of information to make informed decisions in an auditable manner, mimicking or reproducing aspects of human-like consciousness and introspection. Finally, we discuss the implications of our findings for future research in AI, and the potential ethical considerations of developing AI systems with (the appearance of) introspective capabilities.
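To make the idea of auditable, model-based decision-making concrete, here is a minimal sketch (in Python with NumPy) of a one-step discrete active inference agent. It assumes a hypothetical two-state, two-outcome, two-action generative model; the A, B, and C arrays below are illustrative placeholders, not taken from the paper, and the sketch uses the standard decomposition of expected free energy into risk and ambiguity rather than the paper's specific architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def entropy(p, eps=1e-16):
    return -np.sum(p * np.log(p + eps))

# Hypothetical generative model: 2 hidden states, 2 outcomes, 2 actions.
A = np.array([[0.9, 0.2],   # P(o | s): rows index outcomes, columns index states
              [0.1, 0.8]])
B = np.array([[[0.9, 0.1],  # P(s' | s, u=0): "stay" action
               [0.1, 0.9]],
              [[0.1, 0.9],  # P(s' | s, u=1): "switch" action
               [0.9, 0.1]]])
C = softmax(np.array([2.0, 0.0]))  # prior preferences over outcomes

def expected_free_energy(q_s, u, eps=1e-16):
    """Return the risk and ambiguity terms of G for a one-step policy u."""
    q_s_next = B[u] @ q_s              # predicted state beliefs after action u
    q_o = A @ q_s_next                 # predicted outcome distribution
    risk = np.sum(q_o * (np.log(q_o + eps) - np.log(C + eps)))  # KL[q(o) || C]
    ambiguity = q_s_next @ np.array([entropy(A[:, s]) for s in range(A.shape[1])])
    return risk, ambiguity

q_s = np.array([0.5, 0.5])             # current posterior over hidden states
trace = []                             # human-readable audit trail of the decision
G = np.zeros(B.shape[0])
for u in range(B.shape[0]):
    risk, ambiguity = expected_free_energy(q_s, u)
    G[u] = risk + ambiguity
    trace.append((u, risk, ambiguity))

p_u = softmax(-G)                      # action selection: softmax over negative G
for u, risk, ambiguity in trace:
    print(f"action {u}: risk={risk:.3f}  ambiguity={ambiguity:.3f}  G={G[u]:.3f}")
print("P(action):", np.round(p_u, 3))
```

Because each candidate action's score is stored together with its risk and ambiguity components, a human auditor can read off why an action was favoured, for example, whether it was chosen to avoid ambiguous observations or to realize preferred outcomes, which is the kind of transparency the proposed architecture foregrounds.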

Original language: English
Title of host publication: Active inference
Subtitle of host publication: 4th International Workshop, IWAI 2023: revised selected papers
Editors: Christopher L. Buckley, Daniela Cialfi, Pablo Lanillos, Maxwell Ramstead, Noor Sajid, Hideaki Shimazaki, Tim Verbelen, Martijn Wisse
Place of Publication: Cham, Switzerland
Publisher: Springer, Springer Nature
Pages: 123-144
Number of pages: 22
ISBN (Electronic): 9783031479588
ISBN (Print): 9783031479571
DOIs
Publication status: Published - 2024
Event: International Workshop on Active Inference (4th: 2023), Ghent, Belgium
Duration: 13 Sept 2023 – 15 Sept 2023

Publication series

Name: Communications in Computer and Information Science
Volume: 1915 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937

Conference

Conference: International Workshop on Active Inference (4th: 2023)
Abbreviated title: IWAI 2023
Country/Territory: Belgium
City: Ghent
Period: 13/09/23 – 15/09/23

Keywords

  • Active Inference
  • Artificial intelligence
  • Explainability
