Neurosurgery, explainable AI, and legal liability

Rita Matulionyte*, Eric Suero Molina, Antonio Di Ieva

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

One of the challenges of AI technologies is their "black box" nature, that is, the lack of explainability and interpretability of these technologies. This chapter explores whether AI systems in healthcare generally, and in neurosurgery specifically, should be explainable, for what purposes, and whether current XAI ("explainable AI") approaches and techniques are able to achieve these purposes. The chapter concludes that XAI techniques, at least currently, are neither the only nor necessarily the best way to achieve trust in AI and to ensure patient autonomy or improved clinical decision-making, and that they are of limited significance in determining liability. Instead, we argue, we need more transparency around AI systems, their training, and their validation, as this information is more likely to achieve these goals.

Original language: English
Title of host publication: Computational neurosurgery
Editors: Antonio Di Ieva, Eric Suero Molina, Sidong Liu, Carlo Russo
Place of publication: Switzerland
Publisher: Springer
Chapter: 34
Pages: 543-553
Number of pages: 11
ISBN (Electronic): 9783031648922
ISBN (Print): 9783031648915, 9783031648946
Publication status: Published - 2024

Publication series

Name: Advances in Experimental Medicine and Biology
Publisher: Springer
Volume: 1462
ISSN (Print): 0065-2598
ISSN (Electronic): 2214-8019

Keywords

  • AI
  • Explainability
  • Law
  • Neurosurgery
