Explanations in knowledge systems: Design for Explainable Expert Systems

William Swartout, Cécile Paris, Johanna Moore

Research output: Contribution to journal › Article › peer-review


Abstract

The Explainable Expert Systems (EES) framework is discussed. EES focuses on capturing the design aspects that are important for producing good explanations, including justifications of the system's actions, explications of general problem-solving strategies, and descriptions of the system's terminology. EES was developed as part of the Strategic Computing Initiative of the US Department of Defense's Defense Advanced Research Projects Agency (DARPA). EES can represent both the general principles from which a system was derived and how the system was derived from those principles. The Program Enhancement Advisor (PEA), the main prototype on which the explanation work has been developed and tested, is presented. PEA is an advice system that helps users improve their Common Lisp programs by recommending transformations that enhance the user's code. How EES produces better explanations is shown.

Original language: English
Pages (from-to): 58-64
Number of pages: 7
Journal: IEEE Expert
Volume: 6
Issue number: 3
DOIs
Publication status: Published - Jun 1991
