Abstract
Why-questions ask for explanations; they try to get to the bottom of the matter. In this paper, we show how the complete structure of probabilistic rules for a question-answering system can be learned from example interpretations. A meta-interpreter then uses these rules to answer why-questions for a particular application domain with explanations.
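To illustrate the kind of behaviour the abstract describes, here is a minimal sketch of a backward-chaining meta-interpreter over probabilistic rules in Python. It is not the system from the paper: the toy rule base, predicate names, probabilities, and the independence assumption used to score each explanation are all illustrative assumptions.

```python
from itertools import product

# Hypothetical toy knowledge base (illustrative only):
# each rule is a (head, body, probability) triple.
RULES = [
    ("wet(grass)", ["rain"], 0.8),          # rain usually wets the grass
    ("wet(grass)", ["sprinkler_on"], 0.9),  # so does the sprinkler
    ("rain", [], 0.3),                      # probabilistic facts
    ("sprinkler_on", [], 0.5),
]

def explain(goal, rules=RULES):
    """Yield (explanation, probability) pairs for `goal`.

    An explanation is the list of rule applications in one proof; its
    probability is the product of the rule probabilities, under the
    simplifying assumption that rules fire independently.
    """
    for head, body, prob in rules:
        if head != goal:
            continue
        # Recursively collect the explanations of every subgoal.
        sub_expls = [list(explain(subgoal, rules)) for subgoal in body]
        # Each choice of one sub-proof per subgoal yields one explanation.
        for combo in product(*sub_expls):
            steps = [f"{head} :- {', '.join(body) or 'true'}  ({prob})"]
            p = prob
            for sub_steps, sub_p in combo:
                steps.extend(sub_steps)
                p *= sub_p
            yield steps, p

if __name__ == "__main__":
    for steps, p in explain("wet(grass)"):
        print(f"Why wet(grass)?  (probability {p:.2f})")
        for step in steps:
            print("  because", step)
```

Running the sketch prints one explanation per proof of `wet(grass)` (one via `rain`, one via `sprinkler_on`), each listing the rules used together with a probability score.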
| Original language | English |
| --- | --- |
| Number of pages | 9 |
| Publication status | Published - 2019 |
| Event | 1st International Workshop on Interpretability: Methodologies and Algorithms (IMA 2019), part of AI 2019 and AusDM 2019 - Adelaide, Australia. Duration: 2 Dec 2019 → 2 Dec 2019 |
Conference
| Conference | 1st International Workshop on Interpretability: Methodologies and Algorithms (IMA 2019), part of AI 2019 and AusDM 2019 |
| --- | --- |
| Country/Territory | Australia |
| City | Adelaide |
| Period | 2/12/19 → 2/12/19 |
Keywords
- why-questions
- explainability
- probabilistic logic programming
- probabilistic rule learning