Explaining Explanations in Probabilistic Logic Programming
The emergence of tools based on artificial intelligence has also led to the need to produce explanations that are understandable by a human being. In most approaches, the system is considered a \emph{black box}, making it difficult to generate appropriate explanations. In this work, though, we consider a setting where models are \emph{transparent}: probabilistic logic programming (PLP), a paradigm that combines logic programming for knowledge representation and probability to model uncertainty. However, given a query, the usual notion of \emph{explanation} is associated with a set of choices, one for each random variable of the model. Unfortunately, such a set does not explain \emph{why} the query is true and, in fact, it may contain choices that are actually irrelevant to the considered query. To improve this situation, we present in this paper an approach to explaining explanations which is based on defining a new query-driven inference mechanism for PLP where proofs are labeled with \emph{choice expressions}, a compact and easy-to-manipulate representation for sets of choices. The combination of proof trees and choice expressions allows us to produce comprehensible query justifications with a causal structure.
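The abstract's point about irrelevant choices can be illustrated with a minimal sketch of PLP's distribution semantics. The tiny program below is a hypothetical example (not from the paper): two probabilistic facts and one rule. A classical explanation fixes a value for \emph{every} random variable, so it includes a choice for \texttt{heads(c2)} even though the query \texttt{win} depends only on \texttt{heads(c1)}.

```python
from itertools import product

# Hypothetical ProbLog-style program, assumed for illustration:
#   0.4 :: heads(c1).    0.6 :: heads(c2).
#   win :- heads(c1).
facts = {"heads(c1)": 0.4, "heads(c2)": 0.6}

def query_win(world):
    # The rule "win :- heads(c1)." — the query ignores heads(c2).
    return world["heads(c1)"]

def prob(query):
    """Sum the probabilities of all total choices (worlds) where the query holds."""
    total = 0.0
    for values in product([True, False], repeat=len(facts)):
        world = dict(zip(facts, values))
        p = 1.0
        for fact, chosen in world.items():
            p *= facts[fact] if chosen else 1.0 - facts[fact]
        if query(world):
            total += p
    return total

print(prob(query_win))  # 0.4 — heads(c2) contributes nothing to the query
```

Two of the four worlds make \texttt{win} true, yet both differ only in the irrelevant choice for \texttt{heads(c2)}; a proof-tree-based justification, as proposed in the paper, would mention only \texttt{heads(c1)}.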
Wed 23 Oct (time zone: Osaka, Sapporo, Tokyo)
16:00 - 17:00 | Probabilistic and Declarative Programming (Research Papers) at Yamauchi Hall. Chair(s): Oleg Kiselyov (Tohoku University)
16:00 (30m, Talk) | Hybrid Verification of Declarative Programs with Arithmetic Non-Fail Conditions. Research Papers. Michael Hanus (Kiel University)
16:30 (30m, Talk) | Explaining Explanations in Probabilistic Logic Programming. Research Papers. Germán Vidal (Universitat Politècnica de València)