The generation of comprehensible explanations is an essential feature of modern artificial intelligence systems. In this work, we consider “probabilistic logic programming”, an extension of logic programming that can be used to model domains with relational structure and uncertainty. Essentially, a program specifies a probability distribution over possible worlds (i.e., sets of atoms that are assumed to be true). The notion of “explanation” is typically associated with that of a world, so that one often looks for the most probable world or determines the worlds where a given query is true. Unfortunately, such explanations exhibit no causal structure and, thus, the chain of inferences behind a particular prediction (represented by a query) is not shown. In this paper, we propose a novel approach where explanations are represented as programs that are generated from a given query by a number of unfolding-like transformations. Here, both the causal structure and the link to the prediction are explicit. Furthermore, the generated explanations are parametric w.r.t. a specification of “visible” predicates, so that the user can decide the level of detail of the explanations and hide uninteresting details.
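To make the idea of a distribution over possible worlds concrete, the following Python sketch enumerates the worlds of a toy alarm program and computes both the probability of a query and the most probable world in which it holds. The facts, probabilities, and rule (burglary, earthquake, alarm) are illustrative assumptions and are not taken from the paper.

```python
from itertools import product

# Illustrative probabilistic facts (atom -> probability of being true);
# a world is a subset of these facts assumed true.
prob_facts = {"burglary": 0.1, "earthquake": 0.2}

def derive(world):
    """Close a world under the deterministic rules:
    alarm :- burglary.  alarm :- earthquake."""
    atoms = set(world)
    if "burglary" in atoms or "earthquake" in atoms:
        atoms.add("alarm")
    return atoms

def worlds():
    """Enumerate all possible worlds with their probabilities
    (probabilistic facts are assumed independent)."""
    facts = list(prob_facts)
    for choices in product([True, False], repeat=len(facts)):
        world = {f for f, c in zip(facts, choices) if c}
        p = 1.0
        for f, c in zip(facts, choices):
            p *= prob_facts[f] if c else 1 - prob_facts[f]
        yield world, p

query = "alarm"

# Probability of the query: sum over worlds where it is derivable.
p_query = sum(p for w, p in worlds() if query in derive(w))

# Most probable world in which the query holds.
best_world, best_p = max(
    ((w, p) for w, p in worlds() if query in derive(w)),
    key=lambda wp: wp[1],
)

print(f"P({query}) = {p_query:.3f}")
print(f"most probable explaining world: {best_world} (p = {best_p:.3f})")
```

Running the sketch prints P(alarm) = 0.280 and reports {'earthquake'} as the most probable explaining world, which illustrates the limitation discussed above: a single world says nothing about the chain of inferences by which the query was derived.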
Tue 10 May (displayed time zone: Osaka, Sapporo, Tokyo)

21:00 - 21:50 (FLOPS 2022)
21:00 (25m, Talk): Explanations as Programs in Probabilistic Logic Programming. German Vidal (Universitat Politecnica de Valencia)
21:25 (25m, Talk): Program Logic for Higher-Order Probabilistic Programs in Isabelle/HOL. Michikazu Hirata, Yasuhiko Minamide, Tetsuya Sato (Tokyo Institute of Technology)