Certified Logic-Based Explainable AI -- The Case of Monotonic Classifiers
Continued advances in artificial intelligence (AI), including those in machine learning (ML), raise concerns regarding deployment in high-risk and safety-critical domains. Motivated by these concerns, there have been calls for the verification of AI systems, including their explanation. Nevertheless, tools for verifying AI systems are themselves complex, and thus error-prone. This paper describes an initial effort towards the certification of logic-based explainability algorithms, focusing on monotonic classifiers. Concretely, the paper starts by using the proof assistant Coq to prove the correctness of recently proposed algorithms for explaining monotonic classifiers. Then, the paper proves that the algorithms devised for monotonic classifiers also apply to the larger family of stable classifiers. Finally, certified code, extracted from the proofs of correctness, is used to compute explanations that are guaranteed to be correct. The experimental results included in the paper show the scalability of the proposed approach for certifying explanations.
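The explanation algorithms certified in the paper exploit a key consequence of monotonicity: whether freeing a set of features can change the prediction is decided by querying the classifier at just two extreme points (all freed features at their lower bounds, and all at their upper bounds). The sketch below illustrates this query-based scheme in plain, uncertified Python; all names (`find_axp`, `kappa`, the toy threshold classifier) are illustrative assumptions, not the paper's extracted Coq code.

```python
def find_axp(kappa, v, lo, hi):
    """Compute one abductive explanation (AXp) for a monotonic
    (non-decreasing) classifier kappa on instance v, given per-feature
    domain bounds lo/hi. Returns the indices of a subset-minimal set of
    features whose values suffice to fix the prediction."""
    c = kappa(v)
    free = set()
    for i in range(len(v)):
        trial = free | {i}
        # By monotonicity, the prediction is invariant over ALL
        # completions of the freed features iff it is the same at the
        # two extreme completions below.
        lo_pt = [lo[j] if j in trial else v[j] for j in range(len(v))]
        hi_pt = [hi[j] if j in trial else v[j] for j in range(len(v))]
        if kappa(lo_pt) == c == kappa(hi_pt):
            free = trial  # feature i is not needed; keep it free
    return [i for i in range(len(v)) if i not in free]

# Toy monotonic classifier: threshold on a non-negative weighted sum,
# hence non-decreasing in every feature.
def kappa(x):
    return int(2 * x[0] + 1 * x[1] + 0 * x[2] >= 3)

axp = find_axp(kappa, v=[2, 1, 5], lo=[0, 0, 0], hi=[3, 3, 9])
```

Here the loop makes two classifier queries per feature, so one explanation costs a linear number of queries; the paper's contribution is proving (in Coq) that this style of algorithm is correct and extracting code with that guarantee.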
Slides: slides.pdf (303 KiB)
Tue 18 Jul (displayed time zone: London)
15:30 - 16:30 | TAP Session 3: Formal Models (Research Papers) at Oak | Chair(s): Catherine Dubois (ENSIIE Paris-Evry) | Remote participants: Zoom link
15:30 (30m, talk) | Certified Logic-Based Explainable AI -- The Case of Monotonic Classifiers | Research Papers | DOI, file attached
16:00 (30m, talk) | Context Specification Language for Formal Verification of Consent Properties on Models and Code | Research Papers | Presenter: Myriam Clouet (Université Paris-Saclay, CEA, List); Thibaud Antignac (CNIL, Commission nationale de l'informatique et des libertés); Mathilde Arnaud (Université Paris-Saclay, CEA, List); Julien Signoles (Université Paris-Saclay, CEA, List) | DOI, file attached