Research in cognitive science suggests that humans understand and represent knowledge of the world through causal relationships. Beyond observation, they rely on experimentation and counterfactual reasoning – i.e. referring to an alternative course of events – to identify causal relations and explain atypical situations. Control systems such as smart homes would benefit from a similar causal model, as it would help users understand the system's logic and react appropriately when needed. However, while data-driven methods detect correlations well, they largely fall short of finding causal relations, notably because they are limited to observations: when they detect a correlation between two variables, they struggle to tell the cause from the effect. This paper introduces a new way to learn causal models from a mixture of experiments on the environment and observational data. The core of our method is the use of selected interventions; in particular, and unlike other approaches, our learning accounts for variables on which it is impossible to intervene. The resulting causal model is then used to generate Causal Bayesian Networks, which can later be used to perform diagnostic and predictive inference. We apply our method to a smart home simulation, a use case where knowing causal relations paves the way towards explainable systems. Our algorithm succeeds in generating a Causal Bayesian Network close to the simulation's ground-truth causal interactions, showing encouraging prospects for application in real-life systems.
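The key intuition – that interventions break the observational symmetry between cause and effect – can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a toy two-variable system with hypothetical ground truth X → Y, where forcing (intervening on) a variable cuts its incoming causal influence, mimicking a do-operation:

```python
import random
from statistics import fmean

def sample(n, do_x=None, do_y=None, seed=0):
    """Draw n samples from a toy system with ground truth X -> Y.

    Passing do_x or do_y forces that variable to a fixed value,
    severing its incoming edge (an intervention, in Pearl's sense).
    """
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = do_x if do_x is not None else rng.gauss(0, 1)
        y = do_y if do_y is not None else x + rng.gauss(0, 0.1)
        xs.append(x)
        ys.append(y)
    return xs, ys

# do(X): forcing X to different values shifts Y -> X is a cause of Y.
_, y_low = sample(5000, do_x=0.0)
_, y_high = sample(5000, do_x=2.0)
effect_on_y = fmean(y_high) - fmean(y_low)

# do(Y): forcing Y leaves X's distribution unchanged -> Y does not cause X.
x_low, _ = sample(5000, do_y=0.0)
x_high, _ = sample(5000, do_y=2.0)
effect_on_x = fmean(x_high) - fmean(x_low)

print(round(effect_on_y, 2), round(effect_on_x, 2))  # 2.0 0.0
```

Observational data alone would show a symmetric correlation between X and Y; only the pair of interventions reveals the direction of the edge – which is why purely observational methods cannot, in general, orient it.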
Wed 29 Sep, 13:00 - 14:20 (displayed time zone: Eastern Time, US & Canada)
A Self-Adaptive Load Balancing Approach for Software-Defined Networks in IoT
To do or not to do: finding causal relations in smart homes
Kanvaly Fadiga (École Polytechnique), Ada Diaconescu (LTCI Lab, Télécom Paris, Institut Polytechnique de Paris), Jean-Louis Dessalles (LTCI Lab, Télécom ParisTech, Université Paris-Saclay), Étienne Houzé (Télécom Paris). Pre-print
Self-organized Allocation of Dependent Tasks in Industrial Applications
A Framework for Self-Explaining Systems in the Context of Intensive Care