AI & Adaptivity
Machine learning tools, such as deep neural networks, are often perceived as black boxes: the only way to change their internal data models is to change the inputs used to train them. This is ineffective for dynamic systems whose inputs change over time, for example through concept drift. Alternatively, one could deploy an ensemble of machine learning tools that handles input dynamics automatically by swapping classifiers as needed; however, this solution has proven quite resource-intensive. A promising new direction is transparent artificial intelligence, based on the notions of interpretation and explanation. The research question is whether we can build a self-adaptive machine learning system that interprets and explains the machine learning model in order to control its internal data models. In this paper, we present our initial thoughts on whether this can be achieved.
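The ensemble alternative mentioned in the abstract can be illustrated with a minimal sketch: monitor prediction accuracy over a sliding window and swap in a freshly trained classifier when accuracy degrades, a common symptom of concept drift. All class and function names, window sizes, and thresholds below are illustrative assumptions, not the authors' implementation.

```python
from collections import deque


class DriftAwareEnsemble:
    """Sketch of classifier swapping under concept drift (assumed design,
    not the authors' system): when windowed accuracy drops below a
    threshold, retrain on recent samples and replace the active model."""

    def __init__(self, train_fn, win_size=20, threshold=0.7):
        self.train_fn = train_fn              # recent (x, y) pairs -> classifier
        self.accuracy = deque(maxlen=win_size)  # 1 = correct, 0 = wrong
        self.samples = deque(maxlen=win_size)   # recent labeled samples
        self.threshold = threshold
        self.model = None
        self.swaps = 0

    def predict(self, x):
        # Default prediction before any model has been trained.
        return self.model(x) if self.model else 0

    def update(self, x, y):
        # Record whether the current model classified this sample correctly.
        self.accuracy.append(1 if self.predict(x) == y else 0)
        self.samples.append((x, y))
        acc = sum(self.accuracy) / len(self.accuracy)
        # Swap classifiers on first use, or once a full window of
        # observations shows accuracy below the drift threshold.
        if self.model is None or (
            len(self.accuracy) == self.accuracy.maxlen and acc < self.threshold
        ):
            self.model = self.train_fn(list(self.samples))
            self.accuracy.clear()
            self.swaps += 1


def majority_trainer(samples):
    """Toy stand-in classifier: always predict the majority label."""
    labels = [y for _, y in samples]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority


# Usage: a stream whose labels flip halfway (simulated concept drift).
ens = DriftAwareEnsemble(majority_trainer)
stream = [(i, 0) for i in range(100)] + [(i, 1) for i in range(100)]
for x, y in stream:
    ens.update(x, y)
```

After the label flip, the windowed accuracy collapses and triggers retraining, so the ensemble recovers without any manual change to its training inputs; the resource cost noted in the abstract shows up here as repeated retraining.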
Sun 26 May (Eastern Time, US & Canada)
14:00 - 15:30
Is Adaptivity a Core Property of Intelligent Systems? It Depends
Self-adaptive AI
Panel Discussion
Hausi Müller, University of Victoria, Computer Science, Faculty of Engineering, Canada