Self-adaptive AI
Machine learning tools, such as deep neural networks, are often perceived as black boxes: the only way to change their internal data models is to change the inputs used to train them. This is ineffective in dynamic systems whose inputs are prone to change, for example through concept drift. Alternatively, one could use an ensemble of machine learning tools that handles the dynamics of its inputs automatically by swapping classifiers as needed; however, this solution has proven to be quite resource-intensive. A promising new alternative is transparent artificial intelligence, based on the notions of interpretation and explanation. The research question is whether we can build a self-adaptive machine learning system that interprets and explains the machine learning model in order to control its internal data models. In this paper, we present our initial thoughts on whether this can be achieved.
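As a minimal illustration of the classifier-swapping idea mentioned in the abstract (this sketch is not from the paper), the Python code below monitors rolling accuracy over recent labeled samples and retrains the active classifier when accuracy degrades, a common way to react to concept drift. The class name DriftAwareSwapper, the decision-tree base learner, and the window and threshold values are all illustrative assumptions.

from collections import deque
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class DriftAwareSwapper:
    """Sketch: swap in a freshly trained classifier when accuracy drops."""

    def __init__(self, window=200, threshold=0.10):
        self.window = window        # size of the rolling evaluation window
        self.threshold = threshold  # accuracy drop that triggers a swap
        self.model = None           # currently active classifier
        self.baseline = None        # accuracy right after the last (re)train
        self.buffer_X = deque(maxlen=window)  # recent feature vectors
        self.buffer_y = deque(maxlen=window)  # recent labels
        self.hits = deque(maxlen=window)      # rolling correct/incorrect flags

    def _retrain(self):
        # Train a new model on the most recent data and reset the baseline.
        X, y = np.array(self.buffer_X), np.array(self.buffer_y)
        self.model = DecisionTreeClassifier().fit(X, y)
        self.baseline = (self.model.predict(X) == y).mean()
        self.hits.clear()

    def update(self, x, y_true):
        """Score one labeled sample; retrain if rolling accuracy degrades."""
        self.buffer_X.append(x)
        self.buffer_y.append(y_true)
        if self.model is None:
            # Bootstrap: train the first model once enough data has arrived.
            if len(self.buffer_y) == self.window:
                self._retrain()
            return
        self.hits.append(int(self.model.predict([x])[0] == y_true))
        if len(self.hits) == self.window:
            rolling = sum(self.hits) / len(self.hits)
            if self.baseline - rolling > self.threshold:
                self._retrain()  # swap in a model trained on recent data

# Hypothetical usage on a labeled data stream:
#   swapper = DriftAwareSwapper()
#   for x, y in stream:
#       swapper.update(x, y)

The design keeps only one active model at a time, a simplification of the full ensemble approach; maintaining several candidate classifiers and selecting among them would trade extra resource use for faster reaction to drift, which is exactly the cost the abstract points out.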
Sun 26 May (times in Eastern Time, US & Canada)
14:00 - 15:30: AI & Adaptivity (SEAMS 2019 at Duluth)
Chair: Hausi Müller (University of Victoria, Computer Science, Faculty of Engineering, Canada)

14:00 (15m) Talk: Is Adaptivity a Core Property of Intelligent Systems? It Depends
            AbdElRahman ElSaid, Travis Desell (University of North Dakota), Daniel Krutz (Rochester Institute of Technology)
14:15 (15m) Talk: Self-adaptive AI
14:30 (60m) Panel Discussion
            Hausi Müller (University of Victoria, Computer Science, Faculty of Engineering, Canada)