SEAMS 2019
Sat 25 - Sun 26 May 2019 Montreal, QC, Canada
co-located with ICSE 2019
Sun 26 May 2019 14:15 - 14:30 at Duluth - AI & Adaptivity Chair(s): Hausi Müller

Machine learning tools, such as deep neural networks, are often perceived as black boxes: the only way to change their internal data models is to change the inputs used to train them. This is ineffective in dynamic systems whose inputs are prone to change, as in concept drift. Alternatively, one could maintain an ensemble of machine learning tools that automatically handles the dynamics of its inputs by swapping classifiers as needed. However, this solution has proven quite resource-consuming. A promising new alternative is transparent artificial intelligence, based on the notions of interpretation and explanation. The research question is whether we can build a self-adaptive machine learning system that interprets and explains its machine learning model in order to control its internal data models. In this paper, we present our initial thoughts on whether this can be achieved.
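The ensemble approach mentioned in the abstract can be illustrated with a minimal sketch. The class below is purely hypothetical (all names are illustrative, not from the paper): it keeps a sliding window of recent errors per classifier and, when the active classifier's windowed error rate rises above a threshold (a crude concept-drift signal), it swaps in the member with the lowest recent error.

```python
# Illustrative sketch only: a drift-aware ensemble that swaps classifiers
# when the active one's recent error rate suggests concept drift.
from collections import deque


class DriftAwareEnsemble:
    """Routes predictions to one active member; swaps on suspected drift."""

    def __init__(self, members, window=50, threshold=0.4):
        self.members = members          # objects exposing .predict(x)
        self.window = window            # sliding-window length
        self.threshold = threshold      # error rate that triggers a swap
        # one sliding error window per member
        self.errors = {id(m): deque(maxlen=window) for m in members}
        self.active = members[0]

    def predict(self, x):
        return self.active.predict(x)

    def observe(self, x, y_true):
        """Once the true label arrives, update every member's error window
        and swap the active classifier if its drift signal fires."""
        for m in self.members:
            self.errors[id(m)].append(0 if m.predict(x) == y_true else 1)
        win = self.errors[id(self.active)]
        if len(win) == self.window and sum(win) / self.window > self.threshold:
            # drift suspected: switch to the member with the lowest recent error
            self.active = min(
                self.members,
                key=lambda m: sum(self.errors[id(m)]) / max(len(self.errors[id(m)]), 1),
            )
```

As the abstract notes, the cost of this style of solution is that every ensemble member must stay trained and be evaluated on incoming data, which is what motivates the interpretation-based alternative the paper explores.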

Sun 26 May

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
AI & Adaptivity (SEAMS 2019) at Duluth
Chair(s): Hausi Müller (University of Victoria, Computer Science, Faculty of Engineering, Canada)

Is Adaptivity a Core Property of Intelligent Systems? It Depends
AbdElRahman ElSaid, Travis Desell (University of North Dakota), Daniel Krutz (Rochester Institute of Technology)

Self-adaptive AI
Rogério de Lemos (University of Kent, UK), Marek Grzes (University of Kent)

Panel Discussion
Hausi Müller (University of Victoria, Computer Science, Faculty of Engineering, Canada)