SEAMS 2019
Sat 25 - Sun 26 May 2019 Montreal, QC, Canada
co-located with ICSE 2019
Sun 26 May 2019 14:15 - 14:30 at Duluth - AI & Adaptivity Chair(s): Hausi Müller

Machine learning tools, like deep neural networks, are perceived to be black boxes. That is, the only way of changing their internal data models is to change the inputs that are used to train these models. This is ineffective in dynamic systems whose inputs are prone to change, for example through concept drift. Alternatively, one could use an ensemble of machine learning tools that automatically handles the dynamics of its inputs by swapping classifiers as needed. However, this solution has proven to be quite resource-consuming. A promising new solution is transparent artificial intelligence based on the notions of interpretation and explanation. The research question is whether we can have a self-adaptive machine learning system that is able to interpret and explain the machine learning model in order to control its internal data models. In this paper, we present our initial thoughts on whether this can be achieved.
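To make the ensemble-swapping alternative mentioned in the abstract concrete, here is a minimal sketch (not the authors' approach) of a prequential, drift-reactive ensemble: every candidate classifier is scored and retrained on each incoming sample, and the deployed classifier is swapped when its windowed accuracy degrades. The class name, window size, and threshold are illustrative assumptions; the per-candidate upkeep is precisely the resource cost the abstract points out.

    # Illustrative sketch only; names and thresholds are assumptions.
    import numpy as np
    from collections import deque
    from sklearn.linear_model import SGDClassifier
    from sklearn.naive_bayes import GaussianNB

    class SwappingEnsemble:
        def __init__(self, window=200, threshold=0.7):
            self.models = [SGDClassifier(), GaussianNB()]
            self.active = 0                                   # index of the deployed model
            self.scores = [deque(maxlen=window) for _ in self.models]
            self.threshold = threshold

        def predict(self, x):
            return self.models[self.active].predict(x)

        def update(self, x, y):
            # Prequential (test-then-train): score every candidate on the new
            # sample, then update all of them incrementally.
            for i, m in enumerate(self.models):
                try:
                    self.scores[i].append(float(m.predict(x)[0] == y[0]))
                except Exception:                             # model not yet fitted
                    self.scores[i].append(0.0)
                m.partial_fit(x, y, classes=np.array([0, 1]))
            # Crude drift reaction: swap the deployed model when its windowed
            # accuracy drops below the threshold and a candidate is doing better.
            acc = [np.mean(s) if s else 0.0 for s in self.scores]
            if acc[self.active] < self.threshold:
                self.active = int(np.argmax(acc))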

Sun 26 May

14:00 - 15:30: SEAMS 2019 - AI & Adaptivity at Duluth
Chair(s): Hausi Müller (University of Victoria, Computer Science, Faculty of Engineering, Canada)
14:00 - 14:15
Talk
AbdElRahman ElSaid, Travis Desell (University of North Dakota), Daniel Krutz (Rochester Institute of Technology)
14:15 - 14:30
Talk
Rogério de Lemos (University of Kent, UK), Marek Grzes (University of Kent)
14:30 - 15:30
Hausi Müller (University of Victoria, Computer Science, Faculty of Engineering, Canada)