Exploring the Fundamentals of Mutations in Deep Neural Networks
The increasing popularity of deep neural networks (DNNs) has led to the adaptation of mutation analysis from classical software development to the machine learning (ML) paradigm. However, determining what to mutate, and to what extent, remains a challenge. Two questions are therefore central: (i) which ML artifacts can be modified to produce acceptable mutants, and (ii) how large a change in a given metric qualifies a mutant as useful? On the first question, current research offers contradictory perspectives on which ML artifacts are suitable for mutation. In this paper, we argue that the ML development process resembles formal-method-based development, drawing parallels between iterative refinement in ML and in formal specification-based development. This framing supports the injection of bugs into the training program, the training data, and the trained model. On the second question, existing ML mutant selection criteria focus on semantic aspects such as prediction accuracy and error rate, neglecting the magnitude of syntactic changes. This oversight challenges the validity of foundational hypotheses of mutation analysis, such as the competent programmer hypothesis and the coupling effect, in the ML setting. Our observations support addressing these fundamental tasks to enhance the realism of mutation analysis in ML contexts, and we outline plans to tackle the identified challenges.
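To make the mutation targets named in the abstract concrete, the sketch below illustrates two of the three: mutating training data (label noise) and mutating a trained model (weight perturbation), with a comment noting that a training-program mutation would be a source-level edit. This is a minimal illustration, not the authors' tooling; the function names mutate_labels and mutate_weights and all parameter values are assumptions made here for the example. Note that the perturbation scale (fraction, sigma) is exactly the kind of "magnitude of syntactic change" the abstract argues existing selection criteria ignore.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training-data mutation: reassign a fraction of labels at random. ---
def mutate_labels(y, fraction=0.05, num_classes=10, rng=rng):
    """Return a copy of y with `fraction` of labels replaced by random classes.

    `fraction` is the syntactic size of the mutation on the data artifact.
    """
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = rng.integers(0, num_classes, size=len(idx))
    return y

# --- Trained-model mutation: add Gaussian noise to a weight matrix. ---
def mutate_weights(W, sigma=0.01, rng=rng):
    """Return a perturbed copy of W; `sigma` controls the mutation magnitude."""
    return W + rng.normal(0.0, sigma, size=W.shape)

# --- Training-program mutation (not shown as code here): a source-level
# edit to the training script before retraining, e.g. changing a
# hyperparameter such as the learning rate or swapping a loss function.

if __name__ == "__main__":
    y = rng.integers(0, 10, size=1000)
    y_mut = mutate_labels(y, fraction=0.05)
    print("labels changed:", int((y != y_mut).sum()))

    W = rng.normal(size=(4, 4))
    W_mut = mutate_weights(W, sigma=0.01)
    print("mean |delta|:", float(np.abs(W - W_mut).mean()))
```

Measuring the mutant's effect on prediction accuracy (the semantic criterion) together with fraction or sigma (the syntactic magnitude) would allow both selection criteria discussed in the abstract to be applied to the same mutant.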
Tue 24 Sep (times shown in the Amsterdam/Berlin/Bern/Rome/Stockholm/Vienna time zone)

Session: 11:00 - 12:30
- 11:00 (30m, Talk): A Comparative Study of Large Language Models for Goal Model Extraction. SAM Conference. Vaishali Siddeshwar (Ontario Tech University), Sanaa Alwidian (University of Montreal), Masoud Makrehchi (Ontario Tech University)
- 11:30 (30m, Talk): Exploring the Fundamentals of Mutations in Deep Neural Networks. SAM Conference
- 12:00 (30m, Day closing): Closing Ceremony. SAM Conference