Mutation analysis involves mutating software artifacts, which are then used to evaluate the quality of software verification tools and techniques. It is considered the premier technique for evaluating the fault-revealing effectiveness of test suites, test-generation techniques, and other testing approaches. Ideas derived from mutation analysis have also been used to test artifacts at different levels of abstraction, including requirements, formal specifications, models, architectural design notations, and even informal descriptions. Recently, mutation has played an important role in software engineering for AI, such as in verifying learned models and behaviors. Furthermore, researchers and practitioners have investigated diverse forms of mutation, such as training-data or test-data mutation, in combination with metamorphic testing to evaluate model performance in machine learning and to detect adversarial examples.
Mutation 2023 aims to be the premier forum for practitioners and researchers to discuss recent advances in the area of mutation analysis and propose new research directions. We invite submissions of both full-length and short-length research papers and especially encourage the submission of industry practice papers.
Schedule: Sun 16 Apr (displayed time zone: Dublin)

09:00 - 10:30
- 09:30 (5 min) Day opening: Mutation Opening
- 09:35 (55 min) Keynote: Mutation Testing of Deep Learning Systems: from Real Faults Investigation to Practical Applications. Gunel Jahangirova (King's College London)

11:00 - 12:30
- 11:00 (30 min) Talk: Analysis of mutation operators for FSM testing
- 11:30 (30 min) Talk: A Tool for Mutation Analysis in Racket. Bambi Zhuang (Northeastern University), James Perretta (Northeastern University), Arjun Guha (Northeastern University and Roblox Research), Jonathan Bell (Northeastern University)
- 12:00 (30 min) Talk: The Inversive Relationship Between Bugs and Patches: An Empirical Study (pre-print available)

14:00 - 15:30
- 14:00 (30 min) Talk: Mutation Testing in Continuous Integration: An Exploratory Industrial Case Study. Jonatan Örgård (Chalmers | University of Gothenburg), Gregory Gay (Chalmers | University of Gothenburg), Francisco Gomes de Oliveira Neto (Chalmers University of Technology / University of Gothenburg, Sweden), Kim Viggedal (Zenseact)
- 14:30 (30 min) Talk: Validation of Mutation Testing in the Safety Critical Industry through a Pilot Study. Sten Vercammen (University of Antwerp, Belgium), Markus Borg (CodeScene), Serge Demeyer (University of Antwerp; Flanders Make)
- 15:00 (30 min) Awards: Best paper award and closing
Accepted Papers
Call for Papers
NOTICE (24 Jan 2023): To give authors a bit of extra time, we have decided on a soft deadline: authors may update their submissions until Jan 31, provided they made an initial submission before the original deadline (Jan 27). Please remember to register your paper and submit the title and abstract before Jan 27.
Topics of Interest
Topics of interest include, but are not limited to, the following:
- Evaluation of mutation-based test adequacy criteria, and comparative studies with other test adequacy criteria.
- Formal theoretical analysis of mutation testing.
- Empirical studies on any aspects of mutation testing.
- Mutation based generation of program variants.
- Higher-order mutation testing.
- Mutation testing tools.
- Mutation for mobile, internet, and cloud based systems (e.g., addressing QoS, power consumption, stress testing, performance, etc.).
- Mutation for security and reliability.
- Novel mutation testing applications, and mutation testing in novel domains.
- Industrial experience with mutation testing.
- Mutation for artificial intelligence (e.g., data mutation, model mutation, mutation-based test data generation, etc.).
Types of Submissions
Three types of papers can be submitted to the workshop:
- Full papers (10 pages): Research, case studies.
- Short papers (6 pages): Research in progress, tools.
- Industrial papers (6 pages): Applications and lessons learned in industry.
Each paper must conform to the two-column IEEE conference publication format (please use the letter-format template and the conference option) and must be submitted in PDF format via EasyChair. Submissions will be evaluated according to the relevance and originality of the work and their ability to generate discussion among workshop participants. Each submission will be reviewed by three reviewers, and all accepted papers will be published as part of the ICST proceedings.
Mutation 2023 will employ a double-anonymous review process. Authors must make every effort to anonymize their papers to hide their identities throughout the review process. See the double-anonymous QnA page for more information.
Special Issue
Selected papers will be invited to submit extended versions of their manuscripts for a special issue on Mutation Analysis in the Journal of Software: Testing, Verification and Reliability (STVR).
Important Dates
- Submission deadline: Fri 27 Jan 2023
- Notification of acceptance: Fri 17 Feb 2023
- Camera-ready: Thu 2 Mar 2023
- Workshop date: Sun 16 Apr 2023
Organization
- Renzo Degiovanni, University of Luxembourg, Luxembourg
- Donghwan Shin, University of Sheffield, UK
Keynote
Mutation Testing of Deep Learning Systems: from Real Faults Investigation to Practical Applications
by Gunel Jahangirova
Abstract
Deep Learning (DL) is increasingly adopted to solve complex tasks such as image recognition or autonomous driving. Companies are considering the inclusion of DL components in production systems, but one of their main concerns is how to assess the quality of such systems. Mutation testing is a technique for injecting artificial faults into a system, under the assumption that the capability to expose (kill) such artificial faults translates into the capability to expose real faults as well.
In this talk, I will provide an overview of our work in adapting the idea behind mutation testing to DL systems. First, I will cover our investigation of what constitutes a fault in a DL system. Then, I will introduce DeepCrime, a mutation testing tool for DL systems based on the set of real faults that we have collected. Lastly, I will go through the implemented applications of mutation testing to various software-testing tasks, such as test input generation, fault localisation, and test oracle generation, in the domain of autonomous systems.
Bio
Gunel Jahangirova is a Lecturer (Assistant Professor) at King's College London in the United Kingdom. Her current research interests include automatic generation of assertion oracles, error propagation in software programs, mutation testing of deep learning systems, and oracles and quality metrics for autonomous vehicles. Before joining KCL, she worked as a postdoctoral researcher at the Software Institute of Università della Svizzera Italiana (USI) in Lugano, Switzerland. She obtained her PhD in a joint programme between Fondazione Bruno Kessler, Trento, Italy, and University College London, London, UK, during which she was advised by Prof. Paolo Tonella and Dr. David Clark (first supervisors) and Prof. Mark Harman (second supervisor). Her PhD work focused on the oracle problem in software testing, in particular the assessment, improvement, and placement of test oracles.
When
Sunday 16th April at 9:30.