ICST 2025
Mon 31 March - Fri 4 April 2025 Naples, Italy

Mutation analysis involves mutating software artifacts, which are then used to evaluate the quality of software verification tools and techniques. It is considered the premier technique for evaluating the fault-revealing effectiveness of test suites, test generation techniques, and other testing approaches.
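
As a toy illustration of the core idea (this sketch is not from the workshop materials; the function and variable names are invented), a mutation tool seeds an artificial fault into a program and checks whether the existing test suite detects, or "kills", the resulting mutant:

```python
# Toy mutation analysis sketch: apply one mutation operator
# (arithmetic operator replacement, '+' -> '-') and check
# whether the test suite kills the resulting mutant.

def run_tests(program):
    """A tiny test suite: True if every test passes."""
    return program(2, 3) == 5 and program(0, 0) == 0

original_src = "def add(a, b):\n    return a + b"
mutant_src = original_src.replace("a + b", "a - b")  # the seeded fault

def load(src):
    """Compile a source string and return its 'add' function."""
    namespace = {}
    exec(src, namespace)
    return namespace["add"]

assert run_tests(load(original_src))      # the suite passes on the original
killed = not run_tests(load(mutant_src))  # a good suite fails on the mutant
print("mutant killed" if killed else "mutant survived")  # prints "mutant killed"
```

A real tool (e.g., PIT for Java or mutmut for Python) applies many such operators across a whole codebase and reports a mutation score: the fraction of generated mutants that the test suite kills.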

Ideas derived from mutation analysis have also been used to test artifacts at different levels of abstraction, including requirements, formal specifications, models, architectural design notations, and even informal descriptions. Recently, mutation has played an important role in software engineering for AI, such as in verifying trained models and behaviors. Furthermore, researchers and practitioners have investigated diverse forms of mutation, such as training data or test data mutation, in combination with metamorphic testing to evaluate model performance in machine learning and to detect adversarial examples.

Aiming to be the premier forum for practitioners and researchers to discuss recent advances in mutation analysis and to propose new research directions, Mutation 2025 will feature keynote and invited talks, and will invite submissions of full- and short-length research papers, full- and short-length industry papers, and ‘Hot Off the Press’ presentations.

Important Dates

  • Submission deadline: January 17th, 2025
  • Notification of acceptance: February 3rd, 2025
  • Camera-ready: March 8th, 2025
  • Workshop date: April 1st, 2025

Plenary

This program is tentative and subject to change.


Tue 1 Apr

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

08:00 - 09:00: Registration (Social) at Building Hall

09:00 - 10:00: Keynote 1 (Mutation) at Room C
  • Keynote: Mutation Testing in the Trenches. Gregory Gay (Chalmers University of Technology and University of Gothenburg)
10:00 - 10:30: Code Critters (Mutation) at Room C
  • Talk: Code Critters Presentation. Philipp Straubinger (University of Passau)
10:30 - 11:00: Coffee break (Social)

11:00 - 12:30: Technical Program (Mutation) at Room C
  • 11:00 (30m) Paper: Equivalent Mutants: Deductive Verification to the Rescue. Serge Demeyer (University of Antwerp and Flanders Make vzw), Reiner Hähnle (Technical University of Darmstadt)
  • 11:30 (30m) Paper: Exploring Robustness of Image Recognition Models on Hardware Accelerators. Nikolaos Louloudakis (University of Edinburgh), Perry Gibson (University of Glasgow), José Cano (University of Glasgow), Ajitha Rajan (University of Edinburgh)
  • 12:00 (30m) Paper: Semantic-Preserving Transformations as Mutation Operators: A Study on Their Effectiveness in Defect Detection. Max Hort (Simula Research Laboratory), Linas Vidziunas, Leon Moonen (Simula Research Laboratory and BI Norwegian Business School)
12:30 - 14:00: Lunch (Social) at Room A3

14:00 - 15:00: Keynote 2 (Mutation) at Room C
  • Keynote: Mutation Testing at Meta. Mark Harman (Meta Platforms, Inc. and UCL)
15:00 - 15:30: Technical Program (Mutation) at Room C
  • Paper: Mutation Testing via Iterative Large Language Model-driven Scientific Debugging. Philipp Straubinger (University of Passau), Marvin Kreis (University of Passau), Stephan Lukasczyk (JetBrains Research), Gordon Fraser (University of Passau). Pre-print available.
15:30 - 16:00: Coffee break (Social)

16:00 - 16:30: Technical Program (Mutation) at Room C
  • Paper: Intent-Based Mutation Testing: From Naturally Written Programming Intents to Mutants. Asma Hamidi (University of Luxembourg), Ahmed Khanfir (Mediterranean Institute of Technology, South Mediterranean University, Tunisia), Mike Papadakis (University of Luxembourg)
16:30 - 17:30: Panel (Mutation)
  • Panelists: Lionel Briand (University of Ottawa, Canada; Lero centre, University of Limerick, Ireland), Shin Yoo (Korea Advanced Institute of Science and Technology), Gunel Jahangirova (King's College London), Gregory Gay (Chalmers University of Technology and University of Gothenburg), Mike Papadakis (University of Luxembourg)
17:30 - 17:40: Closing (Mutation) at Room C


Call for Papers

Mutation analysis involves mutating software artifacts, which are then used to evaluate the quality of software verification tools and techniques. It is considered the premier technique for evaluating the fault-revealing effectiveness of test suites, test generation techniques, and other testing approaches. Ideas derived from mutation analysis have also been used to test artifacts at different levels of abstraction, including requirements, formal specifications, models, architectural design notations, and even informal descriptions. Recently, mutation has played an important role in software engineering for AI, such as in verifying learned models and behaviors. Furthermore, researchers and practitioners have investigated diverse forms of mutation, such as training data or test data mutation, in combination with metamorphic testing to evaluate model performance in machine learning and to detect adversarial examples.

Mutation 2025 aims to be the premier forum for practitioners and researchers to discuss recent advances in the area of mutation analysis and propose new research directions. We invite submissions of both full-length and short-length research papers and especially encourage the submission of industry practice papers.

Topics of Interest

Topics of interest include, but are not limited to, the following:

  • Evaluation of mutation-based test adequacy criteria, and comparative studies with other test adequacy criteria.
  • Formal theoretical analysis of mutation testing.
  • Empirical studies on any aspects of mutation testing.
  • Mutation based generation of program variants.
  • Higher-order mutation testing.
  • Mutation testing tools.
  • Mutation for mobile, internet, and cloud-based systems (e.g., addressing QoS, power consumption, stress testing, performance, etc.).
  • Mutation for security and reliability.
  • Novel mutation testing applications, and mutation testing in novel domains.
  • Industrial experience with mutation testing.
  • Mutation for artificial intelligence (e.g., data mutation, model mutation, mutation-based test data generation, etc.).

Types of Submissions

Five types of papers can be submitted to the workshop:

  • Full papers (10 pages): Research, case studies.
  • Short papers (6 pages): Research in progress, tools.
  • Full industrial papers (6 pages): Applications and lessons learned in industry.
  • Short industrial papers (2 pages): Mutation testing in practice reports.
  • Hot Off the Press (1-page abstract): Presentation of work recently published in other venues.

Each paper must conform to the two-column IEEE conference publication format (please use the letter format template and conference option) and must be submitted in PDF format via EasyChair (select “The 20th International Workshop on Mutation Analysis”). Submissions will be evaluated according to the relevance and originality of the work and to their ability to generate discussion among workshop participants. Each submission will be reviewed by three reviewers, and all accepted papers will be published as part of the ICST proceedings (except for Hot Off the Press submissions).

Mutation 2025 will employ a double-anonymous review process (except for Hot Off the Press and industry submissions). Authors must make every effort to anonymize their papers to hide their identities throughout the review process. See the double-anonymous Q&A page for more information.

Industry papers

Industry papers should be given the keyword “industry” and are not subject to the double anonymity policy.

Hot Off the Press

Hot Off the Press submissions should contain 1) a short summary of the paper’s contribution, 2) an explanation of why those results are particularly interesting for MUTATION attendees, and 3) a link to the paper. Their title should start with “HOP:”. The original paper must have been published no earlier than January 1st, 2022.

Important Dates

  • Submission deadline: January 17th, 2025
  • Notification of acceptance: February 3rd, 2025
  • Camera-ready: March 8th, 2025
  • Workshop date: April 1st, 2025

Organization

  • Anthony Ventresque, Trinity College Dublin & Lero, Ireland
  • Nargiz Humbatova, Università della Svizzera italiana, Switzerland

Mutation Testing in the Trenches

The concept of mutation testing was first proposed over 40 years ago. There is theoretical evidence that the technique is superior to structural coverage criteria as a means of assessing and improving the quality of an existing test suite. However, due to factors such as computational cost and the lack of mature tools for many programming languages, mutation testing has not caught on as a standard practice in industry. In recent years, there has been significant progress with regard to cost and tool availability, potentially making the adoption of mutation testing more realistic in practice. We now hypothesize that adoption is instead hindered by a lack of guidance on how to effectively and efficiently utilize mutation testing in a development and testing workflow.

In this talk, I will discuss the results of a two-year collaboration with Zenseact, a developer of autonomous driving and advanced driver assistance systems, to implement mutation testing in their continuous integration pipeline and to explore how mutation testing can be integrated into the testing process. This collaboration has illustrated the potential of mutation testing in industry and yielded a number of findings related to the technical integration of mutation testing, how and when it should be applied, how its results should be presented, and how those results should be acted upon. However, many research challenges remain to be solved before mutation testing becomes "standard" practice.

Gregory Gay is an Associate Professor in the Interaction Design and Software Engineering division of the Department of Computer Science and Engineering at Chalmers University of Technology and the University of Gothenburg. His research interests include software testing and analysis, AI- and search-based automation of development tasks, and AI engineering, all with the aim of helping developers deliver complex systems in a safe, secure, and efficient manner. His research has been funded by agencies including the National Science Foundation (USA), the Swedish Research Council, and the Wallenberg AI, Autonomous Systems and Software Program (WASP). In addition, he has an extensive history of industrial research collaboration, with partners including Ericsson, Zenseact, Volvo Cars, Rockwell Collins, and NASA. His current service roles include deputy editor-in-chief of the Automated Software Engineering journal, co-chair of the Software Technology research cluster in WASP, and membership of the steering committee of the International Conference on Software Testing, Verification and Validation (ICST). In the past, he has served as program co-chair for ICST and for the Symposium on Search-Based Software Engineering (SSBSE), on the SSBSE steering committee, and in a number of other roles in the software engineering research community.

Mutation Testing at Meta

This talk will cover Meta’s work on the Automated Compliance Hardening (ACH) tool, which uses mutation testing to guide Assured LLM-based Software Engineering. ACH generates relatively few mutants (i.e., simulated faults) compared to traditional mutation testing. Instead, it focuses on generating currently undetected faults that are specific to an issue of concern. From these currently uncaught faults, ACH generates tests that can catch them, thereby “killing” the mutants and consequently hardening the platform against regressions. ACH also deploys an LLM-based equivalent-mutant detection agent that achieves a precision of 0.79 and a recall of 0.47 (rising to 0.95 and 0.96 with simple pre-processing). ACH was used in Messenger and WhatsApp test-a-thons, where engineers accepted 73% of its tests and judged 36% to be relevant. The talk will review Assured LLMSE, LLM-based test generation, and mutation testing work at Meta.

Mark Harman is a Research Scientist at Meta London and a professor at University College London. He joined Meta following the acquisition of his startup Majicke. He has published over 300 papers, with over 45,000 citations and an H-index of 105, making him the most highly cited scientist in the fields of both Software Testing and Program Analysis. His work has been deployed throughout Meta’s platforms for the past eight years, directly impacting over 3 billion people who rely on its products for social networking, community building, and communication. It has also directly impacted more than 200 million small companies that use Instagram, Facebook, and WhatsApp to reach their customers, and indirectly impacted many others that have deployed technology based on it, such as Microsoft, Google, and Amazon. For his scientific work, Harman received the IEEE Harlan Mills Award and the ACM Outstanding Research Award in 2019. In 2020, he was elected a Fellow of the Royal Academy of Engineering.

Future Directions for Mutation Testing

Panel moderator:

  • Anthony Ventresque

Panelists:

  • Lionel Briand
  • Gregory Gay
  • Gunel Jahangirova
  • Mike Papadakis
  • Shin Yoo