ICTSS 2024
Wed 30 October - Fri 1 November 2024 London, United Kingdom

Challenge Track

The ICTSS Challenge Track aims to continue the spirit of the SSBSE Challenge Track 2024 and extend the idea to general testing and validation.

As at SSBSE 2024, participants can use their expertise to carry out analyses of open-source software projects or to directly improve the infrastructure powering research experiments. The principal criterion is to produce interesting results and to apply your expertise to challenge the state of the art and inspire future Testing and Validation research.

Call for Solutions

We are seeking research papers focused on resolving challenge cases using Testing and Validation techniques to produce relevant insights and interesting results. You may choose either of the following challenge cases as the subject of your research for the ICTSS 2024 Challenge Track!

Challenge Cases

Quantum Computing

Quantum computers and quantum programs are emerging technologies in software engineering, and several authors have sought to improve the quality of quantum software and to address the testing limitations these technologies pose. In this open-ended challenge, Testing and Validation research should be carried out on quantum programs, quantum computers, or quantum simulators. Beyond this, there are no rules on what may be attempted. Here are some suggestions (an illustrative sketch follows the list):

  • Testing and Validation of Quantum Programs: Conduct experiments using testing and validation strategies to explore the design space of quantum programs and find optimal quantum circuits for specific behaviours.

  • Automated Testing and Validation: Develop or use techniques to automatically generate test cases or validation strategies that stress quantum programs, computers or simulators.

  • Automated Bug Fixing: Fix bugs in quantum computers, programs and simulators using testing and validation techniques.

  • Improving Software and Hardware: Use Testing and Validation techniques to modify both the simulator and the software it is intended to run.
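
As a concrete illustration of the kind of experiment a submission might report, the sketch below applies a simple statistical oracle to a small quantum program run on a simulator. It assumes the open-source Qiskit and Qiskit Aer packages; the Bell-state circuit, shot count, and tolerance are illustrative choices only, not requirements of the challenge.

    # Minimal sketch: a statistical sanity check for a Bell-state circuit.
    # Assumes the qiskit and qiskit-aer packages; the circuit, shot count,
    # and tolerance are illustrative choices only.
    from qiskit import QuantumCircuit, transpile
    from qiskit_aer import AerSimulator

    def bell_circuit() -> QuantumCircuit:
        # Prepare the Bell state (|00> + |11>)/sqrt(2) and measure both qubits.
        qc = QuantumCircuit(2)
        qc.h(0)
        qc.cx(0, 1)
        qc.measure_all()
        return qc

    def test_bell_state_statistics(shots: int = 4096, tolerance: float = 0.05) -> None:
        sim = AerSimulator()
        counts = sim.run(transpile(bell_circuit(), sim), shots=shots).result().get_counts()
        # An ideal Bell circuit never yields '01' or '10' and splits '00'/'11' roughly evenly.
        assert counts.get("01", 0) == 0 and counts.get("10", 0) == 0
        assert abs(counts.get("00", 0) / shots - 0.5) < tolerance

    test_bell_state_statistics()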

Testing and Validation of Generative AI

This challenge focuses on Testing and Validation of Generative AI (GenAI). You are invited to explore these two paradigms, the potential synergies between them, and the ways they can enhance the software engineering domain. We welcome submissions that cover but are not limited to the following topics:

  • Applications of Generative AI (GenAI) in Testing and Validation: Research opportunities include novel integration strategies for GenAI in the Testing and Validation processes, studies of the effect of such applications on the quality and efficiency of software development tasks, and explorations of how GenAI can assist in Testing and Validation problem-solving for software engineering tasks.

  • Testing and Validation techniques for improving GenAI’s efficiency in software engineering tasks: This could involve investigations into techniques that improve the performance of GenAI in software engineering contexts, optimising prompts to enhance GenAI’s responses, or tailoring GenAI’s responses to better suit specific software engineering tasks.

  • Evaluation and benchmarking of GenAI for Testing and Validation tasks, including LLM output evaluation: There is a need for new studies that evaluate and benchmark the effectiveness of GenAI in Testing and Validation, including research into methodologies for objectively evaluating the output of these models. This could include the development of new metrics or the application of existing ones in innovative ways (a minimal sketch of such an evaluation harness follows this list).

  • Testing and Validation techniques for their potential use in training and/or fine-tuning GenAI: This could involve research into how Testing and Validation techniques can be utilized in the training/fine-tuning process of GenAI, including the exploration of novel fine-tuning methods, or how these techniques can assist in discovering optimal processes for training/fine-tuning GenAI.

  • Testing and Validation techniques applied on tools created with/for GenAI: This research could explore how Testing and Validation techniques can be used to enhance the performance and usability of tools designed to work with or for GenAI. This might involve the evaluation and optimisation of specific tools.

  • Practical experiences, case studies, and industrial perspectives related to the use of GenAI in conjunction with Testing and Validation methods: The focus could be on empirical studies that document the practical use of GenAI and Testing and Validation in real-world software engineering. This could include case studies of specific projects or surveys of industry practices, potentially highlighting successful applications, limitations, and future opportunities.
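
As a minimal sketch of the evaluation and benchmarking direction above, the code below scores model-generated unit tests by executing them against a known-good reference implementation and reporting the pass rate. The generate_tests function is a hypothetical placeholder for a GenAI call, and the clamp example is illustrative; only the Python standard library is assumed.

    # Minimal sketch: scoring model-generated unit tests by executing them
    # against a known-good reference implementation. `generate_tests` is a
    # hypothetical placeholder for a GenAI call; everything else is stdlib.
    import types
    import unittest

    REFERENCE_SOURCE = """
    def clamp(value, lower, upper):
        return max(lower, min(value, upper))
    """

    def generate_tests(prompt: str) -> str:
        # Hypothetical placeholder: a real study would query a GenAI service here.
        return """
    import unittest
    class TestClamp(unittest.TestCase):
        def test_within_range(self):
            self.assertEqual(clamp(5, 0, 10), 5)
        def test_below_range(self):
            self.assertEqual(clamp(-3, 0, 10), 0)
    """

    def score_generated_tests(reference_source: str, test_source: str) -> float:
        # Execute the reference code and the generated tests in one namespace,
        # then report the fraction of generated tests that pass against it.
        import textwrap
        module = types.ModuleType("candidate")
        exec(textwrap.dedent(reference_source), module.__dict__)
        exec(textwrap.dedent(test_source), module.__dict__)
        suite = unittest.TestLoader().loadTestsFromModule(module)
        result = unittest.TestResult()
        suite.run(result)
        total = result.testsRun or 1
        return (total - len(result.failures) - len(result.errors)) / total

    print(score_generated_tests(REFERENCE_SOURCE, generate_tests("write tests for clamp")))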

Submitting to the Challenge Track

A challenge-track participant should:

  • Perform original Testing and Validation research using or enhancing the challenge programs and/or their artefacts.
  • Report the findings in a six-page paper using the regular symposium format. Note that these findings must not have been previously published in any peer-reviewed venue.
  • Submit the challenge-track report by the deadline.
  • Present the findings at ICTSS 2024 if the submission is accepted.

It is not mandatory for submissions to the ICTSS Challenge track to implement a new tool, technique, or algorithm. However, we do expect that applying your existing or new tools/techniques/algorithms to the challenge programs will lead to relevant insights and interesting results.

Acceptance Criteria

The criteria for paper acceptance are the following:

  • Application of a Testing and Validation technique to analyse or enhance a challenge program and/or its accompanying artefacts, or any interesting finding on the application of GenAI or quantum computing to Testing and Validation problems (and vice versa).
  • Technical soundness.
  • Readability and presentation.

Submission details

Submissions must be at most six pages long, in PDF format, and must conform at the time of submission to the IFIP-ICTSS/Springer LNCS format and to the general submission guidelines provided in the “Format and submission” section of the Research Track.

Submissions must not have been previously published or be under consideration for any journal, book, or other conference. Please submit your challenge paper before the Challenge Solution deadline. At least one author of each paper is expected to register for ICTSS 2024. In-person presentations are preferred, but online presentations may be allowed depending on circumstances. Papers for the Challenge Solution track are also required to follow the double-anonymous review requirements. All accepted contributions will be published in the conference proceedings.

Papers can be submitted through the general link on EasyChair (just select the Challenge Track).
