The SSBSE Challenge Track is an exciting opportunity for SBSE researchers to apply tools, techniques, and algorithms to real-world software. Participants can use their expertise to carry out analyses of open source software projects or to directly improve the infrastructure powering research experiments. The principal criterion is that submissions produce interesting results, challenge the state of the art, and inspire future SBSE research.
Call for Solutions
We are excited to announce the Challenge Track for SSBSE 2024, seeking research papers focused on resolving challenge cases using SBSE techniques to produce relevant insights and interesting results. You may choose either of the following challenge cases as the subject of your research for the SSBSE 2024 Challenge Track!
Cash Prize
All accepted submissions will compete for a cash prize of USD 500. The winning paper will be selected based on reviewer feedback and will be announced in the final session of SSBSE 2024.
Challenge Cases
Quantum Computing
Quantum computers and quantum programs are rising technologies in software engineering, where researchers have sought to improve software quality and address the testing limitations that these technologies pose. In this open-ended challenge, SBSE research should be carried out on quantum programs, quantum computers, or quantum simulators. Beyond this, there are no rules on what may be attempted. Here are some suggestions:
- Search-Based Exploration of Quantum Programs: Conduct experiments using search algorithms to explore the design space of quantum programs and find optimal quantum circuits for specific behaviours (see the sketch after this list).
- Automated Testing: Use search-based techniques to automatically generate test cases that stress-test quantum programs, computers, or simulators.
- Automated Bug Fixing: Fix bugs in quantum computers, programs, and simulators using SBSE techniques.
- Co-optimization of Software and Hardware: Use SBSE-based co-optimization techniques to modify both the simulator and the software it is intended to run.
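To make the first suggestion concrete, the sketch below shows one minimal, illustrative way a search algorithm could explore a quantum program: a (1+1) hill climber over the rotation angles of a tiny single-qubit circuit, simulated directly with NumPy rather than a real quantum stack. The target state, gate set, and search settings are assumptions chosen purely for illustration, not a prescribed approach.

```python
# Minimal illustrative sketch (not a prescribed approach): a (1+1) hill
# climber searches the rotation angles of a two-gate single-qubit circuit
# (RY followed by RZ), simulated with NumPy, so that the prepared state
# approximates an assumed target state.
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([1, 1j]) / np.sqrt(2)  # assumed target: |+i> (up to global phase)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(phi):
    return np.array([[np.exp(-1j * phi / 2), 0], [0, np.exp(1j * phi / 2)]])

def fitness(angles):
    # Fidelity |<target|psi>|^2 of the state the circuit prepares from |0>.
    psi = rz(angles[1]) @ ry(angles[0]) @ np.array([1, 0], dtype=complex)
    return abs(np.vdot(TARGET, psi)) ** 2

best = rng.uniform(0, 2 * np.pi, size=2)
best_fit = fitness(best)
for _ in range(2000):
    candidate = best + rng.normal(scale=0.1, size=2)  # small mutation of both angles
    candidate_fit = fitness(candidate)
    if candidate_fit >= best_fit:                     # accept non-worsening moves
        best, best_fit = candidate, candidate_fit

print(f"best angles = {best}, fidelity = {best_fit:.4f}")
```

Actual submissions would of course go well beyond this scale, for example by targeting multi-qubit circuits, real quantum simulators or hardware, and richer objectives such as noise robustness.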
Generative AI and SBSE
This challenge focuses on SBSE and Generative AI (GenAI). You are invited to explore these two paradigms, the potential synergies between them, and the ways they can enhance the software engineering domain. We welcome submissions that cover but are not limited to the following topics:
- Applications of Generative AI (GenAI) in Search-Based Software Engineering (SBSE): Potential research opportunities could include novel integration strategies for GenAI in SBSE processes, examining the effect of such applications on the quality and efficiency of software development tasks, or exploring how GenAI can assist in search-based problem-solving for software engineering tasks.
- Search-based optimisation techniques for improving GenAI’s efficiency in software engineering tasks: This could involve investigations into techniques to improve the performance of GenAI in software engineering contexts, optimising prompts to enhance GenAI’s responses, or tailoring GenAI’s responses to better suit specific software engineering tasks (see the sketch after this list).
- Evaluation and benchmarking of GenAI for SBSE tasks, including LLM output evaluation: There is a need for new studies to evaluate and benchmark the effectiveness of GenAI in SBSE, including research into methodologies for objectively evaluating the output of these models. This could include the development of new metrics or the application of existing ones in innovative ways.
- Search-based techniques for the potential use of SBSE in training and/or fine-tuning GenAI: This could involve research into how search-based techniques can be utilised in the training/fine-tuning process of GenAI, including the exploration of novel fine-tuning methods, or how these techniques can assist in discovering optimal processes for training/fine-tuning GenAI.
- Search-based optimisation techniques applied to tools created with/for GenAI: This research could explore how search-based optimisation techniques can be used to enhance the performance and usability of tools designed to work with or for GenAI. This might involve the evaluation and optimisation of specific tools.
- Practical experiences, case studies, and industrial perspectives related to the use of GenAI in conjunction with SBSE: The focus could be on empirical studies that document the practical use of GenAI and SBSE in real-world software engineering. This could include case studies of specific projects or surveys of industry practices, potentially highlighting successful applications, limitations, and future opportunities.
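As an illustration of the prompt-optimisation topic above, the sketch below applies a simple (1+1) evolutionary search over prompt fragments. The query_llm function is a hypothetical placeholder for a call to any real model, and the fitness function is a toy scorer; both are assumptions for illustration only.

```python
# Minimal illustrative sketch: a (1+1) evolutionary search over prompt
# fragments. query_llm() is a hypothetical placeholder for a request to a
# real GenAI model, and fitness() is a toy scorer; replace both with your own.
import random

random.seed(1)

FRAGMENTS = [
    "You are an expert software tester.",
    "Write a unit test for the following function.",
    "Respond with code only, no explanation.",
    "Cover boundary and error cases.",
    "Use the pytest framework.",
]

def query_llm(prompt: str) -> str:
    # Placeholder: stands in for a call to a real model/provider.
    return "def test_example():\n    assert True"

def fitness(prompt: str) -> float:
    response = query_llm(prompt)
    looks_like_test = response.strip().startswith("def test_")
    return (1.0 if looks_like_test else 0.0) - 0.001 * len(prompt)  # prefer short prompts

def mutate(parts: list[str]) -> list[str]:
    child = parts.copy()
    if random.random() < 0.5 and len(child) > 1:
        child.pop(random.randrange(len(child)))                    # drop a fragment
    else:
        child.insert(random.randrange(len(child) + 1), random.choice(FRAGMENTS))
    return child

best = random.sample(FRAGMENTS, k=3)          # random initial prompt
best_fit = fitness(" ".join(best))
for _ in range(50):
    candidate = mutate(best)
    candidate_fit = fitness(" ".join(candidate))
    if candidate_fit >= best_fit:             # accept non-worsening moves
        best, best_fit = candidate, candidate_fit

print("best prompt:", " | ".join(best), f"(fitness {best_fit:.3f})")
```

In a real study, the fitness would be driven by the model’s responses on an actual software engineering task, for example the compilation success, coverage, or mutation score of generated tests.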
Submitting to the Challenge Track
A challenge-track participant should:
- Perform original SBSE research using or enhancing the challenge programs and/or their artefacts.
- Report the findings in a six-page paper using the regular symposium format. Note that these findings must not have been previously published in any peer-reviewed venue.
- Submit the challenge-track report by the deadline.
- Present the findings at SSBSE 2024 if the submission is accepted.
It is not mandatory for submissions to the SSBSE Challenge track to implement a new tool, technique, or algorithm. However, we do expect that applying your existing or new tools/techniques/algorithms to the challenge programs will lead to relevant insights and interesting results.
Acceptance Criteria
The criteria for paper acceptance are the following:
- Application of an SBSE technique to analyse or enhance a challenge case and/or its accompanying artefacts, or any other interesting finding on the application of GenAI or quantum computing to SBSE problems (and vice versa).
- Technical soundness.
- Readability and presentation.
Submission details
Submissions must be at most six pages long, in PDF format, and must conform at the time of submission to the SSBSE/Springer LNCS format and the general submission guidelines provided in the “Format and submission” section of the Research Track.
Submissions must not have been previously published or be under consideration for any journal, book, or other conference. Please submit your challenge paper before the Challenge Solution deadline. At least one author of each paper is expected to register for SSBSE 2024. In-person presentations are preferred, but online presentations may be allowed depending on circumstances. Papers submitted to the Challenge Solution track are also required to follow the double-anonymous restrictions. All accepted contributions will be published in the conference proceedings.
Papers can be submitted by the submission deadline through the following link on EasyChair:
https://easychair.org/conferences/?conf=ssbsec24
Challenge Track Collaborative Jam Sessions
The SSBSE Challenge track is a good opportunity for new researchers to join the SBSE community, develop a taste for the field, and gain practical expertise in it. It also allows researchers to apply techniques and tools to real-world software and discover novel practical (or even theoretical) challenges for future work.
The CREST Centre at UCL is a long-standing contributor of accepted papers to the Challenge Track. Their sustained success can be attributed in part to the organisation of a Jam Session, held as part of the CREST Open Workshops (COW), in preparation for the Challenge Track submission deadline. This year’s CREST Open Workshop Collaborative Jam Session for the SSBSE Challenge Track will run on March 18th and 19th at King’s College London and is open to the public (see this year’s edition here).
The Jam Session runs over two consecutive days. The organisers of the session at CREST kindly agreed to share their methodology, with the goal of motivating other research groups to replicate their efforts in producing successful Challenge Track submissions:
- The organiser of the session overviews the Challenge Track call (e.g., describing how challenge track papers differ from technical research papers, subject systems, prizes, format, and deadline).
- The organiser leads a technical discussion on the Challenge Track’s proposed systems, with emphasis on their amenability to SBSE techniques and tools.
- Attendees brainstorm and propose ideas (potential Challenge Track submissions).
- Ideas are discussed and refined collectively. Attendees sign up for the ones they find most interesting and feasible. A team is formed for each of the most promising ideas; the person who proposed the idea becomes the team leader.
- Attendees break out into teams to turn the selected ideas into projects and work on them throughout the first day.
At the end of the first day, the audience reconvenes; each team reports on their progress, proposes a plan for the second day, and collects feedback. Teams continue to work on their projects during the second day. Each team presents the status of their project at the end of the second day. Projects deemed infeasible are abandoned, and team members may join other teams.
At the end of the two-day Jam Session, the team leader leads the effort to ensure that the project results in a submission to the SSBSE Challenge Track.
Further Information
If you have any questions about the challenge, please email the Challenge Track chairs.
Mon 15 Jul (displayed time zone: Brasilia, Distrito Federal, Brazil)

11:00 - 12:30 (Research Papers)
- 11:00, 30 min, research paper: Evolutionary Analysis of Alloy Specifications with an Adaptive Fitness Function. Jianghao Wang (University of Nebraska-Lincoln), Clay Stevens (University of Nebraska-Lincoln), Brooke Kidmose, Myra Cohen (Iowa State University), Hamid Bagheri (University of Nebraska-Lincoln)
- 11:30, 30 min, research paper: Higher Fault Detection Through Novel Density Estimators in Unit Test Generation. Annibale Panichella (Delft University of Technology), Mitchell Olsthoorn (Delft University of Technology)
- 12:00, 30 min, research paper: Many Independent Objective Estimation of Distribution Search for Android Testing

12:30 - 14:00 (FSE Social Events)
- 12:30, 90 min: Lunch

14:00 - 15:30 (SSBSE Challenge)
- 14:00, 22 min, short paper: Iterative Refactoring of Real-World Open-Source Programs with Large Language Models. Jinsu Choi, Gabin An (Korea Advanced Institute of Science and Technology), Shin Yoo (Korea Advanced Institute of Science and Technology)
- 14:22, 22 min, short paper: Approximating Stochastic Quantum Noise through Genetic Programming. Asmar Muqeet (Simula Research Laboratory and University of Oslo), Shaukat Ali (Simula Research Laboratory and Oslo Metropolitan University), Paolo Arcaini (National Institute of Informatics)
- 14:45, 22 min, short paper: Fuzzing-Based Differential Testing For Quantum Simulators. Daniel Blackwell (University College London), Justyna Petke (University College London), Yazhuo Cao, Avner Bensoussan
- 15:07, 22 min, short paper: GreenStableYolo: Optimizing Inference Time and Image Quality of Text-to-Image Generation. Jingzhi Gong (Loughborough University), Sisi Li, Giordano d'Aloisio (University of L'Aquila), Zishuo Ding (The Hong Kong University of Science and Technology (Guangzhou)), Yulong Ye (University of Birmingham), William B. Langdon (University College London), Federica Sarro (University College London)

15:30 - 16:00 (FSE Social Events)
- 15:30, 30 min: Coffee break