The SSBSE Challenge Track is an exciting opportunity for SBSE researchers to apply tools, techniques, and algorithms to real-world software. Participants can use their expertise to carry out analyses on open source software projects or to directly improve the infrastructure powering research experiments. The principal criterion is to produce interesting results and to apply your expertise to challenge the state of the art and inspire future SBSE research.
Call for Challenge Solutions
We are excited to announce the Challenge Track for SSBSE 2023, seeking research papers focused on resolving challenge cases using SBSE techniques to produce relevant insights and interesting results.
You may choose either of the following challenge cases as the subject of your research for the SSBSE 2023 Challenge Track!
The gem5 architecture simulator, https://www.gem5.org, is a modular platform for computer-system architecture research. Written in C++ and Python, gem5 is a large, complex tool. It is used in both academia and industry to simulate computer systems at varying levels of fidelity.
In this open-ended challenge, SBSE research should be carried out on the gem5 codebase or simulation inputs. Beyond this, there are no rules on what may be attempted. Here are some suggestions:
- Search-Based Exploration of Design Space: Conduct experiments using search algorithms to explore the design space of gem5 architecture simulations and find optimal computer architectural setups for specific environments and inputs.
- Automated Testing: Use search-based techniques to automatically generate test cases that stress-test gem5 and/or detect bugs.
- Automated Bug Fixing: Fix bugs in gem5 using SBSE techniques. Bugs reported by users can be found at https://gem5.atlassian.net.
- Automatic Parallelization: gem5 is single-threaded. Use search-based approaches to enable multithreading.
- Co-optimization of Software and Hardware: Use SBSE-based co-optimization techniques to modify both the gem5 simulated system and the software it is intended to run.
- Source-Code Evolution: The gem5 git repository contains over 20,000 commits with detailed logs going back to 2003. Use this data to study how humans navigate the space of software improvement.
Remember, these suggestions are merely meant to inspire. You can tailor them to fit your interests and objectives, or ignore them entirely.
Resources on building, using, and developing gem5 can generally be found here: https://www.gem5.org. For this challenge, we are also providing a repository which comes packaged with some premade examples for optimization, found here: https://github.com/BobbyRBruce/gem5-ssbse-challenge-2023. Please consult the “README.md” for some helpful pointers on getting started.
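As a taste of what a search-based design-space exploration might look like, the sketch below runs a simple hill-climbing search over a toy parameter space. Everything here is illustrative: the parameter names are invented stand-ins rather than gem5's actual configuration options, and a real experiment would replace the surrogate `evaluate` function with an actual gem5 run (launching a simulation and parsing its stats output).

```python
import random

# Illustrative design space; these parameter names are stand-ins,
# not gem5's actual configuration options.
DESIGN_SPACE = {
    "l1_size_kb": [16, 32, 64],
    "l2_size_kb": [256, 512, 1024],
    "cpu_clock_ghz": [1.0, 2.0, 3.0],
}

def evaluate(config):
    """Toy surrogate fitness. A real experiment would launch a gem5
    simulation with this configuration and parse the resulting stats
    (e.g., instructions per cycle or simulated seconds)."""
    return (config["l1_size_kb"]
            + config["l2_size_kb"] / 8
            + 100 * config["cpu_clock_ghz"]
            - 0.05 * config["l1_size_kb"] * config["cpu_clock_ghz"])

def hill_climb(iterations=100, seed=0):
    """Hill climbing that mutates one randomly chosen parameter per step."""
    rng = random.Random(seed)
    best = {k: rng.choice(v) for k, v in DESIGN_SPACE.items()}
    best_score = evaluate(best)
    for _ in range(iterations):
        candidate = dict(best)
        key = rng.choice(sorted(DESIGN_SPACE))
        candidate[key] = rng.choice(DESIGN_SPACE[key])
        score = evaluate(candidate)
        if score > best_score:  # keep the mutation only if it improves
            best, best_score = candidate, score
    return best, best_score
```

Because each gem5 run is expensive, a real study would likely cache evaluations or use a population-based algorithm with parallel simulations rather than this sequential climber.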
This challenge focuses on SBSE and Large Language Models (LLMs). You are invited to explore these two paradigms, the potential synergies between them, and the ways they can enhance the software engineering domain. We welcome submissions that cover but are not limited to the following topics:
- Applications of Large Language Models (LLMs) in Search-Based Software Engineering (SBSE): Potential research opportunities could include novel integration strategies for LLMs in the SBSE processes, examining the effect of such applications on the quality and efficiency of software development tasks, or exploring how LLMs can assist in search-based problem-solving for software engineering tasks.
- Search-based optimisation techniques for improving LLMs' efficiency in software engineering tasks: This could involve investigations into techniques to improve the performance of LLMs in software engineering contexts, optimising prompts to elicit better responses, or tailoring LLMs' outputs to better suit specific software engineering tasks.
- Evaluation and benchmarking of LLMs for SBSE tasks, including LLM output evaluation: There’s a need for new studies to evaluate and benchmark the effectiveness of LLMs in SBSE, including research into methodologies for objectively evaluating the output of these models. This could include the development of new metrics or the application of existing ones in innovative ways.
- Search-based techniques for the potential use of SBSE in training and/or fine-tuning LLMs: This could involve research into how search-based techniques can be utilized in the training/fine-tuning process of LLMs, including the exploration of novel fine-tuning methods, or how these techniques can assist in discovering optimal processes for training/fine-tuning LLMs.
- Search-based optimisation techniques applied on tools created with/for LLMs: This research could explore how search-based optimisation techniques can be used to enhance the performance and usability of tools designed to work with or for LLMs. This might involve the evaluation and optimisation of specific tools like those available at Stability AI, or the development of new tools and techniques.
- Practical experiences, case studies, and industrial perspectives related to the use of LLMs in conjunction with SBSE: The focus could be on empirical studies that document the practical use of LLMs and SBSE in real-world software engineering. This could include case studies of specific projects or surveys of industry practices, potentially highlighting successful applications, limitations, and future opportunities.
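To make the prompt-optimisation idea above concrete, here is a minimal, self-contained sketch of an elitist evolutionary search over prompt "genomes". All component names and the scoring function are hypothetical: a real study would replace `score_prompt` with an actual LLM evaluation, e.g., the fraction of benchmark tasks the model solves when given the rendered prompt.

```python
import random

# Hypothetical prompt components; purely illustrative.
INSTRUCTIONS = [
    "Fix the bug in this function.",
    "Repair the following code.",
    "Rewrite this function so all tests pass.",
]
STYLES = [
    "Respond with code only.",
    "Explain your reasoning, then give the code.",
    "",
]
EXAMPLE_COUNTS = [0, 1, 3]
GENES = [INSTRUCTIONS, STYLES, EXAMPLE_COUNTS]

def render(genome):
    """Turn a genome (instruction, style, example count) into a prompt."""
    instruction, style, n_examples = genome
    parts = [instruction, style] + [f"Example {i}: ..." for i in range(n_examples)]
    return " ".join(p for p in parts if p)

def score_prompt(prompt):
    """Stand-in for a real evaluation, which would query an LLM on a
    benchmark and measure task success. This toy version rewards
    mentioning code and including examples, and penalises length."""
    score = 1.0 if "code" in prompt.lower() else 0.0
    score += 0.2 * prompt.count("Example")
    score -= 0.001 * len(prompt)
    return score

def mutate(genome, rng):
    """Replace one randomly chosen gene with a fresh value."""
    child = list(genome)
    i = rng.randrange(len(GENES))
    child[i] = rng.choice(GENES[i])
    return tuple(child)

def search(generations=30, pop_size=8, seed=1):
    """Elitist (mu + lambda)-style evolutionary search over prompt genomes."""
    rng = random.Random(seed)
    population = [tuple(rng.choice(g) for g in GENES) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: score_prompt(render(g)), reverse=True)
        survivors = population[: pop_size // 2]  # elitism: best half survive
        offspring = [mutate(rng.choice(survivors), rng)
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=lambda g: score_prompt(render(g)))
```

The same skeleton extends naturally to richer genomes (few-shot example selection, system-message wording, decoding parameters) once `score_prompt` is backed by real model calls.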
A challenge-track participant should:
- Perform original SBSE research using or enhancing the challenge programs and/or their artefacts.
- Report the findings in a six-page paper using the regular symposium format. Note that these findings must not have been previously published in any peer-reviewed venue.
- Submit the challenge-track report by the deadline.
- Present the findings at SSBSE 2023 if the submission is accepted.
It is not mandatory for submissions to the SSBSE Challenge track to implement a new tool, technique, or algorithm. However, we do expect that applying your existing or new tools/techniques/algorithms to the challenge programs will lead to relevant insights and interesting results.
The criteria for paper acceptance are the following:
- Application of an SBSE technique to analyze or enhance a challenge program and/or its accompanying artefacts, or an interesting finding on the application of LLMs to SBSE problems (or vice versa).
- Technical soundness.
- Readability and presentation.
All accepted submissions will compete for a cash prize of US$500. The winning paper will be selected by the co-chairs, based on the reviews of the submitted papers, and will be announced at SSBSE 2023.
Submissions must be at most six pages long, in PDF format, and should conform at the time of submission to the SSBSE/Springer LNCS format and general submission guidelines provided in the "Format and submission" section of the Research Track.
Submissions must not have been previously published or be in consideration for any journal, book, or other conference. Please submit your challenge paper to HotCRP before the Challenge Solution deadline. At least one author of each paper is expected to register for SSBSE 2023. In-person presentations are desirable, but online presentations may be allowed subject to circumstances. Papers for the Challenge Solution track are also required to follow double-anonymous restrictions. All accepted contributions will be published in the conference proceedings.
Submissions can be made via HotCRP (https://ssbse23challenge.hotcrp.com) by the submission deadline.
The SSBSE Challenge track is a good opportunity for new researchers to join the SBSE community, develop a taste for the field, and gain practical expertise. It also allows researchers to apply techniques and tools to real-world software and discover novel practical (or even theoretical) challenges for future work.
The CREST Centre at UCL is a long-standing contributor of accepted papers to the Challenge Track. Their sustained success can be attributed in part to the organisation of a Jam Session, held as part of the CREST Open Workshops (COW), in preparation for the Challenge Track submission deadline. This year's edition of the CREST Open Workshop Collaborative Jam Session for the SSBSE Challenge Track will run from August 3rd to 4th and is open to the public.
This Jam Session runs over two consecutive days. The organisers of the session at CREST kindly agreed to share their methodology, with the goal of motivating other research groups to replicate their efforts in producing successful Challenge Track submissions:
- The organiser of the session overviews the Challenge Track call (e.g., describing how challenge track papers differ from technical research papers, subject systems, prizes, format and deadline).
- The organiser leads a technical discussion on the Challenge Track’s proposed systems, with emphasis on their amenability to SBSE techniques and tools.
- Attendees brainstorm and propose ideas (potential Challenge Track submissions).
- Ideas are discussed and refined collectively. Attendees sign up for the ones they find most interesting and feasible. A team is formed for each of the most promising ideas; the person who proposed the idea becomes the team leader.
- Attendees break out into teams to turn the selected ideas into projects, working on them throughout the first day.
- At the end of the first day, the audience reconvenes; each team reports on their progress, proposes a plan for the second day, and collects feedback.
- Teams continue to work on their projects during the second day. Each team presents the status of their project at the end of the second day. Projects deemed infeasible are abandoned, and team members may join other teams.
- At the end of the two-day Jam Session, the team leader is in charge of leading the effort to ensure the project results in a submission to the SSBSE Challenge Track.
If you have any questions about the challenge, please email the Challenge Track chairs.