SSBSE 2025
Sun 16 Nov 2025, Seoul, South Korea
co-located with ASE 2025

The SSBSE Challenge Track is an exciting opportunity for SBSE researchers to apply tools, techniques, and algorithms to real-world software. Participants can use their expertise to carry out analyses on open source software projects or to directly improve the infrastructure powering research experiments. The principal criterion is to produce interesting results and to apply your expertise to challenge the state of the art and inspire future SBSE research.

Accepted Papers

  • Fuzz Smarter, Not Harder: Towards Greener Fuzzing with GreenAFL

  • GA4GC: Greener Agent for Greener Code via Multi-Objective Configuration Optimization

  • GreenMalloc: Allocator Optimisation for Industrial Workloads

  • HotCat: Green and Effective Feature Selection for HotFix Bug Taxonomy

Call for Solutions

We are excited to announce the Challenge Track for SSBSE 2025, seeking research papers focused on resolving challenge cases using SBSE techniques to produce relevant insights and interesting results. You may choose any of the following challenge cases as the subject of your research for the SSBSE 2025 Challenge Track!

Challenge Cases

SSBSE 2025 Challenge Case: Hot Fixing Benchmark

We invite contributions to the SSBSE 2025 Challenge Track that explore Search-Based Software Engineering (SBSE) techniques in the context of hot fixing: unplanned improvements to specific time-critical issues deployed to a software system in production. Hot fixes represent one of the most urgent and high-stakes software maintenance scenarios, where failures are already impacting users and organizations cannot afford prolonged patching cycles. These scenarios introduce unique technical and engineering challenges: incomplete testing, limited debugging time, and high risk of regression. Despite their importance, hot fixes remain underexplored in the SBSE community.

The Challenge

We provide a curated benchmark of hot fixes, built as an extension of the widely used Bugs.jar dataset. The benchmark spans 10 Apache projects and covers 88 unique hot fixes. Each hot fix includes:

  • A time-critical Java bug that was addressed with a hot fix

  • A test suite that replicates the bug

  • The developer-written patch for the bug

  • The associated Jira issue for the bug

  • The failing test results for the bug

Participants may experiment with all of the bugs or any subset of their choice.
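To make the entry structure concrete, here is a minimal sketch of how a participant might model one benchmark entry in code. The class and field names below are illustrative assumptions for this sketch, not the actual layout of the benchmark repository.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class HotFixEntry:
    """One hot fix from the benchmark (all field names are illustrative)."""
    project: str              # one of the 10 Apache projects
    bug_id: str               # identifier of the time-critical Java bug
    jira_issue: str           # key of the associated Jira issue
    buggy_checkout: Path      # source tree containing the bug
    test_suite: Path          # test suite that replicates the bug
    developer_patch: Path     # developer-written hot fix
    failing_tests: list[str]  # tests that fail on the buggy version

def select(entries: list[HotFixEntry], projects: set[str]) -> list[HotFixEntry]:
    """Participants may restrict experiments to any subset of hot fixes."""
    return [e for e in entries if e.project in projects]
```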

What You Can Do

We invite creative SBSE-based solutions, including (but not limited to):

  • Fault localization: Use SBSE techniques to identify the faulty code regions that the hot fix addresses.

  • Test suite improvement: Apply SBSE to strengthen the test suite so that time-critical bugs may be caught before deployment.

  • Repair and patching: Explore SBSE-driven automated program repair techniques.

  • Test minimization: Use SBSE to minimize verification effort (e.g., reduce test suite size while maintaining bug detection); see the sketch after this list.

  • Predictive analysis: SBSE-based prediction of future hot fix needs or fault-prone areas.

  • Documentation support: Use SBSE to generate summaries or update developer documentation surrounding hot fixes.

  • Test regeneration: Remove the provided bug-replicating tests and use generalizable SBSE techniques to synthesize new ones.

We also encourage approaches that combine SBSE with Large Language Models (LLMs) to enhance effectiveness across any of these tasks.
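As one illustration of the test minimization task above, the sketch below applies a basic genetic algorithm to search for a smaller test suite that still detects every known bug. The detection matrix and all function names are hypothetical placeholders rather than part of the benchmark; any off-the-shelf SBSE framework could play the same role.

```python
import random

def fitness(subset, detects):
    """Score a candidate suite: must catch every bug, smaller is better.

    `detects[t][b]` is True if test t catches bug b; in practice this
    matrix would be derived from the benchmark's failing test results.
    """
    num_tests, num_bugs = len(detects), len(detects[0])
    caught = sum(
        any(subset[t] and detects[t][b] for t in range(num_tests))
        for b in range(num_bugs)
    )
    if caught < num_bugs:
        return -1.0                      # losing bug detection is unacceptable
    return 1.0 / (1 + sum(subset))       # reward smaller suites

def minimize_suite(detects, pop_size=40, generations=100):
    """A basic genetic algorithm over bit-vectors of included tests."""
    n = len(detects)
    pop = [[random.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, detects), reverse=True)
        survivors = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n)                  # single-bit mutation
            child[i] = not child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda s: fitness(s, detects))
```

On a toy detection matrix this tends to converge quickly to a minimal covering subset; on the benchmark, the matrix would come from running the provided test suites against each bug.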

Please find the benchmark on GitHub.

SSBSE 2025 Challenge Case: Green SBSE

Search-Based Software Engineering (SBSE) techniques are increasingly powerful yet also increasingly computationally expensive. As large-scale test generation, fault localization, and program repair methods become more sophisticated, their energy consumption and carbon footprint also grow, raising concerns about their sustainability and environmental impact.

What You Can Do

We invite creative SBSE-based solutions, including (but not limited to):

  • Novel green SBSE algorithms or frameworks.

  • Empirical studies comparing energy footprints of SBSE techniques.

  • Tooling or infrastructure to support sustainable experimentation.

  • Adaptive termination of SBSE algorithms, e.g., by integrating carbon footprint data into the search process (a minimal sketch appears below).

  • LLM+SBSE tradeoffs: Evaluate the energy impact of combining SBSE with large language models (LLMs), and explore greener alternatives.

  • Any contribution that advances the goal of lowering the environmental cost of search-based software engineering.

The goal of this challenge is to explore how we can make SBSE greener by reducing resource consumption without sacrificing effectiveness.
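As a concrete starting point for the adaptive termination idea above, the sketch below threads a carbon budget through a generic search loop. The `evolve` and `measure_joules` hooks, the budget, and the grid carbon intensity are all assumptions for illustration (energy could be measured, e.g., via RAPL counters or a software estimator).

```python
# Assumed constants: a carbon budget and an average grid carbon intensity.
CARBON_BUDGET_G = 50.0               # grams of CO2-equivalent we allow
CARBON_INTENSITY_G_PER_KWH = 400.0   # gCO2e per kWh (location dependent)

def joules_to_gco2(joules: float) -> float:
    """Convert measured energy to an estimated carbon footprint."""
    return (joules / 3.6e6) * CARBON_INTENSITY_G_PER_KWH  # 1 kWh = 3.6 MJ

def search_with_carbon_budget(population, evolve, measure_joules):
    """Advance the search generation by generation until the budget is spent.

    `evolve` and `measure_joules` are hypothetical hooks: the former runs
    one generation of any SBSE algorithm, the latter reports cumulative
    energy consumption of the experiment so far.
    """
    while joules_to_gco2(measure_joules()) < CARBON_BUDGET_G:
        population = evolve(population)
    return population
```

A natural refinement is to make the budget part of the fitness function itself, trading off solution quality against the carbon spent to reach it.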

SSBSE 2025 Challenge Case: Tools, Benchmarks, and Technology Transfer

One of the long-standing goals of Search-Based Software Engineering (SBSE) is to make a tangible impact in real-world software development environments. This challenge focuses on bridging the gap between research and practice with an emphasis on tools, benchmarks, and case studies that demonstrate technology transfer into industrial or production settings.

What You Can Do

We invite creative SBSE-based solutions, including (but not limited to):

  • Case studies or experience reports involving SBSE techniques applied in collaboration with industry partners.

  • Open-source tools or frameworks that support reproducible and scalable SBSE research.

  • Challenges or lessons learned in deploying SBSE tools in real-world pipelines.

  • Comparative evaluations showing how SBSE performs on practical tasks in production environments.

  • Technology transfer stories: Successes, failures, and what it takes to make SBSE adoption work in practice.

We particularly encourage:

  • Solutions that demonstrate reusability, extensibility, or integration potential with existing development tools.

  • Collaborations with industrial or open-source maintainers that lead to measurable improvements in practice.

This challenge is ideal for showcasing the real-world relevance and transferability of SBSE techniques.

Submitting to the Challenge Track

A challenge-track participant should:

  • Perform original SBSE research using or enhancing the challenge programs and/or their artefacts.

  • Report the findings in a six-page paper using the regular symposium format. Note that these findings must not have been previously published in any peer-reviewed venue.

  • Submit the challenge-track report by the deadline.

  • Present the findings at SSBSE 2025 if the submission is accepted.

It is not mandatory for submissions to the SSBSE Challenge track to implement a new tool, technique, or algorithm. However, we do expect that applying your existing or new tools/techniques/algorithms to the challenge programs will lead to relevant insights and interesting results.

Acceptance Criteria

The criteria for paper acceptance are the following:

  • Application of an SBSE technique to analyse or enhance a challenge program and/or its accompanying artefacts, or any interesting finding on applying the selected topics to SBSE problems (and vice versa).

  • Technical soundness.

  • Readability and presentation.

Submission Details

Submissions must be at most six pages long, in PDF format, and should conform at the time of submission to the LNCS format and the general submission guidelines provided in the “Format and submission” section of the Research Track.

Submissions must not have been previously published or be under consideration for any journal, book, or other conference. Please submit your challenge paper before the Challenge Solution deadline. At least one author of each paper is expected to register for SSBSE 2025. In-person presentations are desirable, but online presentations may be allowed subject to circumstances. Papers for the Challenge Solution track are also required to follow double-anonymous restrictions. All accepted contributions will be published in the conference proceedings.

Papers can be submitted through the following link: https://ssbse2025.hotcrp.com/

Challenge Track Collaborative Jam Sessions

The SSBSE Challenge track is a good opportunity for new researchers to join the SBSE community, develop a taste for the field, and gain practical expertise in it. It also allows researchers to apply techniques and tools to real-world software and to discover novel practical (or even theoretical) challenges for future work.

The CREST Centre at UCL is a long-standing contributor of accepted papers to the Challenge Track. Their sustained success can be attributed in part to the organisation of a Jam Session, held as part of the CREST Open Workshops (COW), in preparation for the Challenge Track submission deadline. This year’s edition of the CREST Open Workshop Collaborative Jam Session for the SSBSE Challenge Track will run from September 1st to 2nd, 2025, at King’s College London.

This Jam Session runs over two consecutive days and is open to the public. The organisers of the session at CREST kindly agreed to share their methodology with the goal of motivating other research groups to replicate their efforts in producing successful Challenge Track submissions:

  • The organiser of the session gives an overview of the Challenge Track call (e.g., how challenge-track papers differ from technical research papers, the subject systems, prizes, format, and deadline).

  • The organiser leads a technical discussion on the Challenge Track’s proposed systems, with emphasis on their amenability to SBSE techniques and tools.

  • Attendees brainstorm and propose ideas (potential Challenge Track submissions).

  • Ideas are discussed and refined collectively. Attendees sign up for the ones they find most interesting and feasible. A team is formed for each of the most promising ideas; the person who proposed the idea becomes the team leader.

  • Attendees break out into teams to turn the selected ideas into projects and work on them throughout the first day.

At the end of the first day, the audience reconvenes; each team reports on their progress, proposes a plan for the second day, and collects feedback. Teams continue to work on their projects during the second day. Each team presents the status of their project at the end of the second day. Projects deemed infeasible are abandoned, and team members may join other teams.

At the end of the two-day Jam Session, each team leader takes responsibility for ensuring that the project results in a submission to the SSBSE Challenge Track.

Sign up here if you are able to join!

Further Information

If you have any questions about the challenge, please email the Challenge Track chairs.