SSBSE 2021
Mon 11 - Tue 12 October 2021
co-located with ESEM 2021



Mon 11 Oct

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

12:10 - 13:30
SSBSE Session 1: Research Papers / Challenge at SSBSE ROOM
Chair(s): Erik Hemberg Massachusetts Institute of Technology
12:10
30m
Talk
Generating Failing Test Suites for Quantum Programs with Search
Research Papers
Xinyi Wang Nanjing University of Aeronautics and Astronautics, Paolo Arcaini National Institute of Informatics, Tao Yue Nanjing University of Aeronautics and Astronautics, Shaukat Ali Simula Research Laboratory, Norway
Link to publication DOI
12:40
20m
Talk
Searching for Multi-Fault Programs in Defects4J
Challenge
Gabin An KAIST, Juyeon Yoon Korea Advanced Institute of Science and Technology, Shin Yoo KAIST
Link to publication Pre-print Media Attached
13:00
30m
Talk
Search-based Selection and Prioritization of Test Scenarios for Autonomous Driving Systems
Research Papers
Chengjie Lu Nanjing University of Aeronautics and Astronautics, Huihui Zhang Weifang University, Tao Yue Nanjing University of Aeronautics and Astronautics, Shaukat Ali Simula Research Laboratory, Norway
14:30 - 15:40
SSBSE Session 2: Challenge / Research Papers / RENE - Replications and Negative Results at SSBSE ROOM
Chair(s): Wesley Assunção Pontifical Catholic University of Rio de Janeiro (PUC-Rio)
14:30
20m
Talk
Empirical Study of Effectiveness of EvoSuite on SBST 2020 Tool Competition Benchmark
RENE - Replications and Negative Results
Robert Sebastian Herlim KAIST, Shin Hong Handong Global University, Yunho Kim Hanyang University, Moonzoo Kim KAIST and V+Lab
14:50
30m
Talk
Multi-objective Test Case Selection Through Linkage Learning-driven Crossover
Research Papers
Mitchell Olsthoorn Delft University of Technology, Annibale Panichella Delft University of Technology
Link to publication DOI Pre-print
15:20
20m
Talk
Refining Fitness Functions for Search-Based Automated Program Repair: A Case Study with ARJA and ARJA-e
Challenge
Giovani Guizzo University College London, Aymeric Blot University College London, James Callan UCL, Justyna Petke University College London, Federica Sarro University College London
Link to publication Pre-print
16:40 - 18:00
SSBSE Session 3: RENE - Replications and Negative Results / Research Papers at SSBSE ROOM
Chair(s): José Raúl Romero University of Cordoba, Spain
16:40
30m
Talk
Hybrid Multi-level Crossover for Unit Test Case Generation
Research Papers
Mitchell Olsthoorn Delft University of Technology, Pouria Derakhshanfar Delft University of Technology, Annibale Panichella Delft University of Technology
Link to publication DOI Pre-print
17:10
20m
Talk
Improving Android App Responsiveness through Search-Based Frame Rate Reduction
RENE - Replications and Negative Results
James Callan UCL, Justyna Petke University College London
17:30
30m
Talk
Search-based Automated Play Testing of Computer Games: a model-based approach
Research Papers
Raihana Ferdous Fondazione Bruno Kessler, Fitsum Kifetew Fondazione Bruno Kessler, Davide Prandi Fondazione Bruno Kessler, Wishnu Prasetya Utrecht University, Samira Shirzadehhajimahmood Utrecht University, Angelo Susi Fondazione Bruno Kessler
Pre-print

Tue 12 Oct


11:30 - 12:30
SSBSE Session 4: Research Papers at SSBSE ROOM
Chair(s): Giovani Guizzo University College London
11:30
30m
Talk
Preliminary Evaluation of SWAY in Permutation Decision Space via a Novel Euclidean Embedding
Research Papers
Junghyun Lee Korea Advanced Institute of Science and Technology (KAIST), Chani Jung Korea Advanced Institute of Science and Technology, Yoo Hwa Park Korea Advanced Institute of Science and Technology, Dongmin Lee Korea Advanced Institute of Science and Technology, Juyeon Yoon Korea Advanced Institute of Science and Technology, Shin Yoo KAIST
Link to publication
12:00
30m
Talk
Enhancing Resource-based Test Case Generation For RESTful APIs with SQL Handling
Research Papers
Man Zhang Kristiania University College, Norway, Andrea Arcuri Kristiania University College and Oslo Metropolitan University
Pre-print


Call for Challenge Solutions for SSBSE 2021

The SSBSE Challenge Track is an exciting opportunity for SBSE researchers to apply tools, techniques, and algorithms to real-world software. Participants can use their expertise to carry out analyses on open source software projects or to directly improve the infrastructure powering research experiments. The principal criterion is to produce interesting results and to apply your expertise to challenge the state of the art and inspire future SBSE research.

All accepted submissions will compete for cash prizes totaling up to €1000. Up to three winners will be selected by the co-chairs, based on the reviews of the submitted papers. We thank Facebook for the generous sponsorship. The winners will be announced at SSBSE 2021.

Important Dates

  • Full Paper Submission: June 23, 2021
  • Author notification: July 21, 2021
  • Camera-ready submission: July 28, 2021

The Challenge Case: Defects4J

Defects4J is an extensible collection of reproducible bugs from Java software systems, together with a supporting infrastructure, aimed at advancing software engineering research. Defects4J is available at: https://defects4j.org

The current version of Defects4J (v2.0.0) targets Java 8 and consists of 835 bugs from 17 open-source projects. The 835 bugs span more than a decade of development history, and the 17 projects cover a wide range of domains, including compilers, parsers, testing infrastructure, and a variety of libraries. Each bug is traceable to a reported issue, is reproducible using developer-written test cases, and is isolated: unrelated code changes have been removed. The framework provides extensive metadata for each bug, and a process exists for expanding it with additional projects and bugs. Defects4J also offers support for research experiments, including capabilities to check out, compile, and test the case examples, generate test suites, and perform coverage and mutation analysis.
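The experiment workflow described above is driven by the `defects4j` command-line tool; as a sketch, the commands below follow the framework's public documentation (the project ID `Lang` and bug version `1b` are illustrative; any bug from the 17 projects can be substituted):

```shell
# Check out the buggy version of bug Lang-1 into a working directory
defects4j checkout -p Lang -v 1b -w /tmp/lang_1_buggy

cd /tmp/lang_1_buggy

# Compile the checked-out sources and the developer-written tests
defects4j compile

# Run the developer-written tests; failing tests expose the bug
defects4j test

# Coverage and mutation analysis over the test suite
defects4j coverage
defects4j mutation
```

The fixed version of the same bug is available under the version ID `1f`, which makes before/after comparisons straightforward for repair and fault-localization experiments.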

Defects4J is a challenging and diverse benchmark for SBSE research:

  • The framework offers a rich selection of case examples for the empirical validation of SBSE approaches.
  • The bugs and metadata offer data and potential optimization targets that could form the basis of new SBSE approaches or application areas.
  • SBSE researchers could enhance the Defects4J framework itself, e.g.:
  • Integrating new projects and bugs (particularly ones that challenge state-of-the-art SBSE approaches) or additional test-generation approaches.
    • Automating extension of the framework with new projects or bugs, including automating build-script creation or bug isolation.
    • Extending the framework to support a wider variety of research areas (e.g., an interface for integrating a group of SBSE tools).

For more inspiration, please see the challenge case proposal for Defects4J: https://greg4cr.github.io/pdf/20d4j.pdf. Please note that you are NOT restricted to the ideas presented above or in this proposal; interesting research results of any kind using or enhancing Defects4J are welcome.

Submitting to the Challenge Track

A challenge-track participant should:

  • Perform original SBSE research using or enhancing the Defects4J framework.
  • Report the findings in a six-page paper using the regular symposium format. Note that these findings must not have been previously published in any peer-reviewed venue.
  • Submit the challenge-track report by the deadline.
  • Present the findings at SSBSE 2021, if the submission is accepted.

It is not mandatory for submissions to the SSBSE Challenge track to implement a new tool, technique, or algorithm. However, we do expect that applying your existing or new tools/techniques/algorithms to Defects4J will lead to relevant insights and interesting results.

The criteria for paper acceptance are the following:

  • Application of an SBSE technique to either analyze or enhance Defects4J.
  • Technical soundness.
  • Readability and presentation.

Submission details

Submissions must be, at most, six pages long in PDF format and should conform at time of submission to the SSBSE/Springer LNCS format and submission guidelines. Submissions must not have been previously published in, or be under consideration for, any journal, book, or other conference. Please submit your challenge paper to EasyChair on, or before, the Challenge Solution deadline. At least one author of each paper is expected to present at SSBSE 2021. Papers for the Challenge Solution track are also required to follow the double-blind restrictions. All accepted contributions will be published in the conference proceedings.

Submissions can be made via Easychair (https://easychair.org/conferences/?conf=ssbse2021) by the submission deadline.

Further Information

Authors who extend the Defects4J framework are encouraged to contribute their extensions via a pull request to the GitHub repository. A journal paper on the most recent version of the Defects4J framework is in the planning stages, and the authors of submissions that extend Defects4J in a substantial manner will be invited to be co-authors of that paper.

If you have any questions about the challenge, please email the Challenge Track chairs.


Call for Challenge Cases for SSBSE 2022

To collect relevant challenge cases for the next edition of the SSBSE conference, we invite the submission of proposals. Submitted cases are expected to pose interesting and relevant research challenges to the SBSE community, and proposals will be reviewed with regard to their clarity, relevance, and potential to inspire interesting research submissions.

Accepted cases will be part of the official conference proceedings. The authors of the accepted cases must attend the challenge track and participate in the discussion. The next edition of SSBSE will call for solutions to the accepted cases.

Important Dates

  • Full Paper Submission: June 23, 2021
  • Author notification: July 21, 2021
  • Camera-ready submission: July 28, 2021

Submitting a Challenge Case Proposal

A challenge case paper can present (1) a particular data set with specific questions or (2) a call for a solution to a particular problem in a given context. The following information is needed, depending on the type of case description:

For data sets with specific questions:

  • What is your dataset and how was it obtained?
  • What is its size?
  • How can this data be accessed?
  • Do you provide tools to process the data?
  • Provide at least one concrete research question you want participants to answer. You can provide multiple questions.
  • For each question, provide the criteria for evaluating an answer/solution.

For solutions to specific problems:

  • What is the concrete problem you want participants to solve?
  • How can a solution be evaluated? The following are some ideas on how to specify evaluation criteria:
    • Concrete evaluation metrics (e.g., precision, recall, accuracy, etc. depending on the problem).
    • Concrete test cases participants can evaluate their solution against (e.g., provide input and expected output, with participants expected to provide a solution that gets from one to the other).
    • A list of systems to evaluate their solution against (e.g., a list of C systems that have a large number of nested #ifdefs, because your problem only makes sense in the context of higher-order variability).
    • A reference implementation to compare against, according to particular metrics.

Additional requirements for both types of cases:

The description should contain the URL of a public repository or artifact page that contains all the instructions needed to get started with the case study. Optionally, case authors may include a list of five names of researchers or practitioners who have the expertise required to evaluate submitted solutions; the case authors themselves may be part of this list. The challenge track co-chairs will consider this list when creating the SSBSE challenge program committee next year.

Submission details

The papers must be at most six pages long in PDF format and should conform at time of submission to the SSBSE/Springer LNCS format and submission guidelines. Please submit your challenge case to EasyChair on, or before, the Challenge Case track deadline. At least one author of each paper is expected to present at SSBSE 2021. All accepted contributions will be published in the conference proceedings.

Further Information

If you have any questions about the challenge, please email the Challenge Track chairs.