AST 2025
Sat 26 April - Sun 4 May 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025

This program is tentative and subject to change.

Mon 28 Apr 2025 11:30 - 12:00 at 211 - Session 1: LLM for Testing

Writing good software tests can be challenging; therefore, approaches that support developers are desirable. While automatically generating complete tests is one such approach commonly proposed in research, developers may already have specific test scenarios in mind and thus only require help selecting the most suitable test assertions for these scenarios. This can be done using deep learning models that predict assertions for given test code. Prior research on assertion generation trained these models specifically for the task, raising the question of how much the larger models pre-trained on code that have emerged since then can improve performance. In particular, while abstracting identifiers has been shown to improve specifically trained models, it remains unclear whether this also generalizes to models pre-trained on non-abstracted code. Finally, even though prior work demonstrated high accuracy, it remains unclear how this translates into the effectiveness of the assertions at their intended application: finding faults. To shed light on these open questions, in this paper we propose AsserT5, a new model based on the pre-trained CodeT5 model, and use it to empirically study assertion generation. We find that identifier abstraction and inclusion of the focal method are also useful for a fine-tuned pre-trained model, resulting in test assertions that match the ground-truth assertions exactly in up to 59.5% of cases, more than twice as precise as prior models. However, evaluation on real bugs from the Defects4J dataset shows that of 138 bugs detectable with assertions in real-world projects, AsserT5 could only suggest fault-finding assertions for 33, indicating the need for further improvements.
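The identifier abstraction mentioned in the abstract can be illustrated with a minimal sketch: project-specific names in a test snippet are replaced by canonical placeholders so a model sees code structure rather than vocabulary it has never encountered. The function name `abstract_identifiers`, the `ID_n` placeholder scheme, and the keyword list below are illustrative assumptions, not the paper's actual preprocessing.

```python
import re

# Tokens we keep verbatim; a real pipeline would use a proper Java lexer.
# This keyword list is an illustrative assumption.
KEEP = {"assertEquals", "assertTrue", "assertFalse", "void", "new",
        "int", "return", "public", "null", "true", "false"}

def abstract_identifiers(code: str) -> str:
    """Replace each distinct identifier with a placeholder like ID_0."""
    mapping: dict[str, str] = {}

    def repl(match: re.Match) -> str:
        name = match.group(0)
        if name in KEEP:
            return name
        if name not in mapping:
            mapping[name] = f"ID_{len(mapping)}"
        return mapping[name]

    # Identifiers: a letter or underscore followed by word characters.
    return re.sub(r"[A-Za-z_]\w*", repl, code)

print(abstract_identifiers("assertEquals(expected, calc.add(a, b));"))
# -> assertEquals(ID_0, ID_1.ID_2(ID_3, ID_4));
```

The same mapping is reused within one snippet, so repeated occurrences of a name map to the same placeholder, which preserves the data flow the model needs to predict a plausible assertion.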


Mon 28 Apr

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
Session 1: LLM for Testing (AST 2025) at 211
11:00
30m
Full-paper
Acceptance Test Generation with Large Language Models: An Industrial Case Study
AST 2025
Margarida Ferreira University of Porto and Critical TechWorks, Luís Viegas University of Porto and Critical TechWorks, João Pascoal Faria Faculty of Engineering, University of Porto and INESC TEC, Bruno Lima Faculty of Engineering of the University of Porto & LIACC
11:30
30m
Full-paper
AsserT5: Test Assertion Generation Using a Fine-Tuned Code Language Model
AST 2025
Severin Primbs University of Passau, Benedikt Fein University of Passau, Gordon Fraser University of Passau
Pre-print
12:00
30m
Full-paper
Simulink Mutation Testing using CodeBERT
AST 2025
Jingfan Zhang University of Ottawa, Delaram Ghobari University of Ottawa, Mehrdad Sabetzadeh University of Ottawa, Shiva Nejati University of Ottawa
Pre-print