AST 2025
Sat 26 April - Sun 4 May 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025
Tue 29 Apr 2025 12:07 - 12:30 at 211 - Session 4: When and How to Test

Software testing is a widely adopted quality assurance technique that assesses whether a software system meets a given specification. The overall goal of software testing is to develop effective tests that capture desired program behaviors and reveal defects. Automated software testing is an essential part of modern software development processes, in particular those that focus on continuous integration and deployment.

Existing test classifications (e.g., unit vs. integration testing) and testing best practices offer a general conceptual framework. However, those classifications often assume a specific conceptual model of what constitutes a unit, or even a test. These conceptual models are often not made explicit in research papers or documentation, which makes it difficult to generalize results. Additionally, comparatively little is known about how developers operationalize software testing, how they write and automate tests in practice, and how well developer-written tests fit into those classification frameworks.

These problems make it difficult to apply test classification frameworks in modern industrial settings with large numbers of tests. The problem has only grown harder as software engineering processes have evolved, especially with the advent of AI-generated unit test code.

This paper presents a novel test classification framework that was developed using ground-truth data on the types of tests developers write in practice. The ground-truth data was collected in an industrial setting at CompanyX and involves tens of thousands of developers and tens of millions of tests. We describe a proof-of-concept automated analysis that classifies tests using the developed framework and report the cost of reaching new state-of-the-art automated classification milestones. We also present the classification results and make several novel observations about the types of tests developers write in practice.
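
The abstract describes the automated classification only at a high level; the paper's actual framework and classification signals are not given here. As a purely hypothetical sketch of what a rule-based test classifier could look like, the snippet below scans a test's source code for dependency signals and maps the first match to a coarse category. The category names, patterns, and the classify_test function are all invented for illustration and are not taken from the paper.

```python
# Hypothetical illustration only: a minimal rule-based test classifier.
# Category labels and signal patterns are invented for demonstration
# and do NOT reproduce the paper's framework.
import re

# Invented (pattern, label) pairs: a dependency signal in the test
# source maps to a coarse test category.
SIGNALS = [
    (re.compile(r"\bunittest\.mock\b|\bMagicMock\b"), "unit (mock-isolated)"),
    (re.compile(r"\bsqlite3\b|\bpsycopg2\b"), "integration (database)"),
    (re.compile(r"\brequests\.|\bhttp\.client\b"), "integration (network)"),
]

def classify_test(source: str) -> str:
    """Return the first matching coarse category, else a default label."""
    for pattern, label in SIGNALS:
        if pattern.search(source):
            return label
    return "unit (no external dependencies detected)"

if __name__ == "__main__":
    example = "import requests\n\ndef test_fetch():\n    assert requests.get"
    print(classify_test(example))  # -> integration (network)
```

A real analysis at the scale reported in the abstract would of course need far richer signals (build dependencies, runtime behavior, test fixtures) than source-text matching; the sketch only illustrates the shape of the classification step.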

The results are of interest to any researcher or practitioner in the software testing space.

Tue 29 Apr

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
Session 4: When and How to Test (AST 2025) at 211

Session chair: Mehrdad Saadatmand

11:00
22m
Full-paper
An Adaptive Testing Approach Based on Field Data
AST 2025
Samira Santos da Silva (Gran Sasso Science Institute), Ricardo Caldas (Gran Sasso Science Institute), Patrizio Pelliccione (Gran Sasso Science Institute, L'Aquila, Italy), Antonia Bertolino (Gran Sasso Science Institute)
Pre-print
11:22
22m
Full-paper
Exceptional Behaviors: How Frequently Are They Tested?
AST 2025
Andre Hora (UFMG), Gordon Fraser (University of Passau)
Pre-print Media Attached
11:45
22m
Full-paper
Improving Examples in Web API Specifications using Iterated-Calls In-Context Learning
AST 2025
Kush Jain (Carnegie Mellon University), Kiran Kate (IBM Research), Jason Tsay (IBM Research), Claire Le Goues (Carnegie Mellon University), Martin Hirzel (IBM Research)
Pre-print
12:07
22m
Full-paper
What Types of Automated Tests do Developers Write?
AST 2025
Marko Ivanković (University of Passau), Luka Rimanić (Google), Ivan Budiselic (Google), Goran Petrovic (Google; University of Passau), Gordon Fraser (University of Passau), René Just (University of Washington)
Pre-print