What Types of Automated Tests do Developers Write?
Software testing is a widely adopted quality assurance technique that assesses whether a software system meets a given specification. The overall goal of software testing is to develop effective tests that capture desired program behaviors and reveal defects. Automated software testing is an essential part of modern software development processes, in particular those that focus on continuous integration and deployment.
Existing test classifications (e.g., unit vs. integration testing) and testing best practices offer a general conceptual framework; however, these classifications often embed a specific conceptual model of what constitutes a unit, or even a test. These conceptual models are often not made explicit in research papers or documentation, which makes it difficult to generalize results. Additionally, comparatively little is known about how developers operationalize software testing, how they write and automate tests in practice, and how well developer-written tests fit into these classification frameworks.
These problems make it difficult to apply test classification frameworks in modern industrial settings with large numbers of tests. The problem has grown harder as software engineering processes have evolved, and especially with the advent of AI-generated unit test code.
This paper presents a novel test classification framework developed from ground truth data on the types of tests developers write in practice. The ground truth data was collected in an industrial setting at CompanyX and covers tens of thousands of developers and tens of millions of tests. We describe a proof-of-concept automated analysis that classifies tests using the developed classification framework and report the cost of reaching new state-of-the-art automated classification milestones. We also present the classification results and make several novel observations about the types of tests developers write in practice.
The results are of interest to any researcher or practitioner in the software testing space.
Session: Tue 29 Apr, 11:00 - 12:30 (Eastern Time, US & Canada)
12:07 (22m), Full paper: What Types of Automated Tests do Developers Write? AST 2025. Marko Ivanković (University of Passau), Luka Rimanić (Google), Ivan Budiselic (Google), Goran Petrovic (Google; Universität Passau), Gordon Fraser (University of Passau), René Just (University of Washington). Pre-print.