Fri 2 May 2025 12:15 - 12:30 at 204 - Program Comprehension 3 Chair(s): Arie van Deursen

Mocking allows testing program units in isolation. A developer who writes tests with mocks faces two challenges: designing realistic interactions between a unit and its environment, and understanding the expected impact of these interactions on the unit's behavior. In this paper, we propose to monitor an application in production to generate tests that mimic realistic execution scenarios through mocks. Our approach operates in three phases. First, we instrument a set of target methods for which we want to generate tests, as well as the methods that they invoke, which we refer to as mockable method calls. Second, in production, we collect data about the context in which target methods are invoked, as well as the parameters and the returned value for each mockable method call. Third, offline, we analyze the production data to generate test cases with realistic inputs and mock interactions. The approach is automated and implemented in an open-source tool called RICK. We evaluate our approach with three real-world, open-source Java applications. RICK monitors the invocation of 128 methods in production across the three applications and captures their behavior. Based on this captured data, RICK generates test cases that include realistic initial states and test inputs, as well as mocks and stubs. All the generated test cases are executable, and 52.4% of them successfully mimic the complete execution context of the target methods observed in production. The mock-based oracles are also effective at detecting regressions within the target methods, complementing each other in their fault-finding ability. We interview 5 developers from industry who confirm the relevance of using production observations to design mocks and stubs. Our experimental findings demonstrate the feasibility and added value of generating mocks from production interactions.
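To illustrate the kind of test the abstract describes, the following is a minimal, hypothetical sketch in plain Java (all class and method names are invented for illustration; the actual tests RICK generates use a mocking framework rather than this hand-rolled stub). The stub replays a parameter and return value as they might have been observed in production for a mockable method call, and the oracle checks the target method's result against the value recorded in production:

```java
// Hypothetical collaborator invoked by the target method.
interface PriceService {
    double lookupPrice(String sku);
}

// Hypothetical unit under test; total(..) plays the role of a target method.
class Checkout {
    private final PriceService prices;
    Checkout(PriceService prices) { this.prices = prices; }
    double total(String sku, int qty) { return prices.lookupPrice(sku) * qty; }
}

public class CheckoutProductionTest {
    public static void main(String[] args) {
        // Stub replaying a production observation:
        // lookupPrice("A42") was seen to return 9.99.
        PriceService stub = sku -> {
            if (!"A42".equals(sku)) // interaction must match what production saw
                throw new AssertionError("unexpected argument: " + sku);
            return 9.99;
        };
        double actual = new Checkout(stub).total("A42", 3);
        // Mock-based oracle: compare against the value observed in production.
        if (Math.abs(actual - 29.97) > 1e-9)
            throw new AssertionError("regression detected: " + actual);
        System.out.println("production behavior reproduced");
    }
}
```

The stub serves both roles mentioned in the abstract: it isolates the unit from its real environment, and its recorded argument and return value act as a regression oracle on the interaction.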

Fri 2 May

Displayed time zone: Eastern Time (US & Canada)

11:00 - 12:30
Program Comprehension 3 (Research Track / Journal-first Papers) at 204
Chair(s): Arie van Deursen TU Delft
11:00
15m
Talk
Automated Test Generation For Smart Contracts via On-Chain Test Case Augmentation and Migration (Blockchain)
Research Track
Jiashuo Zhang Peking University, China, Jiachi Chen Sun Yat-sen University, John Grundy Monash University, Jianbo Gao Peking University, Yanlin Wang Sun Yat-sen University, Ting Chen University of Electronic Science and Technology of China, Zhi Guan Peking University, Zhong Chen
Pre-print
11:15
15m
Talk
Boosting Code-line-level Defect Prediction with Spectrum Information and Causality Analysis
Research Track
Shiyu Sun, Yanhui Li Nanjing University, Lin Chen Nanjing University, Yuming Zhou Nanjing University, Jianhua Zhao Nanjing University, China
11:30
15m
Talk
BatFix: Repairing language model-based transpilation
Journal-first Papers
Daniel Ramos Carnegie Mellon University, Ines Lynce INESC-ID/IST, Universidade de Lisboa, Vasco Manquinho INESC-ID/IST, Universidade de Lisboa, Ruben Martins Carnegie Mellon University, Claire Le Goues Carnegie Mellon University
11:45
15m
Talk
Tracking the Evolution of Static Code Warnings: The State-of-the-Art and a Better Approach
Journal-first Papers
Junjie Li, Jinqiu Yang Concordia University
12:00
15m
Talk
PACE: A Program Analysis Framework for Continuous Performance Prediction
Journal-first Papers
Chidera Biringa University of Massachusetts, Gokhan Kul University of Massachusetts Dartmouth
12:15
15m
Talk
Mimicking Production Behavior With Generated Mocks
Journal-first Papers
Deepika Tiwari KTH Royal Institute of Technology, Martin Monperrus KTH Royal Institute of Technology, Benoit Baudry Université de Montréal