ICST 2025
Mon 31 March - Fri 4 April 2025 Naples, Italy
Tue 1 Apr 2025 16:50 - 17:20 at Room B - Technical Program

This study explores prompt engineering for automated white-box integration testing of RESTful APIs using Large Language Models (LLMs). Four prompt versions were designed and evaluated across three OpenAI models (GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o) to assess their impact on code coverage, token consumption, execution time, and financial cost. The results indicate that certain prompt versions, especially when paired with the more advanced models, achieved up to 90% coverage, albeit at a higher cost. Additionally, combining the test sets produced by different models increased coverage further, reaching 96% in some cases. We also compared the results with EvoMaster, a specialized tool for generating tests for REST APIs, and found that LLM-generated tests achieved comparable or higher coverage on the benchmark projects. Despite higher execution costs, LLMs demonstrated superior adaptability and flexibility in test generation.
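As a rough illustration of the kind of pipeline the abstract describes (not the authors' actual prompts or tooling), the sketch below assembles a white-box prompt from a REST controller's source code, sends it to an OpenAI chat model, and records the token usage that drives the cost comparison. The file paths, prompt wording, and class names are hypothetical.

```python
# Minimal sketch, assuming a Java/Spring REST API under test and the official
# OpenAI Python SDK. This is illustrative only, not the study's pipeline.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# White-box context: include the controller's source so the model can target
# specific branches and status codes (hypothetical path).
controller_source = Path("src/main/java/demo/OrderController.java").read_text()

prompt = (
    "You are a test engineer. Given the Spring REST controller below, write a "
    "JUnit 5 integration test class using MockMvc that covers all endpoints, "
    "including error branches (4xx responses). Return only Java code.\n\n"
    f"```java\n{controller_source}\n```"
)

response = client.chat.completions.create(
    model="gpt-4o",  # one of the three models compared in the study
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)

# Save the generated test class for compilation and execution against the API.
generated_test = response.choices[0].message.content
Path("src/test/java/demo/OrderControllerGeneratedTest.java").write_text(generated_test)

# Token counts feed the token-consumption and financial-cost metrics.
print(response.usage.prompt_tokens, response.usage.completion_tokens)
```

The generated class would then be compiled and executed against the API under test, with coverage measured by a coverage tool; combining test sets from different models amounts to merging the generated classes before measuring coverage.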

Tue 1 Apr

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

16:00 - 17:20
Technical Program AIST at Room B
16:00
30m
Talk
Generating Latent Space-Aware Test Cases for Neural Networks Using Gradient-Based Search
AIST
Simon Speth Technical University of Munich, Christoph Jasper TUM, Claudius Jordan, Alexander Pretschner TU Munich
Pre-print
16:30
20m
Talk
Test2Text: AI-Based Mapping between Autogenerated Tests and Atomic Requirements
AIST
Elena Treshcheva Exactpro, Iosif Itkin Exactpro Systems, Rostislav Yavorskiy Exactpro Systems, A: Nikolai Dorofeev
16:50
30m
Talk
LLM Prompt Engineering for Automated White-Box Integration Test Generation in REST APIs (pre-recorded video presentation + online Q&A)
AIST
André Mesquita Rincon Federal Institute of Tocantins (IFTO) / Federal University of São Carlos (UFSCar), Auri Vincenzi Federal University of São Carlos, João Pascoal Faria Faculty of Engineering, University of Porto and INESC TEC