On the Effectiveness of LLMs for Manual Test Verifications
This program is tentative and subject to change.
Background: Manual testing is vital for detecting issues missed by automated tests, but specifying accurate verifications is challenging.

Aims: This study aims to explore the use of Large Language Models (LLMs) to produce verifications for manual tests.

Method: We conducted two independent and complementary exploratory studies. The first study used 2 closed-source and 6 open-source LLMs to generate verifications for manual test steps and evaluated their similarity to the original verifications. The second study recruited software testing professionals to assess their perception of and agreement with the generated verifications compared to the original ones.

Results: The open-source models Mistral-7B and Phi-3-mini-4k demonstrated effectiveness and consistency comparable to closed-source models like Gemini-1.5-flash and GPT-3.5-turbo in generating manual test verifications. However, the agreement level among professional testers was slightly above 40%, indicating both promise and room for improvement. While some LLM-generated verifications were considered better than the originals, there were also concerns about AI hallucinations, where verifications deviated significantly from expectations.

Conclusion: We evaluated the effectiveness of 8 LLMs through similarity and human-acceptance studies, identifying top-performing models such as Mistral-7B and GPT-3.5-turbo. Although the models show potential, the relatively modest 40% agreement level highlights the need for further refinement. Enhancing the accuracy, relevance, and clarity of the generated verifications is crucial to ensure greater reliability in real-world testing scenarios.
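To make the method concrete, the sketch below shows one way such a pipeline could look: an LLM is prompted to produce the expected verification for a manual test step, and the output is scored against the original verification using sentence-embedding cosine similarity. This is a minimal illustration, not the authors' actual setup; the prompt wording, the `generate_verification` helper, the embedding model, and the choice of cosine similarity are all assumptions made for the example.

```python
# Illustrative sketch only -- not the paper's pipeline. Assumes an
# OpenAI-compatible API for generation (the study also evaluates
# open-source models such as Mistral-7B and Phi-3-mini-4k) and uses
# sentence embeddings to approximate the similarity evaluation.
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

client = OpenAI()  # reads OPENAI_API_KEY from the environment
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def generate_verification(test_step: str) -> str:
    """Ask an LLM to write the expected verification for a manual test step."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # one of the closed-source models in the study
        messages=[
            {
                "role": "system",
                "content": "You write the expected result (verification) "
                           "for a manual software test step.",
            },
            {"role": "user", "content": f"Test step: {test_step}\nVerification:"},
        ],
    )
    return resp.choices[0].message.content.strip()


def similarity(generated: str, original: str) -> float:
    """Cosine similarity between embeddings of the two verifications."""
    emb = embedder.encode([generated, original], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()


step = "Tap the 'Save' button on the contact edit screen."
original = "The contact is saved and the contact list shows the updated entry."
candidate = generate_verification(step)
print(candidate)
print(f"similarity to original: {similarity(candidate, original):.2f}")
```

An open-source model could be swapped in by pointing the client at a local OpenAI-compatible inference server; the similarity scoring is independent of which model produced the verification.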
Sat 3 May (displayed time zone: Eastern Time, US & Canada)
11:00 - 12:30

11:00 (30m) Talk: Lachesis: Predicting LLM Inference Accuracy using Structural Properties of Reasoning Paths (DeepTest)
Naryeong Kim (Korea Advanced Institute of Science and Technology), Sungmin Kang (National University of Singapore), Gabin An (Roku), Shin Yoo (Korea Advanced Institute of Science and Technology)

11:30 (30m) Talk: DILLEMA: Diffusion and Large Language Models for Multi-Modal Augmentation (DeepTest)
Luciano Baresi (Politecnico di Milano), Davide Yi Xian Hu (Politecnico di Milano), Muhammad Irfan Mas'Udi (Politecnico di Milano), Giovanni Quattrocchi (Politecnico di Milano)

12:00 (30m) Talk: On the Effectiveness of LLMs for Manual Test Verifications (DeepTest)
Myron David Peixoto (Federal University of Alagoas), Davy Baía (Federal University of Alagoas), Nathalia Nascimento (Pennsylvania State University), Paulo Alencar (University of Waterloo), Baldoino Fonseca (Federal University of Alagoas), Márcio Ribeiro (Federal University of Alagoas, Brazil)