EASE 2024
Tue 18 - Fri 21 June 2024 Salerno, Italy
Wed 19 Jun 2024, 14:26 - 14:40, at Room Vietri - Testing. Chair(s): Samira Silva

Test smells can cause difficulties during testing activities, such as poor maintainability, non-deterministic behavior, and incomplete verification. Existing research has extensively addressed test smells in automated software tests, but little attention has been given to smells in natural language tests. While some research has identified and catalogued such smells, systematic approaches for their removal are lacking. Consequently, there is also a lack of tools to automatically identify and remove natural language test smells. This paper introduces a catalog of transformations designed to remove seven natural language test smells, along with a companion tool implemented using Natural Language Processing (NLP) techniques. Our work aims to enhance the quality and reliability of natural language tests during software development. The research employs a two-fold empirical strategy to evaluate its contributions. First, a survey involving 15 software testing professionals assesses the acceptance and usefulness of the catalog's transformations. Second, an empirical study evaluates our tool for removing natural language test smells by analyzing a sample of real-practice tests from the Ubuntu OS. The results indicate that software testing professionals find the transformations valuable. Additionally, the automated tool demonstrates a good level of precision, as evidenced by an F-Measure of 83.70%.
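To make the idea of automatically flagging natural language test smells concrete, here is a minimal sketch of a heuristic detector. The smell names and regular-expression patterns below are illustrative assumptions for this sketch, not the catalog or the NLP pipeline proposed in the paper.

```python
import re

# Hypothetical heuristics for two natural-language test smells; the smell
# names and patterns are illustrative assumptions, not the paper's catalog.
SMELL_PATTERNS = {
    "conditional step": re.compile(r"\b(if|when|unless|in case)\b", re.IGNORECASE),
    "vague verification": re.compile(r"\b(works|properly|correctly|as expected)\b", re.IGNORECASE),
}

def detect_smells(step: str) -> list[str]:
    """Return the names of the smell heuristics matched by one test step."""
    return [name for name, pattern in SMELL_PATTERNS.items() if pattern.search(step)]

steps = [
    "If a dialog appears, click OK.",       # conditional step
    "Verify that the application works properly.",  # vague verification
    "Open the Settings page.",              # clean step
]
for step in steps:
    print(step, "->", detect_smells(step))
```

A real tool would go well beyond keyword matching (e.g., part-of-speech tagging or dependency parsing to distinguish a genuine conditional from an incidental use of "when"), but the shape of the task is the same: classify each test step against a catalog of smell definitions.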

Wed 19 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

14:00 - 15:20
Testing (Research Papers / Short Papers, Vision and Emerging Results) at Room Vietri
Chair(s): Samira Silva (Gran Sasso Science Institute, GSSI)
14:00
13m
Talk
Using Large Language Models to Generate JUnit Tests: An Empirical Study
Research Papers
Mohammed Latif Siddiq (University of Notre Dame), Joanna C. S. Santos (University of Notre Dame), Ridwanul Hasan Tanvir (Pennsylvania State University), Noshin Ulfat (IQVIA Inc.), Fahmid Al Rifat (United International University), Vinicius Carvalho Lopes (University of Notre Dame)
Pre-print
14:13
13m
Talk
Mutation Testing for Task-Oriented Chatbots
Research Papers
Pablo Gómez-Abajo (Universidad Autónoma de Madrid), Sara Perez-Soler (Universidad Autónoma de Madrid), Pablo C Canizares (Autonomous University of Madrid, Spain), Esther Guerra (Universidad Autónoma de Madrid), Juan de Lara (Autonomous University of Madrid)
Pre-print
14:26
13m
Talk
A Catalog of Transformations to Remove Test Smells From Natural Language Tests (Distinguished Paper Award)
Research Papers
Manoel Aranda III (Federal University of Alagoas), Naelson Oliveira (Federal University of Alagoas), Elvys Soares (Federal Institute of Alagoas, IFAL), Márcio Ribeiro (Federal University of Alagoas, Brazil), Davi Romão (Federal University of Alagoas), Ullyanne Patriota (Federal University of Alagoas), Rohit Gheyi (Federal University of Campina Grande), Emerson Paulo Soares de Souza (Federal University of Pernambuco), Ivan Machado (Federal University of Bahia)
Pre-print
14:40
13m
Talk
An Empirical Study on Code Coverage of Performance Testing
Research Papers
Muhammad Imran (Università degli Studi dell'Aquila), Vittorio Cortellessa (University of L'Aquila), Davide Di Ruscio (University of L'Aquila), Riccardo Rubei (University of L'Aquila), Luca Traini (University of L'Aquila)
Link to publication / DOI
14:53
13m
Talk
AI-Generated Test Scripts for Web E2E Testing with ChatGPT and Copilot: A preliminary study
Short Papers, Vision and Emerging Results
Maurizio Leotta (DIBRIS, University of Genova, Italy), Hafiz Zeeshan Yousaf (Università di Genova), Filippo Ricca (Università di Genova), Boni Garcia (Universidad Carlos III de Madrid)
15:06
13m
Talk
Towards Predicting Fragility in End-to-End Web Tests
Short Papers, Vision and Emerging Results
Sergio Di Meglio (Università degli Studi di Napoli Federico II), Luigi Libero Lucio Starace (Università degli Studi di Napoli Federico II)