Manual Tests Do Smell! Cataloging and Identifying Natural Language Test Smells
Background: Test smells indicate potential problems in the design and implementation of automated software tests that may negatively impact test code maintainability, coverage, and reliability. When poorly described, manual tests written in natural language may suffer from related problems, which makes it possible to analyze them from the perspective of test smells. Despite the potential harm to manually tested software products, little is known about test smells in manual tests, leaving many open questions regarding their types, frequency, and harm to tests written in natural language. Aims: This study therefore aims to contribute a catalog of test smells for manual tests. Method: We follow a two-fold empirical strategy. First, we conduct an exploratory study of the manual tests of three systems: the Ubuntu Operating System, the Brazilian Electronic Voting Machine, and the user interface of a large smartphone manufacturer. We use our findings to propose a catalog of eight test smells and identification rules based on syntactic and morphological text analysis, validating the catalog with 24 in-company test engineers. Second, using our proposals, we build a tool based on Natural Language Processing (NLP) to analyze the subject systems’ tests and validate the results. Results: We observed the occurrence of eight test smells. A survey of 24 in-company test professionals showed that 80.7% agreed with our catalog definitions and examples. Our NLP-based tool achieved a precision of 92%, recall of 95%, and F-measure of 93.5%, and its execution revealed 13,169 occurrences of the cataloged test smells in the analyzed systems. Conclusion: We contribute a catalog of natural language test smells and novel detection strategies that better explore the capabilities of current NLP mechanisms, with promising results and reduced effort to analyze tests written in different languages.
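The reported precision and recall are consistent with the stated F-measure, since F1 = 2PR/(P + R) = 2(0.92)(0.95)/(0.92 + 0.95) ≈ 0.935. As a rough illustration of the kind of syntactic and morphological identification rule the abstract describes, the sketch below flags a manual test step that chains several actions by counting clause-heading verbs with spaCy part-of-speech tags. The "eager step" rule, the threshold, and the function names are illustrative assumptions for this sketch, not the paper's actual catalog rules or tool.

```python
# Minimal, hypothetical sketch of an NLP-based test-smell check, assuming
# spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
# The "eager step" rule and threshold below are illustrative, not the
# paper's actual identification rules.
import spacy

nlp = spacy.load("en_core_web_sm")

def count_actions(step: str) -> int:
    """Count clause-heading verbs (root verbs and their conjuncts) in a step."""
    doc = nlp(step)
    return sum(1 for tok in doc
               if tok.pos_ == "VERB" and tok.dep_ in ("ROOT", "conj"))

def is_eager_step(step: str, max_actions: int = 2) -> bool:
    """Flag a step that asks the tester to perform too many actions at once."""
    return count_actions(step) > max_actions

step = ("Open the settings app, enable airplane mode, restart the device, "
        "and verify that no network icon is shown.")
print(count_actions(step), is_eager_step(step))  # typically: 4 True
```

Because spaCy provides pretrained pipelines for several languages, a rule like this can be retargeted by swapping the loaded model, which matches the abstract's point about analyzing tests written in different languages with reduced effort.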
Thu 26 Oct. Displayed time zone: Central Time (US & Canada).
13:30 - 15:05 | 2A - Software and system testing (ESEM Journal-First Papers / ESEM Technical Papers / Emerging Results, Vision and Reflection Papers Track / ESEM IGC) at Rhythms 2. Chair(s): Davide Fucci (Blekinge Institute of Technology)
13:30, 20m, Full-paper | Manual Tests Do Smell! Cataloging and Identifying Natural Language Test Smells (ESEM Technical Papers). Elvys Soares (Federal University of Pernambuco / Federal Institute of Alagoas), Manoel Aranda III, Naelson Oliveira, Márcio Ribeiro (Federal University of Alagoas, Brazil), Rohit Gheyi (Federal University of Campina Grande), Emerson Paulo Soares de Souza, Ivan Machado (Federal University of Bahia), Andre Santos, Baldoino Fonseca, Rodrigo Bonifácio (Computer Science Department, University of Brasília). Pre-print available; media attached.
13:50, 20m, Full-paper | An Empirical Study of Regression Testing for Android Apps in Continuous Integration Environment (ESEM Technical Papers). Dingbang Wang, Yu Zhao (University of Central Missouri), Lu Xiao (Stevens Institute of Technology), Tingting Yu (University of Connecticut)
14:10, 10m, Journal Early-Feedback | Scripted and Scriptless GUI Testing for Web Applications: An Industrial Case (ESEM Journal-First Papers). Axel Bons, Beatriz Marín (Universitat Politècnica de València), Pekka Aho (Nordic Semiconductor), Tanja E. J. Vos
14:20, 15m, Vision and Emerging Results | Identifying Flakiness in Quantum Programs (Emerging Results, Vision and Reflection Papers Track). Lei Zhang, Mahsa Radnejad (University of Maryland Baltimore County), Andriy Miranskyy (Toronto Metropolitan University, formerly Ryerson University)
14:35, 15m, Industry talk | The Vocabulary of Flaky Tests in the Context of SAP HANA (ESEM IGC)
14:50, 15m, Industry talk | Comparing Mobile Testing Tools Using Documentary Analysis (ESEM IGC)