A Method for Regression Testing Plan Ordering for Non-Automated Executions in Black Box Testing
Regression testing is a crucial part of the software quality assurance process. It is characterized by the fact that, as the scope of the tested software expands, the test suite to be executed upon each update also grows. This not only increases the cost of performing these tests but also means that, depending on the order of test execution, identifying new bugs can take more or less time, affecting bug-fixing time and the eventual release of the update. Therefore, it is essential to plan which test cases are executed and the order of their execution. In this paper, we propose a method for prioritizing regression test cases based on machine learning techniques, aiming to rank first the test cases with a higher probability of revealing software execution failures. To achieve this, we employ neural-network-based techniques to represent both categorical and textual features. Textual features are represented with SentenceBERT, a language model focused on representing text strings (sentences) as dense embeddings. Our experiments show that the proposed method achieves results equal to or better than those of human experts in 92.52% to 94.24% of scenarios when evaluated with the APFD metric. These results translate into potential gains of up to 6.03% when counting prioritized test plans and nearly 10% in mean APFD when using failure probability as the prioritization criterion.
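The APFD (Average Percentage of Faults Detected) metric mentioned above can be computed with the standard formula APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where n is the number of test cases, m the number of faults, and TF_i the position (1-indexed) of the first test in the ordering that reveals fault i. The sketch below is illustrative only; the function and data names are hypothetical and not part of the paper's method.

```python
def apfd(ordering, detects):
    """Compute APFD for a test ordering.

    ordering: list of test-case identifiers, in execution order.
    detects:  dict mapping each test id to the set of fault ids it reveals.
    """
    n = len(ordering)
    # All faults revealed by any test in the suite.
    faults = set().union(*(detects.get(t, set()) for t in ordering))
    m = len(faults)
    # TF_i: 1-indexed position of the first test that reveals fault i.
    first_pos = {}
    for pos, test in enumerate(ordering, start=1):
        for fault in detects.get(test, set()):
            first_pos.setdefault(fault, pos)
    return 1.0 - sum(first_pos[f] for f in faults) / (n * m) + 1.0 / (2 * n)


# Hypothetical example: 5 tests, 3 faults.
detects = {"t1": {"f1"}, "t2": {"f3"}, "t3": {"f2"}, "t4": set(), "t5": set()}
print(apfd(["t1", "t2", "t3", "t4", "t5"], detects))  # TF = 1+2+3 -> 0.7
```

Orderings that reveal faults earlier yield APFD closer to 1, which is why ranking tests by predicted failure probability can raise the mean APFD of a plan.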