Test Generation Strategies for Building Failure Models and Explaining Spurious Failures
Test inputs fail not only when the system under test is faulty but also when the inputs are invalid or unrealistic. Failures caused by invalid or unrealistic test inputs are spurious. Avoiding spurious failures improves the effectiveness of testing in exercising the main functions of a system, particularly for compute-intensive (CI) systems, where a single test execution takes significant time. In this article, we propose building failure models that infer interpretable rules characterizing the test inputs that cause spurious failures. We examine two alternative strategies for building failure models: (1) machine learning (ML)-guided test generation and (2) surrogate-assisted test generation. ML-guided test generation infers boundary regions that separate passing and failing test inputs and samples test inputs from those regions. Surrogate-assisted test generation relies on surrogate models to predict labels for test inputs instead of executing all of them. We propose a novel surrogate-assisted algorithm that uses multiple surrogate models simultaneously and dynamically selects the prediction from the most accurate model. We empirically evaluate the accuracy of failure models inferred by the surrogate-assisted and ML-guided test generation algorithms. Using case studies from the domains of cyber-physical systems and networks, we show that our proposed surrogate-assisted approach generates failure models with an average accuracy of 83%, significantly outperforming ML-guided test generation and two baselines. Further, our approach learns failure-inducing rules that identify genuine spurious failures, as validated against domain knowledge.
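Since the abstract describes the key algorithmic idea, a minimal sketch may help make it concrete: several surrogate models are trained on the test inputs executed so far, and the prediction of the currently most accurate surrogate substitutes for an expensive test execution when it is sufficiently confident. Everything in the sketch is an illustrative assumption rather than the paper's implementation: the toy `expensive_test`, the three scikit-learn classifiers, the 0.9 confidence threshold, and the exponentially weighted accuracy tracker.

```python
# Minimal sketch of surrogate-assisted test generation with multiple
# surrogates and dynamic selection of the most accurate one. All names
# (expensive_test, the three model choices, the confidence threshold,
# the accuracy tracker) are illustrative assumptions, not the paper's API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def expensive_test(x):
    """Stand-in for a compute-intensive test execution.
    Returns 1 (fail) or 0 (pass); a toy rule replaces the real system."""
    return int(x[0] + x[1] > 1.0)

# Inputs already executed, and their observed labels, seed the surrogates.
X = rng.random((30, 2))
y = np.array([expensive_test(x) for x in X])

surrogates = [RandomForestClassifier(n_estimators=50, random_state=0),
              KNeighborsClassifier(n_neighbors=5),
              DecisionTreeClassifier(random_state=0)]
accuracy = [0.5] * len(surrogates)   # running accuracy estimate per surrogate

for step in range(200):
    for m in surrogates:
        m.fit(X, y)
    candidate = rng.random(2)        # next test input to label

    # Dynamic selection: trust the currently most accurate surrogate.
    best = int(np.argmax(accuracy))
    proba = surrogates[best].predict_proba([candidate])[0]

    if proba.max() >= 0.9:
        label = int(proba.argmax())          # use prediction, skip execution
    else:
        label = expensive_test(candidate)    # fall back to real execution
        # Re-estimate each surrogate's accuracy against the ground truth.
        for i, m in enumerate(surrogates):
            hit = int(m.predict([candidate])[0] == label)
            accuracy[i] = 0.9 * accuracy[i] + 0.1 * hit
        X = np.vstack([X, candidate])
        y = np.append(y, label)
```

In this sketch, surrogate accuracies are only re-estimated when the real test is executed, so prediction confidence gates how often the expensive execution is skipped; an interpretable rule learner (e.g., a decision tree) fit on the accumulated labels would then play the role of the failure model the abstract describes.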
Thu 1 May | Displayed time zone: Eastern Time (US & Canada)
14:00 - 15:30 | AI for Testing and QA 4 | Journal-first Papers / Demonstrations / Research Track | Room 206 plus 208 | Chair(s): Andreas Jedlitschka (Fraunhofer IESE)
14:00 | 15m Talk | The Seeds of the FUTURE Sprout from History: Fuzzing for Unveiling Vulnerabilities in Prospective Deep-Learning Libraries [Security] [Award Winner] | Research Track | Zhiyuan Li, Jingzheng Wu (Institute of Software, The Chinese Academy of Sciences), Xiang Ling (Institute of Software, Chinese Academy of Sciences), Tianyue Luo (Institute of Software, Chinese Academy of Sciences), Zhiqing Rui (Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences), Yanjun Wu (Institute of Software, Chinese Academy of Sciences)
14:15 | 15m Talk | AutoRestTest: A Tool for Automated REST API Testing Using LLMs and MARL | Demonstrations | Tyler Stennett (Georgia Institute of Technology), Myeongsoo Kim (Georgia Institute of Technology), Saurabh Sinha (IBM Research), Alessandro Orso (Georgia Institute of Technology)
14:30 | 15m Talk | FairBalance: How to Achieve Equalized Odds With Data Pre-processing | Journal-first Papers | Zhe Yu (Rochester Institute of Technology), Joymallya Chakraborty (Amazon.com), Tim Menzies (North Carolina State University)
14:45 | 15m Talk | RLocator: Reinforcement Learning for Bug Localization | Journal-first Papers | Partha Chakraborty (University of Waterloo), Mahmoud Alfadel (University of Calgary), Mei Nagappan (University of Waterloo)
15:00 | 15m Talk | Studying the explanations for the automated prediction of bug and non-bug issues using LIME and SHAP | Journal-first Papers | Lukas Schulte (University of Passau), Benjamin Ledel (Digital Learning GmbH), Steffen Herbold (University of Passau)
15:15 | 15m Talk | Test Generation Strategies for Building Failure Models and Explaining Spurious Failures | Journal-first Papers | Baharin Aliashrafi Jodat (University of Ottawa), Abhishek Chandar (University of Ottawa), Shiva Nejati (University of Ottawa), Mehrdad Sabetzadeh (University of Ottawa) | Pre-print