Continuous Test Suite Failure Prediction
Sat 17 Jul 2021, 10:50 - 11:10, at ISSTA 1 - Session 27 (time band 3): Bugs and Analysis 2. Chair(s): Mike Papadakis
Continuous integration advocates running the test suite of a project frequently, e.g., for every code change committed to a shared repository. This process imposes a high computational cost and sometimes also a high human cost, e.g., when developers must wait for the test suite to pass before a change appears in the main branch of the shared repository. However, only 4% of all test suite invocations turn a previously passing test suite into a failing test suite. The question arises whether running the test suite for each code change is really necessary. This paper presents continuous test suite failure prediction, which reduces the cost of continuous integration by predicting whether a particular code change should trigger the test suite at all. The core of the approach is a machine learning model based on features of the code change, the test suite, and the development history. We also present a theoretical cost model that describes when continuous test suite failure prediction is worthwhile. Evaluating the idea with 15k test suite runs from 242 open-source projects shows that the approach is effective at predicting whether running the test suite is likely to reveal a test failure. Moreover, we find that our approach improves the AUC over baselines that use features proposed for just-in-time defect prediction and test case failure prediction by 13.9% and 2.9%, respectively. Overall, continuous test suite failure prediction can significantly reduce the cost of continuous integration.
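The abstract does not spell out the paper's theoretical cost model, but the trade-off it describes can be sketched in a few lines. The following is a minimal illustrative sketch, assuming a simple linear cost structure; all function names, parameters, and numbers are hypothetical and are not taken from the paper.

```python
# Hypothetical cost model for when test-suite failure prediction pays off.
# The linear structure and all parameter values below are illustrative
# assumptions, not the paper's actual model.

def ci_cost_without_prediction(n_commits, c_run):
    """Baseline CI: the test suite runs on every commit."""
    return n_commits * c_run

def ci_cost_with_prediction(n_commits, fail_rate, recall, precision,
                            c_run, c_predict, c_miss):
    """Expected CI cost when the suite runs only on predicted failures.

    fail_rate -- fraction of commits that break the suite (e.g., 0.04)
    recall    -- fraction of failing commits the model flags
    precision -- fraction of flagged commits that actually fail
    c_predict -- cost of one model prediction (paid on every commit)
    c_miss    -- cost of a failing commit that slips through unflagged
    """
    true_positives = n_commits * fail_rate * recall
    # Total flagged commits implied by the model's precision.
    flagged = true_positives / precision if precision > 0 else 0.0
    missed = n_commits * fail_rate * (1.0 - recall)
    return n_commits * c_predict + flagged * c_run + missed * c_miss

# With only 4% of commits breaking the suite, skipping runs can pay off
# when saved test runs outweigh prediction overhead plus missed failures.
baseline = ci_cost_without_prediction(1000, c_run=10.0)
predicted = ci_cost_with_prediction(1000, fail_rate=0.04, recall=0.9,
                                    precision=0.5, c_run=10.0,
                                    c_predict=0.1, c_miss=50.0)
```

Under these assumed numbers, prediction is worthwhile whenever the expected cost with prediction falls below the baseline; varying `c_miss` shows how the decision flips when missed failures are expensive.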
Sat 17 Jul (displayed time zone: Brussels, Copenhagen, Madrid, Paris)
01:10 - 02:30 | Session 21 (time band 2): Testing 3, Technical Papers, at ISSTA 1. Chair(s): Rohan Padhye (Carnegie Mellon University)
01:10 20m Talk | Continuous Test Suite Failure Prediction (Technical Papers)
01:30 20m Talk | Toward Optimal MC/DC Test Case Generation (Technical Papers). Sangharatna Godboley (National Institute of Technology Warangal), Joxan Jaffar (National University of Singapore), Rasool Maghareh (Huawei), Arpita Dutta (National University of Singapore)
01:50 20m Talk | Challenges and Opportunities: An In-Depth Empirical Study on Configuration Error Injection Testing (Technical Papers). Wang Li, Zhouyang Jia, Shanshan Li, Yuanliang Zhang, Teng Wang, Erci Xu, Ji Wang, and Liao Xiangke (all National University of Defense Technology)
02:10 20m Talk | Test-Case Prioritization for Configuration Testing (ACM SIGSOFT Distinguished Paper, Technical Papers). Runxiang Cheng, Lingming Zhang, Darko Marinov, and Tianyin Xu (all University of Illinois at Urbana-Champaign)
09:30 - 11:10 | Session 27 (time band 3): Bugs and Analysis 2, Technical Papers, at ISSTA 1. Chair(s): Mike Papadakis (University of Luxembourg, Luxembourg)
09:30 20m Talk | Faster, Deeper, Easier: Crowdsourcing Diagnosis of Microservice Kernel Failure from User Space (Technical Papers). Yicheng Pan, Meng Ma, Xinrui Jiang, and Ping Wang (all Peking University)
09:50 20m Talk | Finding Data Compatibility Bugs with JSON Subschema Checking (Distinguished Artifact, Technical Papers). Andrew Habib (SnT, University of Luxembourg), Avraham Shinnar (IBM Research), Martin Hirzel (IBM Research), Michael Pradel (University of Stuttgart)
10:10 20m Talk | Semantic Table Structure Identification in Spreadsheets (Technical Papers). Yakun Zhang (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences), Xiao Lv (Microsoft Research), Haoyu Dong (Microsoft Research), Wensheng Dou (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences), Shi Han (Microsoft Research), Dongmei Zhang (Microsoft Research), Jun Wei (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences), Dan Ye (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences)
10:30 20m Talk | Deep Just-in-Time Defect Prediction: How Far Are We? (Technical Papers). Zhengran Zeng and Yuqun Zhang (Southern University of Science and Technology), Haotian Zhang (Kwai), Lingming Zhang (University of Illinois at Urbana-Champaign)
10:50 20m Talk | Continuous Test Suite Failure Prediction (Technical Papers)