Scalable and Accurate Test Case Prioritization in Continuous Integration Contexts
Continuous Integration (CI) requires efficient regression testing to ensure software quality without significantly delaying CI builds. This calls for techniques that reduce regression testing time, such as Test Case Prioritization (TCP) techniques, which order the execution of test cases so that faults are detected as early as possible. Many recent TCP studies employ Machine Learning (ML) techniques to deal with the dynamic and complex nature of CI. However, most of them rely on a limited number of features to train ML models and evaluate the models on subjects for which applying TCP makes little practical sense, given their short regression testing time and low number of failed builds. In this work, we first define, at a conceptual level, a data model that captures the data sources and their relations in a typical CI environment. Second, based on this data model, we define a comprehensive set of features that covers all features previously used by related studies. Third, we develop methods and tools to collect the defined features for 25 open-source software systems with enough failed builds and whose regression testing takes at least five minutes. Fourth, relying on the collected dataset containing a comprehensive feature set, we answer four research questions concerning data collection time, the effectiveness of ML-based TCP, the impact of the features on effectiveness, the decay of ML-based TCP models over time, and the trade-off between data collection time and the effectiveness of ML-based TCP techniques.
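As a rough illustration of the ML-based TCP setting described in the abstract, the sketch below trains a classifier on historical per-build test features and then ranks the test cases of a new build by predicted failure probability so that likely failures run first. The feature names, the RandomForest model, and the toy data layout are illustrative assumptions and not the paper's actual pipeline or feature set.

```python
# Minimal, illustrative sketch of ML-based test case prioritization (TCP).
# Assumptions (not from the paper): feature names, model choice, and the
# synthetic data layout are hypothetical; the study collects a much richer
# feature set from real CI history.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy training data: one row per (build, test case) with a few CI-history
# features and a label indicating whether the test failed in that build.
n = 500
X_train = np.column_stack([
    rng.integers(0, 10, n),        # recent failure count of the test
    rng.exponential(30.0, n),      # average execution time (seconds)
    rng.integers(0, 50, n),        # lines changed in files covered by the test
])
y_train = (rng.random(n) < 0.1).astype(int)  # 1 = test failed in that build

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# For a new build, score every test case and schedule execution in
# descending order of predicted failure probability.
X_new_build = rng.random((8, 3)) * [10, 60, 50]
failure_prob = model.predict_proba(X_new_build)[:, 1]
prioritized_order = np.argsort(-failure_prob)
print("Prioritized test indices:", prioritized_order.tolist())
```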
Thu 18 May (displayed time zone: Hobart)
13:45 - 15:15 | Test quality and improvement | Technical Track / Journal-First Papers / DEMO - Demonstrations | Meeting Room 110 | Chair(s): Guowei Yang (University of Queensland)
13:45 | 15m | Talk | Test Selection for Unified Regression Testing | Technical Track | Shuai Wang (University of Illinois at Urbana-Champaign), Xinyu Lian (University of Illinois at Urbana-Champaign), Darko Marinov (University of Illinois at Urbana-Champaign), Tianyin Xu (University of Illinois at Urbana-Champaign) | Pre-print
14:00 | 15m | Talk | ATM: Black-box Test Case Minimization based on Test Code Similarity and Evolutionary Search | Technical Track | Rongqi Pan (University of Ottawa), Taher A. Ghaleb (University of Ottawa), Lionel Briand (University of Luxembourg; University of Ottawa)
14:15 | 15m | Talk | Measuring and Mitigating Gaps in Structural Testing | Technical Track | Soneya Binta Hossain (University of Virginia), Matthew B. Dwyer (University of Virginia), Sebastian Elbaum (University of Virginia), Anh Nguyen-Tuong (University of Virginia) | Pre-print
14:30 | 7m | Talk | FlaPy: Mining Flaky Python Tests at Scale | DEMO - Demonstrations | Pre-print
14:37 | 7m | Talk | Scalable and Accurate Test Case Prioritization in Continuous Integration Contexts | Journal-First Papers | Ahmadreza Saboor Yaraghi (University of Ottawa), Mojtaba Bagherzadeh (University of Ottawa), Nafiseh Kahani (Carleton University), Lionel Briand (University of Luxembourg; University of Ottawa)
14:45 | 7m | Talk | Flakify: A Black-Box, Language Model-based Predictor for Flaky Tests | Journal-First Papers | Sakina Fatima (University of Ottawa), Taher A. Ghaleb (University of Ottawa), Lionel Briand (University of Luxembourg; University of Ottawa)
14:52 | 7m | Talk | Developer-centric test amplification | Journal-First Papers | Pre-print
15:00 | 7m | Talk | How Developers Engineer Test Cases: An Observational Study | Journal-First Papers | Maurício Aniche (Delft University of Technology), Christoph Treude (University of Melbourne), Andy Zaidman (Delft University of Technology) | Pre-print