Practical Flaky Test Prediction using Common Code Evolution and Test History Data
Non-deterministically behaving test cases cause developers to lose trust in their regression test suites and to eventually ignore failures. Detecting flaky tests is therefore a crucial task in maintaining code quality, as it builds the necessary foundation for any form of systematic response to flakiness, such as test quarantining or automated debugging. Previous research has proposed various methods to detect flakiness, but when trying to deploy these in an industrial context, their reliance on instrumentation, test reruns, or language-specific artifacts proved prohibitive. In this paper, we therefore investigate the prediction of flaky tests without such requirements on the underlying programming language, CI, build, or test execution framework. Instead, we rely only on the most commonly available artifacts, namely the tests’ outcomes and durations, as well as basic information about the code evolution, to build predictive models capable of detecting flakiness. Furthermore, our approach does not require additional reruns, since it gathers this data from existing test executions. We trained several established classifiers on the suggested features and evaluated their performance on a large-scale industrial software system, from which we collected a data set of 100 flaky and 100 non-flaky test and code histories. The best model was able to achieve an F1-score of 95.5% using only three features: the tests’ flip rates, the number of changes to source files in the last 54 days, and the number of changed files in the most recent pull request.
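The flip rate mentioned in the abstract is commonly defined as the fraction of consecutive test executions whose verdict changed (pass to fail or fail to pass). The paper may use a slightly different formulation; the sketch below shows this common definition, with the function name and verdict encoding chosen for illustration only.

```python
def flip_rate(outcomes):
    """Fraction of consecutive executions whose verdict flipped.

    `outcomes` is a chronological list of verdicts, e.g. "pass"/"fail".
    A flip is any transition pass->fail or fail->pass. With fewer than
    two executions there are no transitions, so the rate is 0.0.
    """
    if len(outcomes) < 2:
        return 0.0
    flips = sum(a != b for a, b in zip(outcomes, outcomes[1:]))
    return flips / (len(outcomes) - 1)
```

For example, the history `["pass", "fail", "pass", "pass"]` has two flips across three transitions, giving a flip rate of 2/3, whereas a stable history of all passes yields 0.0. Such per-test values, combined with simple code-evolution counts, form the kind of lightweight feature vector the abstract describes.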
Wed 19 Apr (times in Dublin time zone)
14:00 - 15:40 | Session 15: Flaky Tests (Previous Editions / Research Papers) at Pearse suite. Chair(s): John Micco (VMware)

14:00, 20m, Talk: Evaluating Features for Machine Learning Detection of Order- and Non-Order-Dependent Flaky Tests (Previous Editions). Owain Parry (The University of Sheffield), Gregory Kapfhammer (Allegheny College), Michael Hilton (Carnegie Mellon University), Phil McMinn (University of Sheffield)

14:20, 20m, Talk: An Empirical Study of Flaky Tests in Python (Previous Editions). Martin Gruber (BMW Group, University of Passau), Stephan Lukasczyk (University of Passau), Florian Kroiß, Gordon Fraser (University of Passau)

14:40, 20m, Talk: A Survey on How Test Flakiness Affects Developers and What Support They Need To Address It (Previous Editions)

15:00, 20m, Talk: Practical Flaky Test Prediction using Common Code Evolution and Test History Data (Research Papers). Martin Gruber (BMW Group, University of Passau), Michael Heine (BMW Group; Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Programming Systems Group), Norbert Oster (Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Programming Systems Group), Michael Philippsen (Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Programming Systems Group), Gordon Fraser (University of Passau)

15:20, 20m, Talk: A Qualitative Study on the Sources, Impacts, and Mitigation Strategies of Flaky Tests (Previous Editions). Sarra Habchi (Ubisoft), Guillaume Haben (University of Luxembourg), Mike Papadakis (University of Luxembourg), Maxime Cordy (University of Luxembourg), Yves Le Traon (University of Luxembourg)