Productive Coverage: Improving the Actionability of Code Coverage
Code coverage is an intuitive and widely used test adequacy measure. Established coverage measures treat each test goal (e.g., statement or branch) as equally important, and coverage adequacy requires every test goal to be covered. This, however, is at odds with how code coverage is used in practice: simply visualizing uncovered code is not actionable, and developers have to manually reason about which uncovered code is critical to cover with tests and which code can safely be left untested. To make code coverage more actionable and to further improve coverage in our codebase, we developed Productive Coverage, a novel approach to code coverage that guides developers to uncovered code that should be covered by (unit) tests. Specifically, Productive Coverage identifies uncovered code that is similar to code that is already tested and/or frequently executed in production. We implemented and evaluated Productive Coverage for four programming languages (C++, Java, Go, and Python). Our evaluation shows that: (1) developer sentiment, measured at the point of use, is strongly positive; (2) Productive Coverage meaningfully increases test quality compared to a strong baseline; (3) Productive Coverage has no negative effect on code-authoring efficiency; (4) Productive Coverage modestly improves code-review efficiency; and (5) Productive Coverage improves code quality and prevents defects from being introduced into the code.
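The core idea in the abstract, ranking uncovered code by its similarity to code that is already tested or frequently executed in production, can be illustrated with a minimal sketch. The similarity measure (Jaccard overlap of identifier tokens) and all function names below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch: rank uncovered functions by similarity to
# reference functions that are tested or hot in production.
# The Jaccard-over-identifier-tokens measure is an assumption for
# illustration only; the paper's similarity function may differ.
import re


def tokens(source: str) -> set:
    """Extract identifier-like tokens from a function's source code."""
    return set(re.findall(r"[A-Za-z_]\w+", source))


def similarity(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two code snippets."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def rank_uncovered(uncovered: dict, reference: dict) -> list:
    """Rank uncovered functions by their best similarity to any
    tested or production-hot reference function, descending."""
    scores = {
        name: max(similarity(src, ref) for ref in reference.values())
        for name, src in uncovered.items()
    }
    return sorted(scores, key=scores.get, reverse=True)


# Toy example: parse_header closely resembles the tested parse_footer,
# so it is ranked ahead of the dissimilar log_banner.
uncovered = {
    "parse_header": "def parse_header(line): return line.split(':')",
    "log_banner": "def log_banner(): print('=' * 40)",
}
reference = {
    "parse_footer": "def parse_footer(line): return line.split(':')",
}
print(rank_uncovered(uncovered, reference))  # ['parse_header', 'log_banner']
```

The ranking, rather than a binary covered/uncovered view, is what makes the report actionable: a developer sees first the uncovered code that most resembles code the team already considered worth testing or that production traffic actually exercises.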
Wed 17 Apr (displayed time zone: Lisbon)
14:00 - 15:30 | Testing 2 (Research Track / Software Engineering Education and Training / Software Engineering in Practice / Demonstrations / Journal-first Papers) at Eugénio de Andrade. Chair(s): Jonathan Bell (Northeastern University)
14:00 (15m) Talk | Ripples of a Mutation — An Empirical Study of Propagation Effects in Mutation Testing (Research Track). Hang Du (University of California at Irvine), Vijay Krishna Palepu (Microsoft), James Jones (University of California at Irvine). DOI
14:15 (15m) Talk | Fast Deterministic Black-box Context-free Grammar Inference (Research Track). Mohammad Rifat Arefin (The University of Texas at Arlington), Suraj Shetiya (University of Texas at Arlington), Zili Wang (Iowa State University), Christoph Csallner (University of Texas at Arlington). Pre-print, Media Attached
14:30 (15m) Talk | Bridging Theory to Practice in Software Testing Teaching through Team-based Learning (TBL) and Open Source Software (OSS) Contribution (Software Engineering Education and Training)
14:45 (15m) Talk | Productive Coverage: Improving the Actionability of Code Coverage (Software Engineering in Practice). Marko Ivanković (Google; Universität Passau), Goran Petrović (Google Inc), Yana Kulizhskaya (Google Inc), Mateusz Lewko (Google Inc), Luka Kalinovčić (no affiliation), René Just (University of Washington), Gordon Fraser (University of Passau)
15:00 (15m) Talk | Taming Timeout Flakiness: An Empirical Study of SAP HANA (Software Engineering in Practice). Pre-print
15:15 (7m) Talk | Testing Abstractions for Cyber-Physical Control Systems (Journal-first Papers). Claudio Mandrioli (University of Luxembourg), Max Nyberg Carlsson (Lund University), Martina Maggio (Saarland University, Germany / Lund University, Sweden). Pre-print
15:22 (7m) Talk | FaultFuzz: A Coverage Guided Fault Injection Tool for Distributed Systems (Demonstrations). Wenhan Feng (Institute of Software, Chinese Academy of Sciences), Qiugen Pei (Joint Laboratory on Cyberspace Security, China Southern Power Grid), Yu Gao (Institute of Software, Chinese Academy of Sciences), Dong Wang (Institute of Software, Chinese Academy of Sciences), Wensheng Dou (Institute of Software, Chinese Academy of Sciences), Jun Wei (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences; University of Chinese Academy of Sciences Chongqing School), Zheheng Liang (Joint Laboratory on Cyberspace Security of China Southern Power Grid), Zhenyue Long (Joint Laboratory on Cyberspace Security, China Southern Power Grid). Pre-print