Wed 12 Oct 2022 16:40 - 17:00 at Banquet A - Technical Session 18 - Testing II Chair(s): Darko Marinov

Mutation testing research has indicated that a major part of its application cost is due to the large number of low-utility mutants that it introduces. Although previous research has identified this issue, no effective solution to the problem has been proposed. Thus, it remains unclear how to mutate and test a given piece of code in a best-effort way, i.e., achieving a good trade-off between invested effort and test effectiveness. To this end, we propose Cerebro, a machine learning approach that statically selects subsuming mutants, i.e., the set of mutants that reside at the top of the subsumption hierarchy, based on the mutants’ surrounding code context. We evaluate Cerebro using 48 programs written in C and 10 programs written in Java, and demonstrate that it preserves the benefits of mutation testing while limiting application cost, i.e., it reduces all application cost factors, such as the number of equivalent mutants, mutant executions, and mutants requiring analysis. We demonstrate that Cerebro has strong inter-project prediction ability, significantly higher than two baselines: supervised learning on features proposed by the state of the art, and random mutant selection. More importantly, our results show that Cerebro’s selected mutants lead to strong tests that kill twice as many subsuming mutants as the baselines when the same number of mutants is selected. At the same time, Cerebro reduces the cost-related factors, selecting on average 68% fewer equivalent mutants and requiring 90% fewer test executions than the baselines.
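
Since the abstract's central notion is the subsumption hierarchy, a rough illustration may help: a mutant subsumes another when every test that kills the former also kills the latter, so the subsuming (dominator) mutants are those not strictly subsumed by any other killed mutant. The sketch below is a minimal, hypothetical Python computation of that dynamic hierarchy from a test-by-mutant kill matrix; it is not Cerebro itself, which predicts subsuming mutants statically from code context, without running any tests.

    def subsuming_mutants(kill_matrix):
        """Identify subsuming (dominator) mutants from a kill matrix.

        kill_matrix: dict mapping mutant id -> set of test ids that kill it.
        Mutant a subsumes mutant b when a is killed and every test killing a
        also kills b (i.e., kill_set(a) is a subset of kill_set(b)).
        Subsuming mutants are those not strictly subsumed by any other
        killed mutant.
        """
        subsuming = []
        for m, kills_m in kill_matrix.items():
            if not kills_m:
                continue  # unkilled (possibly equivalent) mutants are never subsuming
            strictly_subsumed = any(
                kills_other and kills_other < kills_m  # nonempty proper subset
                for other, kills_other in kill_matrix.items()
                if other != m
            )
            if not strictly_subsumed:
                subsuming.append(m)
        return subsuming

    # Hypothetical example: m2 is killed by a strict subset of the tests that
    # kill m1, so m2 subsumes m1; m3 is never killed and is therefore excluded.
    example = {"m1": {"t1", "t2", "t3"}, "m2": {"t2"}, "m3": set()}
    print(subsuming_mutants(example))  # ['m2']

Killing every mutant returned by this computation is enough to kill all killable mutants, which is why selecting (or, in Cerebro's case, statically predicting) subsuming mutants preserves test effectiveness at a fraction of the cost.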

Wed 12 Oct

Displayed time zone: Eastern Time (US & Canada)

16:00 - 18:00
Technical Session 18 - Testing II (Research Papers / Tool Demonstrations / Journal-first Papers) at Banquet A
Chair(s): Darko Marinov University of Illinois at Urbana-Champaign
16:00
10m
Demonstration
Shibboleth: Hybrid Patch Correctness Assessment in Automated Program Repair
Tool Demonstrations
Ali Ghanbari Iowa State University, Andrian Marcus University of Texas at Dallas
16:10
20m
Research paper
Auto Off-Target: Enabling Thorough and Scalable Testing for Complex Software Systems
Research Papers
Tomasz Kuchta Samsung Electronics, Bartosz Zator Samsung Electronics
DOI Pre-print
16:30
10m
Demonstration
Maktub: Lightweight Robot System Test Creation and Automation
Tool Demonstrations
Amr Moussa North Carolina State University, John-Paul Ore North Carolina State University
16:40
20m
Paper
Cerebro: Static Subsuming Mutant Selection
Journal-first Papers
Aayush Garg University of Luxembourg, Milos Ojdanic University of Luxembourg, Renzo Degiovanni SnT, University of Luxembourg, Thierry Titcheu Chekam SES S.A. & University of Luxembourg (SnT), Mike Papadakis University of Luxembourg, Luxembourg, Yves Le Traon University of Luxembourg, Luxembourg
Link to publication DOI
17:00
20m
Research paper
Natural Test Generation for Precise Testing of Question Answering Software (Virtual)
Research Papers
Qingchao Shen Tianjin University, Junjie Chen Tianjin University, Jie M. Zhang King's College London, Haoyu Wang College of Intelligence and Computing, Tianjin University, Shuang Liu Tianjin University, Menghan Tian College of Intelligence and Computing, Tianjin University
Pre-print
17:20
20m
Paper
GloBug: Using global data in Fault Localization (Virtual)
Journal-first Papers
Nima Miryeganeh University of Calgary, Sepehr Hashtroudi University of Calgary, Hadi Hemmati University of Calgary
Link to publication DOI
17:40
20m
Research paper
Selectively Combining Multiple Coverage Goals in Search-Based Unit Test Generation (Virtual)
Research Papers
Zhichao Zhou School of Information Science and Technology, ShanghaiTech University, Yuming Zhou Nanjing University, Chunrong Fang Nanjing University, Zhenyu Chen Nanjing University, Yutian Tang ShanghaiTech University
DOI Pre-print