
This program is tentative and subject to change.

Fri 2 May 2025 14:00 - 14:15 at 205 - Testing and QA 5 Chair(s): Giovanni Denaro

Mutation testing was proposed to identify weaknesses in test suites by repeatedly generating artificially faulty versions of the software (i.e., mutants) and determining whether the test suite is sufficient to detect them (i.e., kill them). When the tests are insufficient, each surviving mutant provides an opportunity to improve the test suite. We conducted a study and found that many such surviving mutants (up to 84% for the subjects of our study) are detectable by simply augmenting existing tests with additional assertions, i.e., assertion amplification. Moreover, we found that many of these mutants are detectable by multiple existing tests, giving developers options for how to detect them. To help with these challenges, we created a technique that performs memory-state analysis to identify candidate assertions that developers can use to detect the surviving mutants. Additionally, we build upon prior research that identifies "crossfiring" opportunities, i.e., tests that coincidentally kill multiple mutants. To this end, we developed a theoretical model that describes the varying granularities at which crossfiring can occur in an existing test suite, which provides opportunities and options for how to kill surviving mutants. We operationalize this model in an accompanying technique that optimizes the assertion amplification of existing tests to crossfire multiple mutants with fewer added assertions, optionally concentrated within fewer tests. Our experiments show that we can kill all surviving mutants detectable with existing test data using only 1.1% of the identified assertion candidates, and that crossfiring increases, by a factor of six on average, the number of mutants killed per amplified test compared with amplified tests that do not crossfire.
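The core idea in the abstract can be sketched in a few lines. Below is a minimal, hypothetical illustration (not the paper's actual subjects, tooling, or assertion-selection algorithm): a weak test asserts on only one field, so a statement-deletion mutant survives; adding one assertion on another part of the memory state, with the same test inputs, kills it.

```python
# Hypothetical example of a surviving mutant killed by assertion amplification.
class Counter:
    def __init__(self):
        self.count = 0
        self.history = []

    def increment(self, n=1):
        self.count += n
        self.history.append(n)  # the mutant below deletes this statement

class CounterMutant(Counter):
    """A statement-deletion mutant: the history update is removed."""
    def increment(self, n=1):
        self.count += n  # self.history.append(n) deleted

def weak_test(cls):
    """Original test: asserts only on `count`, so the mutant survives."""
    c = cls()
    c.increment(2)
    return c.count == 2

def amplified_test(cls):
    """Assertion-amplified test: one added assertion on `history`
    (infected memory state) kills the mutant with the same inputs."""
    c = cls()
    c.increment(2)
    return c.count == 2 and c.history == [2]

assert weak_test(Counter) and weak_test(CounterMutant)            # mutant survives
assert amplified_test(Counter) and not amplified_test(CounterMutant)  # mutant killed
```

In the paper's terms, the mutant's infection (the wrong `history`) propagates to observable state reached by an existing test; the technique's memory-state analysis proposes candidate assertions like the one added here, and crossfiring would reuse one added assertion to kill several mutants at once.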

Fri 2 May

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
Testing and QA 5
Research Track / Journal-first Papers / New Ideas and Emerging Results (NIER) / Demonstrations at 205
Chair(s): Giovanni Denaro University of Milano - Bicocca
14:00
15m
Talk
Leveraging Propagated Infection to Crossfire Mutants (Artifact Functional, Artifact Available)
Research Track
Hang Du University of California at Irvine, Vijay Krishna Palepu Microsoft, James Jones University of California at Irvine
14:15
15m
Talk
IFSE: Taming Closed-box Functions in Symbolic Execution via Fuzz Solving
Demonstrations
Qichang Wang East China Normal University, Chuyang Chen The Ohio State University, Ruiyang Xu East China Normal University, Haiying Sun East China Normal University, Chengcheng Wan East China Normal University, Ting Su East China Normal University, Yueling Zhang East China Normal University, Geguang Pu East China Normal University, China
14:30
15m
Talk
Takuan: Using Dynamic Invariants To Debug Order-Dependent Flaky Tests
New Ideas and Emerging Results (NIER)
Nate Levin Yorktown High School, Chengpeng Li University of Texas at Austin, Yule Zhang George Mason University, August Shi The University of Texas at Austin, Wing Lam George Mason University
14:45
15m
Talk
Vision Transformer Inspired Automated Vulnerability Repair (Security)
Journal-first Papers
Michael Fu The University of Melbourne, Van Nguyen Monash University, Kla Tantithamthavorn Monash University, Dinh Phung Monash University, Australia, Trung Le Monash University, Australia
15:00
15m
Talk
ZigZagFuzz: Interleaved Fuzzing of Program Options and Files
Journal-first Papers
Ahcheong Lee KAIST, Youngseok Choi KAIST, Shin Hong Chungbuk National University, Yunho Kim Hanyang University, Kyutae Cho LIG Nex1 AI R&D, Moonzoo Kim KAIST / VPlusLab Inc.
15:15
15m
Talk
Reducing the Length of Field-replay Based Load Testing
Journal-first Papers
Yuanjie Xia University of Waterloo, Lizhi Liao Memorial University of Newfoundland, Jinfu Chen Wuhan University, Heng Li Polytechnique Montréal, Weiyi Shang University of Waterloo