What Happens When We Fuzz? Investigating OSS-Fuzz Bug History
BACKGROUND: Software engineers must be vigilant in preventing and correcting vulnerabilities and other critical bugs. In service of this need, numerous tools and techniques have been developed to assist developers. Fuzzers, by autonomously generating inputs to test programs, promise to save time by detecting memory corruption, input-handling errors, exception cases, and other issues.
AIMS: The goal of this work is to empower developers to prioritize their quality assurance efforts by analyzing the history of bugs reported by OSS-Fuzz. Specifically, we examined what happens when a project adopts fuzzing as a quality assurance practice by measuring bug lifespans, learning opportunities, and bug types.
METHOD: We analyzed 44,102 reported issues made public by OSS-Fuzz prior to March 12, 2022. We traced the Git commit ranges reported by repeated fuzz testing back to the source code repositories to identify how long fuzzing bugs remained in the system, who fixed these bugs, and what types of problems fuzzers have historically found. We identified the bug-contributing commits to estimate when the bug-containing code was introduced, and measured the timeline from introduction to detection to fix.
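As a rough illustration of this mining step, the sketch below pairs a bug-introducing commit with its fixing commit and computes the lifespan in days from a local clone. This is not the authors' tooling; the repository path and commit hashes are hypothetical placeholders, and the bug-introducing commit is assumed to come from an SZZ-style analysis of the fixing change.

    # Sketch only: estimate a bug's lifespan from two commits in a local clone.
    # The repo path and hashes below are hypothetical placeholders.
    import subprocess
    from datetime import datetime

    def commit_date(repo: str, sha: str) -> datetime:
        """Return the author date of a commit as a timezone-aware datetime."""
        out = subprocess.run(
            ["git", "-C", repo, "show", "-s", "--format=%aI", sha],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return datetime.fromisoformat(out)  # %aI prints strict ISO 8601

    def lifespan_days(repo: str, introducing_sha: str, fixing_sha: str) -> int:
        """Days between the bug-introducing and bug-fixing commits."""
        return (commit_date(repo, fixing_sha) - commit_date(repo, introducing_sha)).days

    if __name__ == "__main__":
        repo = "/path/to/project-clone"   # hypothetical local clone
        introducing = "abc1234"           # e.g., from SZZ on the fixing diff
        fixing = "def5678"                # commit referenced by the OSS-Fuzz report
        print(lifespan_days(repo, introducing, fixing), "days")

The same pattern extends to the detection-to-fix interval by substituting the OSS-Fuzz report date for the introducing commit's date.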
RESULTS: We found that bugs detected by OSS-Fuzz have a median lifespan of 324 days, but that bugs, once detected, remain unaddressed for a median of only 2 days. Further, we found that of the 8,099 issues for which the author of the bug-introducing commit could be identified, fewer than half (45.9%) were fixed by the same author who introduced the bug.
CONCLUSIONS: The results show that fuzzing can have a positive impact on a project that takes advantage of it, enabling bugs to be addressed in a time frame conducive to fixing mistakes prior to a product release. However, the rate at which we find authors not correcting their own errors suggests that not all developers are benefiting from the learning opportunities provided by fuzzing feedback.
Mon 15 May (displayed time zone: Hobart)
14:20 - 15:15 | Understanding Defects | Registered Reports / Data and Tool Showcase Track / Technical Papers | Meeting Room 110 | Chair(s): Matteo Paltenghi (University of Stuttgart, Germany)
14:20 | Talk (12m) | What Happens When We Fuzz? Investigating OSS-Fuzz Bug History | Technical Papers | Brandon Keller (Rochester Institute of Technology), Benjamin S. Meyers (Rochester Institute of Technology), Andrew Meneely (Rochester Institute of Technology)
14:32 | Talk (12m) | An Empirical Study of High Performance Computing (HPC) Performance Bugs | Technical Papers | Md Abul Kalam Azad (University of Michigan - Dearborn), Nafees Iqbal (University of Michigan - Dearborn), Foyzul Hassan (University of Michigan - Dearborn), Probir Roy (University of Michigan - Dearborn) | Pre-print
14:44 | Talk (6m) | Semantically-enriched Jira Issue Tracking Data | Data and Tool Showcase Track | Themistoklis Diamantopoulos (Electrical and Computer Engineering Dept., Aristotle University of Thessaloniki), Dimitrios-Nikitas Nastos (Electrical and Computer Engineering Dept., Aristotle University of Thessaloniki), Andreas Symeonidis (Electrical and Computer Engineering Dept., Aristotle University of Thessaloniki) | Pre-print
14:50 | Talk (6m) | An exploratory study of bug introducing changes: what happens when bugs are introduced in open source software? | Registered Reports | Lukas Schulte (University of Passau), Anamaria Mojica-Hanke (University of Passau and Universidad de los Andes), Mario Linares-Vasquez (Universidad de los Andes), Steffen Herbold (University of Passau)
14:56 | Talk (6m) | HasBugs - Handpicked Haskell Bugs | Data and Tool Showcase Track
15:02 | Talk (6m) | An Empirical Study on the Performance of Individual Issue Label Prediction | Technical Papers