Automating Differential Testing with Overapproximate Symbolic Execution
Before releasing updated software, developers typically run regression test suites to verify that the code changes behave as intended and do not introduce unintended behaviors. Regression test suites, however, are often incomplete and can miss relevant software behaviors. To mitigate this issue, we introduce a technique for detecting potentially unintended behavioral changes. Given two versions of the software, our technique first finds functions that reach modified code in either version. It then performs underconstrained symbolic execution on both the original and revised programs, starting from the identified functions. When the execution reaches changed code, the technique compares corresponding program states across versions and logs the differences. Because the resulting raw log can contain many redundant entries and a mixture of intended and unintended differences, our technique clusters, orders, and ranks the logged differences to help developers identify feasible, unintended ones. We evaluated our technique by (1) applying it to a large set of known regressions, (2) comparing our results with those of a state-of-the-art approach, and (3) applying it to refactored versions of a popular program, Redis. Our results are promising: for the known regressions, our technique automatically identified more than half of them, generated few false positives, ranked the true positives above the false positives in most cases, and reported more regressions than the state-of-the-art baseline; for the refactored Redis versions, it reported no regressions.
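To make the pipeline concrete, the following minimal Python sketch illustrates the last two steps described in the abstract: logging differences between corresponding program states at changed code, and clustering and ranking the resulting log. It is not the paper's implementation; the Difference record, the toy symbolic values, and the ranking heuristic (prioritizing clusters exposed by more distinct entry functions) are illustrative assumptions.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Difference:
        entry_fn: str    # function from which underconstrained execution started
        location: str    # changed-code location where the two states were compared
        variable: str    # state element (e.g., variable or return value) that differs
        old_val: object  # symbolic value in the original version
        new_val: object  # symbolic value in the revised version

    def diff_states(entry_fn, location, old_state, new_state):
        """Compare two corresponding program states (variable -> symbolic value)
        and log every shared element whose value differs across versions."""
        return [Difference(entry_fn, location, v, old_state[v], new_state[v])
                for v in sorted(old_state.keys() & new_state.keys())
                if old_state[v] != new_state[v]]

    def cluster_and_rank(diffs):
        """Group redundant log entries by (location, variable), then rank the
        clusters; here, by how many distinct entry functions expose them."""
        clusters = defaultdict(list)
        for d in diffs:
            clusters[(d.location, d.variable)].append(d)
        return sorted(clusters.values(),
                      key=lambda c: len({d.entry_fn for d in c}),
                      reverse=True)

    # Toy usage: two entry functions expose the same difference at foo.c:42.
    log = (diff_states("parse", "foo.c:42", {"x": "a+1", "y": "b"},
                                            {"x": "a+2", "y": "b"})
           + diff_states("scan", "foo.c:42", {"x": "a+1"}, {"x": "a+2"}))
    for cluster in cluster_and_rank(log):
        print(len(cluster), cluster[0].location, cluster[0].variable)

Ranking clusters rather than individual log entries mirrors the abstract's motivation: redundant entries collapse into a single report, so developers can inspect each distinct difference once.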
Wed 19 Apr (displayed time zone: Dublin)

11:00 - 12:30 | Session 13: Testing with Static Analysis Methods (Research Papers / Previous Editions) at Pearse suite
Chair(s): Paolo Arcaini, National Institute of Informatics

11:00 | 20m talk | Address-Aware Query Caching for Symbolic Execution (Previous Editions)
11:20 | 20m talk | Automating Differential Testing with Overapproximate Symbolic Execution (Previous Editions)
11:40 | 20m talk | Two Sparsification Strategies for Accelerating Demand-Driven Pointer Analysis (Research Papers)
12:00 | 20m talk | Model Generation For Java Frameworks (Research Papers): Linghui Luo (Amazon Web Services), Goran Piskachev (Amazon Web Services), Ranjith Krishnamurthy (Fraunhofer IEM), Julian Dolby (IBM Research), Eric Bodden (Heinz Nixdorf Institut, Paderborn University and Fraunhofer IEM), Martin Schäf (Amazon Web Services)