ISSTA 2020
Sat 18 - Wed 22 July 2020
Tue 21 Jul 2020 10:50 - 11:10 at Zoom - MACHINE LEARNING II Chair(s): Baishakhi Ray

Programming errors that degrade the performance of systems are widespread, yet there is very little tool support for finding and diagnosing these bugs. We present a method and a tool based on differential performance analysis: we find inputs for which the performance varies widely, despite having the same size. To ensure that the differences in performance are robust (i.e., they also hold for large inputs), we compare the performance not of single inputs, but of classes of inputs, where each class contains similar inputs parameterized by their size. Thus, each class is represented by a performance function from input size to performance. Importantly, we also provide an explanation for why the performance differs, in a form that can be readily used to fix a performance bug.
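The idea of representing each input class by a performance function could be sketched as follows; this is an illustrative reconstruction, not the authors' implementation, and the linear/quadratic classes are invented for the example:

```python
# Sketch: represent each input class by a performance function fitted
# from (input size, running time) measurements, then compare classes
# by their fitted growth coefficients (differential performance).
import numpy as np

def fit_performance(sizes, times, degree=2):
    """Fit a polynomial performance function time = f(size).

    Returns coefficients from lowest to highest degree.
    """
    return np.polynomial.polynomial.polyfit(sizes, times, degree)

sizes = np.array([100.0, 200.0, 400.0, 800.0])
fast = fit_performance(sizes, 0.01 * sizes)       # linear-time class
slow = fit_performance(sizes, 1e-5 * sizes ** 2)  # quadratic-time class

# The classes have the same input sizes but diverge asymptotically:
# the quadratic coefficient differs, flagging differential performance.
print(fast[2], slow[2])
```

Comparing fitted functions rather than single measurements is what makes the reported differences robust for large inputs: a constant-factor fluctuation on one input would not change the growth coefficient of a whole class.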

The two main phases in our method are discovery with fuzzing and explanation with decision tree classifiers, each of which is supported by clustering. First, we propose an evolutionary fuzzing algorithm to generate inputs that characterize different performance functions. The unique challenge in this fuzzing task is that we need not just the input class with the worst performance, but a set of classes exhibiting differential performance. We use clustering to merge similar input classes, which significantly improves the efficiency of our fuzzer. Second, we explain the differential performance in terms of program inputs and internals (e.g., methods and conditions). We adapt discriminant learning approaches with clustering and decision trees to localize suspicious code regions.
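The explanation phase can be illustrated with a minimal sketch: cluster runs into performance classes, then train a decision tree on program-internal features to see which feature discriminates the classes. The branch counter `cond_A_hits` and all data here are hypothetical stand-ins, not features from the paper:

```python
# Sketch of discriminant learning with clustering and decision trees.
# Assumption: we logged, per run, how often a hypothetical branch
# `cond_A` was taken, plus the run's total running time.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# 50 runs rarely take the branch, 50 runs take it very often.
cond_A_hits = np.concatenate([rng.integers(0, 5, 50),
                              rng.integers(95, 100, 50)]).astype(float)
running_time = 0.1 * cond_A_hits + rng.normal(0.0, 0.5, 100)

# Phase 1: cluster runs into performance classes by running time.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(
    running_time.reshape(-1, 1))

# Phase 2: a shallow decision tree explains the classes in terms of
# internals; a near-perfect split on `cond_A_hits` localizes the
# suspicious condition.
tree = DecisionTreeClassifier(max_depth=1).fit(
    cond_A_hits.reshape(-1, 1), labels)
print(tree.score(cond_A_hits.reshape(-1, 1), labels))  # close to 1.0
```

A depth-limited tree is a natural choice here: its single split threshold reads directly as "runs that execute `cond_A` more than N times fall in the slow class," which is the kind of explanation a developer can act on.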

We applied our techniques to a set of micro-benchmarks and real-world machine learning libraries. On the micro-benchmarks, we show that our approach outperforms state-of-the-art fuzzers in finding inputs that characterize differential performance. In a set of case studies, we discover and explain multiple performance bugs in popular machine learning frameworks, for instance in implementations of logistic regression in scikit-learn. Four of these bugs, reported first in this paper, have since been fixed by the developers.

Tue 21 Jul
Times are displayed in time zone: Tijuana, Baja California.

10:50 - 11:50: MACHINE LEARNING II (Technical Papers) at Zoom
Chair(s): Baishakhi Ray (Columbia University, New York)

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

10:50 - 11:10
Detecting and Understanding Real-World Differential Performance Bugs in Machine Learning Libraries [Artifacts Available] [Artifacts Evaluated – Functional]
Technical Papers
Saeid Tizpaz-Niari (CU Boulder/UT El Paso), Pavol Cerny (TU Wien), Ashutosh Trivedi
Link to publication DOI Pre-print Media Attached
11:10 - 11:30
Higher Income, Larger Loan? Monotonicity Testing of Machine Learning Models
Technical Papers
Arnab Sharma (University of Paderborn), Heike Wehrheim (Paderborn University)
DOI Media Attached
11:30 - 11:50
Detecting Flaky Tests in Probabilistic and Machine Learning Applications
Technical Papers
Saikat Dutta (University of Illinois at Urbana-Champaign, USA), August Shi (The University of Texas at Austin), Rutvik Choudhary, Zhekun Zhang, Aryaman Jain, Sasa Misailovic (University of Illinois at Urbana-Champaign)
DOI Media Attached