Detecting and Understanding Real-World Differential Performance Bugs in Machine Learning Libraries
Programming errors that degrade the performance of systems are widespread, yet there is little tool support for finding and diagnosing these bugs. We present a method and a tool based on differential performance analysis: we find inputs for which the performance varies widely despite their having the same size. To ensure that the differences in performance are robust (i.e., that they also hold for large inputs), we compare the performance not of single inputs but of classes of inputs, where each class contains similar inputs parameterized by their size. Each class is thus represented by a performance function from input size to performance. Importantly, we also explain why the performance differs, in a form that can readily be used to fix a performance bug.
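The idea of characterizing an input class by a performance function (rather than a single measurement) can be illustrated with a minimal sketch. The code below is not the paper's tool: `workload`, `measure`, and `growth_exponent` are hypothetical names, and the toy workload (quadratic when a sentinel value is present, linear otherwise) stands in for a real library. Two same-size inputs give only two data points; fitting runtime against size over a whole class exposes the asymptotic difference.

```python
import math
import timeit

def workload(xs):
    # Toy subject under test: quadratic work when the sentinel 0 is
    # present, linear work otherwise -- two input classes with very
    # different cost for inputs of the same size.
    if 0 in xs:
        return sum(x * y for x in xs for y in xs)
    return sum(xs)

def measure(make_input, sizes, repeats=3):
    # Best-of-`repeats` runtime of `workload` at each input size.
    return [min(timeit.repeat(lambda: workload(make_input(n)),
                              number=1, repeat=repeats))
            for n in sizes]

def growth_exponent(times, sizes):
    # Empirical order of growth estimated from the two largest sizes:
    # exponent k such that time ~ size**k.
    return math.log(times[-1] / times[-2]) / math.log(sizes[-1] / sizes[-2])

sizes = [200, 400, 800]
slow_times = measure(lambda n: [0] + list(range(1, n)), sizes)  # sentinel class
fast_times = measure(lambda n: list(range(1, n + 1)), sizes)    # no sentinel
```

Comparing `growth_exponent(slow_times, sizes)` (close to 2) against the linear class is the kind of robust, size-parameterized contrast the abstract describes: the gap persists as inputs grow, so it is not a measurement artifact.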
Our method has two main phases, discovery with fuzzing and explanation with decision tree classifiers, each of which is supported by clustering. First, we propose an evolutionary fuzzing algorithm that generates inputs characterizing different performance functions. The unique challenge of this fuzzing task is that we need not only the input class with the worst performance but a set of classes exhibiting differential performance. We use clustering to merge similar input classes, which significantly improves the efficiency of our fuzzer. Second, we explain the differential performance in terms of program inputs and internals (e.g., methods and conditions). We adapt discriminant learning approaches, combining clustering and decision trees to localize suspicious code regions.
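The explanation phase can be sketched in miniature with a one-node decision tree (a stump) over counts of program internals. This is an illustration, not the paper's implementation: the feature names (`branch_A_hits`, `calls_to_fallback`) and the `best_split` helper are hypothetical, and the cluster labels (0 = fast, 1 = slow) stand in for the output of the clustering step. The feature whose threshold best separates the clusters points at the suspicious code region.

```python
def best_split(runs):
    """runs: list of (features: dict, cluster_label) pairs.
    Return (feature, threshold, accuracy) of the single-feature split
    that best separates the performance clusters."""
    best = (None, None, 0.0)
    for f in runs[0][0]:
        for t in sorted({r[0][f] for r in runs}):
            # Predict the slow cluster (label 1) when the count exceeds t.
            acc = sum((r[0][f] > t) == (r[1] == 1) for r in runs) / len(runs)
            acc = max(acc, 1 - acc)  # allow either polarity of the split
            best = max(best, (f, t, acc), key=lambda b: b[2])
    return best

# Per-run counts of internals, labeled by performance cluster.
runs = [
    ({"branch_A_hits": 5, "calls_to_fallback": 0}, 0),  # fast cluster
    ({"branch_A_hits": 6, "calls_to_fallback": 1}, 0),
    ({"branch_A_hits": 5, "calls_to_fallback": 9}, 1),  # slow cluster
    ({"branch_A_hits": 7, "calls_to_fallback": 8}, 1),
]
feature, threshold, acc = best_split(runs)
```

Here the stump singles out `calls_to_fallback` as the internal that discriminates the clusters, which is the shape of explanation a developer can act on: the slow runs are the ones repeatedly taking the fallback path.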
We applied our techniques to a set of micro-benchmarks and to real-world machine learning libraries. On the micro-benchmarks, we show that our approach outperforms state-of-the-art fuzzers in finding inputs that characterize differential performance. In a set of case studies, we discover and explain multiple performance bugs in popular machine learning frameworks, for instance in implementations of logistic regression in scikit-learn. Four of these bugs, reported first in this paper, have since been fixed by the developers.
Tue 21 Jul (displayed time zone: Tijuana, Baja California)
10:50 - 11:50
MACHINE LEARNING II (Technical Papers, at Zoom)
Chair(s): Baishakhi Ray (Columbia University, New York)
Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.
Detecting and Understanding Real-World Differential Performance Bugs in Machine Learning Libraries
Saeid Tizpaz-Niari (CU Boulder/UT El Paso), Pavol Cerny (TU Wien), Ashutosh Trivedi
Higher Income, Larger Loan? Monotonicity Testing of Machine Learning Models
Arnab Sharma (University of Paderborn), Heike Wehrheim (Paderborn University)
Detecting Flaky Tests in Probabilistic and Machine Learning Applications
Saikat Dutta (University of Illinois at Urbana-Champaign, USA), August Shi (The University of Texas at Austin), Rutvik Choudhary, Zhekun Zhang, Aryaman Jain, Sasa Misailovic (University of Illinois at Urbana-Champaign)