Scaling Code Pattern Inference with Interactive What-If Analysis
Programmers often need to search for similar code when detecting and fixing similar bugs. Prior active-learning approaches take only instance-level feedback, i.e., positive and negative method instances. This limitation increases the labeling burden when users try to control the generality and specificity of a desired code pattern.
We present a novel feedback-guided pattern inference approach, called SURF. To reduce users’ labeling effort, it actively guides users in assessing the implications of particular feature choices in the constructed pattern, and incorporates direct feature-level feedback. The key insight behind SURF is that users can effectively select appropriate features with the aid of impact analysis. SURF provides hints on the global distribution of how each feature is consistent with already-labeled positive and negative instances, and on how selecting a new feature can yield additional matching instances. Its what-if analysis contrasts how different feature choices include (or exclude) more instances in the rest of the population.
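To make the idea concrete, here is a minimal illustrative sketch of feature-level impact analysis and what-if analysis. This is not SURF’s actual implementation; the representation of an instance as a set of feature strings, and all names below, are assumptions made for illustration only.

```python
# Hypothetical sketch, NOT SURF's implementation: each code instance is
# modelled as a set of feature strings, and a pattern is a set of
# features that an instance must contain to match.

def impact_analysis(candidates, positives, negatives, unlabelled):
    """For each candidate feature, summarise how consistent it is with
    the labelled positives/negatives, and how many unlabelled instances
    it would still match (its global impact)."""
    report = {}
    for f in candidates:
        report[f] = {
            "pos_consistent": sum(f in inst for inst in positives),
            "neg_consistent": sum(f in inst for inst in negatives),
            "extra_matches": sum(f in inst for inst in unlabelled),
        }
    return report

def what_if(pattern, feature, unlabelled):
    """Contrast how many unlabelled instances match before and after
    adding one feature to the pattern (what-if analysis)."""
    before = [inst for inst in unlabelled if pattern <= inst]
    after = [inst for inst in before if feature in inst]
    return len(before), len(after)
```

A user deciding between two candidate features could compare their `extra_matches` counts and the before/after pair from `what_if` to judge whether a choice makes the pattern too general (includes negatives) or too specific (excludes desired matches).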
We performed a user study with 14 participants, using a two-treatment factorial crossover design. Participants were able to provide 30% more correct answers about different API usages in 20% less time. All participants found what-if analysis and impact analysis useful for pattern refinement. 79% of the participants were able to produce the correct, expected pattern with SURF’s feature-level guidance, as opposed to 43% of the participants when using the baseline with instance-level feedback only. SURF is the first approach to incorporate feature-level feedback with automated what-if analysis to empower users to control the generality (or specificity) of a desired code pattern.
Fri 19 Apr (displayed time zone: Lisbon)
14:00 - 15:30 | Evolution 5 (New Ideas and Emerging Results / Demonstrations / Research Track) at Glicínia Quartin. Chair(s): Martin Pinzger (Universität Klagenfurt)
14:00 | 15m Talk | Semantic GUI Scene Learning and Video Alignment for Detecting Duplicate Video-based Bug Reports (Research Track). Yanfu Yan (William & Mary), Nathan Cooper (William & Mary), Oscar Chaparro (William & Mary), Kevin Moran (University of Central Florida), Denys Poshyvanyk (William & Mary)
14:15 | 15m Talk | The Classics Never Go Out of Style: An Empirical Study of Downgrades from the Bazel Build Technology (Research Track). Pre-print
14:30 | 15m Talk | Scaling Code Pattern Inference with Interactive What-If Analysis (Research Track)
14:45 | 15m Talk | Context-Aware Name Recommendation for Field Renaming (Research Track). Chunhao Dong (Beijing Institute of Technology), Yanjie Jiang (Peking University), Nan Niu (University of Cincinnati), Yuxia Zhang (Beijing Institute of Technology), Hui Liu (Beijing Institute of Technology)
15:00 | 7m Talk | "Don’t Touch my Model!" Towards Managing Model History and Versions during Metamodel Evolution (New Ideas and Emerging Results). Marcel Homolka (Institute for Software Systems Engineering, Johannes Kepler University, Linz), Luciano Marchezan (Johannes Kepler University Linz), Wesley Assunção (North Carolina State University), Alexander Egyed (Johannes Kepler University Linz). Pre-print
15:07 | 7m Talk | Challenges in Empirically Testing Memory Persistency Models (New Ideas and Emerging Results). Vasileios Klimis (Queen Mary University of London), Alastair F. Donaldson (Imperial College London), Viktor Vafeiadis (MPI-SWS), John Wickerson (Imperial College London), Azalea Raad (Imperial College London)
15:14 | 7m Talk | AntiCopyPaster 2.0: Whitebox just-in-time code duplicates extraction (Demonstrations). Eman Abdullah AlOmar (Stevens Institute of Technology), Benjamin Knobloch (Stevens Institute of Technology), Thomas Kain (Stevens Institute of Technology), Christopher Kalish (Stevens Institute of Technology), Mohamed Wiem Mkaouer (University of Michigan - Flint), Ali Ouni (ETS Montreal, University of Quebec)