Detection and Elimination of Systematic Labeling Bias in Code Reviewer Recommendation Systems
Selecting appropriate reviewers is crucial for effective modern code review. Several techniques exist for recommending reviewers for a given pull request (PR). Most code reviewer recommendation techniques in the literature build and evaluate their models on datasets collected from real open-source or industrial projects. These techniques invariably presume that the datasets reliably represent the "ground truth." In a classification problem, ground truth refers to the objectively correct class labels used to build a model from a dataset or to evaluate a model's performance. In a project dataset used to build a code reviewer recommendation system, the reviewer picked for a PR is usually assumed to be the best reviewer for that PR. In practice, however, the picked reviewer may not be the best possible reviewer, or even a qualified one. Recent code reviewer recommendation studies suggest that such datasets tend to suffer from systematic labeling bias, making the ground truth unreliable. Models and recommendation systems built on these datasets may therefore perform poorly in real practice. In this study, we introduce a novel approach to automatically detect and eliminate systematic labeling bias in code reviewer recommendation systems. The bias we remove stems from selecting reviewers who do not ensure a permanently successful fix for a bug-related PR. To demonstrate the effectiveness of our approach, we evaluated it on two open-source project datasets (HIVE and QT Creator) with five code reviewer recommendation techniques (Profile-Based, RSTrace, Naive Bayes, k-NN, and Decision Tree). Our debiasing approach appears promising: it improved the Mean Reciprocal Rank (MRR) of the evaluated techniques by up to 26% on the datasets used.
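To make the two ideas in the abstract concrete, below is a minimal Python sketch of (a) filtering out ground-truth labels whose reviewer did not ensure a lasting bug fix and (b) computing the Mean Reciprocal Rank used to compare techniques. The ReviewRecord fields and the bug_reopened criterion are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
# Illustrative sketch only: the record fields (reviewer, bug_reopened) and the
# filtering rule are assumptions made for this example, not the paper's code.
from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class ReviewRecord:
    pr_id: str
    reviewer: str        # reviewer recorded as the "ground truth" label
    bug_reopened: bool   # True if the bug resurfaced after this reviewer's fix


def remove_biased_labels(records: List[ReviewRecord]) -> List[ReviewRecord]:
    """Drop labels where the chosen reviewer did not ensure a lasting fix."""
    return [r for r in records if not r.bug_reopened]


def mean_reciprocal_rank(rankings: Sequence[Sequence[str]],
                         truths: Sequence[str]) -> float:
    """Standard MRR: mean of 1/rank of the correct reviewer in each list
    (a list that misses the correct reviewer contributes 0)."""
    total = 0.0
    for ranking, truth in zip(rankings, truths):
        for rank, candidate in enumerate(ranking, start=1):
            if candidate == truth:
                total += 1.0 / rank
                break
    return total / len(rankings)


if __name__ == "__main__":
    data = [
        ReviewRecord("PR-1", "alice", bug_reopened=False),
        ReviewRecord("PR-2", "bob", bug_reopened=True),  # biased label, dropped
    ]
    clean = remove_biased_labels(data)  # keeps only PR-1
    print(mean_reciprocal_rank([["carol", "alice"]], ["alice"]))  # prints 0.5
```

MRR rewards techniques that place the correct reviewer near the top of the recommendation list, which is why training and evaluating on a cleaner ground truth can raise it.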
K. Ayberk Tecimer (Technical University of Munich), Eray Tüzün (Bilkent University), Hamdi Dibeklioğlu (Bilkent University), Hakan Erdogmus (Carnegie Mellon University). EASE 2021, Full Paper. Pre-print available.