Is Historical Data an Appropriate Benchmark for Reviewer Recommendation Systems? A Case Study of the Gerrit Community
The discipline of Mining Software Repositories (MSR) transforms the passive archives of data that accrue during software development into active, value-generating solutions, such as recommendation systems. It is customary to evaluate these solutions using held-out historical data. While history-based evaluation makes pragmatic use of available data, historical records may be: (1) overly optimistic, since past recommendations may have been suboptimal choices for the task at hand; or (2) overly pessimistic, since "incorrect" recommendations may have been equally suitable (or better) choices.
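To make the evaluation setup concrete, the sketch below shows a minimal history-based scoring loop in Python: a recommendation counts as "correct" only if it matches the reviewer recorded in the repository history, which is exactly the assumption the paper questions. The Review class, field names, and sample data are illustrative assumptions, not artifacts from the paper.

```python
from dataclasses import dataclass

@dataclass
class Review:
    change_id: str
    actual_reviewer: str        # reviewer recorded in the repository history
    recommendations: list[str]  # ranked candidates from the recommender

def top_k_accuracy(reviews: list[Review], k: int = 3) -> float:
    """Fraction of reviews whose historical assignee appears in the
    top-k recommendations; every other candidate is counted as
    incorrect, even if they would have been a suitable reviewer."""
    hits = sum(r.actual_reviewer in r.recommendations[:k] for r in reviews)
    return hits / len(reviews)

# Toy data: the second review counts as a miss even if "carol" would
# have been an equally suitable reviewer in practice.
reviews = [
    Review("I1a2b", "alice", ["alice", "bob", "carol"]),
    Review("I3c4d", "dave",  ["carol", "erin", "frank"]),
]
print(top_k_accuracy(reviews, k=3))  # 0.5
```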
In this paper, we empirically evaluate the extent to which historical data is an appropriate benchmark for MSR solutions. As a concrete instance for experimentation, we use reviewer recommendation, which suggests community members to review change requests. We replicate the cHRev and WLRRec approaches and apply them to 9,679 reviews from the Gerrit open source community. We then assess the recommendations with members of the Gerrit reviewing community using quantitative methods (personalized questionnaires about their comfort level with the reviewing tasks) and qualitative methods (semi-structured interviews).
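For intuition about what such approaches compute, here is a hedged Python sketch of a comment-based expertise ranking in the spirit of cHRev, which scores candidate reviewers from their past review comments on the changed files, favouring frequent and recent commenters. The exact weighting, function names, and toy data below are illustrative assumptions, not cHRev's published model (which combines comment frequency, workdays, and recency), and WLRRec's workload-aware ranking is not shown.

```python
from collections import defaultdict
from datetime import date

def rank_reviewers(past_comments, changed_files, today, k=3):
    """Rank candidate reviewers for a new change.

    past_comments: iterable of (reviewer, file, comment_date) tuples
    mined from earlier reviews; changed_files: set of files touched by
    the new change. Each reviewer is scored by how often they commented
    on the changed files, discounted by how long ago they last did so."""
    score = defaultdict(float)
    last_seen = {}
    for reviewer, path, when in past_comments:
        if path in changed_files:
            score[reviewer] += 1.0
            last_seen[reviewer] = max(last_seen.get(reviewer, when), when)
    for reviewer in score:
        staleness_days = (today - last_seen[reviewer]).days
        score[reviewer] *= 1.0 / (1.0 + staleness_days / 365.0)  # recency decay
    return sorted(score, key=score.get, reverse=True)[:k]

# Toy usage: alice commented on a.c more recently than bob, so she ranks first.
past = [
    ("alice", "a.c", date(2021, 5, 1)),
    ("bob",   "a.c", date(2020, 1, 1)),
    ("alice", "b.c", date(2021, 6, 1)),
]
print(rank_reviewers(past, {"a.c"}, today=date(2021, 7, 1)))  # ['alice', 'bob']
```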
We find that history-based evaluation is far more pessimistic than optimistic in the Gerrit context. Indeed, while 86% of those who had been assigned to a review in the past felt that they were well suited to handle the review, 74% of those labelled as incorrect recommendations also felt that they would have been comfortable reviewing the changes. This indicates that, on the one hand, when solutions recommend the past assignee, they should indeed be considered correct. Yet, on the other hand, recommendations labelled as incorrect because they do not match the past assignee may have been correct as well.
Our results suggest that current (reviewer) recommendation evaluations do not always model the reality of software development. Future studies may benefit from looking beyond repository data to gain a clearer understanding of the practical value of historical data in repository mining solutions.
Tue 16 Nov (displayed time zone: Hobart)

11:00 - 12:00 | Empirical Studies (Industry Showcase / Research Papers / Tool Demonstrations) at Koala
Chair(s): Felipe Fronchetti (Virginia Commonwealth University)

11:00 | 20m Talk (Research Papers) | Is Historical Data an Appropriate Benchmark for Reviewer Recommendation Systems? A Case Study of the Gerrit Community
Ian X. Gauthier (McGill University), Maxime Lamothe (Polytechnique Montréal), Gunter Mussbacher (McGill University), Shane McIntosh (University of Waterloo)

11:20 | 20m Talk (Research Papers) | An Empirical Study of Bugs in WebAssembly Compilers
Alan Romano (University at Buffalo), Xinyue Liu (University at Buffalo, SUNY), Yonghwi Kwon (University of Virginia), Weihang Wang (University at Buffalo, SUNY)
11:40 | 10m Talk (Industry Showcase) | Improving Configurability of Unit-level Continuous Fuzzing: An Industrial Case Study with SAP HANA
Hanyoung Yoo (Handong Global University), Jingun Hong (SAP Labs), Lucas Bader (SAP Labs), Dongwon Hwang (SAP Labs), Shin Hong (Handong Global University)
11:50 | 5m Talk (Tool Demonstrations) | IncBL: Incremental Bug Localization
Zhou Yang (Singapore Management University), Jieke Shi (Singapore Management University), Shaowei Wang (University of Manitoba), David Lo (Singapore Management University)