Background: Many software bug prediction models have been proposed and evaluated on a set of well-known benchmark datasets. We conducted pilot studies on the widely used benchmark datasets and observed common issues among them. Specifically, most existing benchmark datasets consist of randomly selected historical versions of software projects, which poses non-trivial threats to the validity of existing bug prediction studies, since real-world software projects often evolve continuously. Yet how to conduct software bug prediction in real-world continuous software development scenarios is not well studied.
Aims: In this paper, to bridge the gap between current software bug prediction practice and real-world continuous software development, we propose new approaches for conducting bug prediction in real-world continuous software development, covering model building, updating, and evaluation.
Method: For model building, we propose ConBuild, which leverages distributional characteristics of bug prediction data to guide the training version selection. For model updating, we propose ConUpdate, which leverages the evolution of distributional characteristics of bug prediction data between versions to guide the reuse or update of bug prediction models in continuous software development. For model evaluation, we propose ConEA, which leverages the evolution of buggy probability of files between versions to conduct effort-aware evaluation.
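As a generic illustration of effort-aware evaluation in an evolving-version setting (a hypothetical sketch only, not the paper's actual ConEA algorithm): files can be ranked by the change in their predicted buggy probability between consecutive versions, and one then measures how many truly buggy files are caught within a fixed inspection budget, e.g. 20% of total lines of code. All names and data below are illustrative assumptions.

```python
# Hypothetical sketch of effort-aware evaluation across versions.
# Not the paper's ConEA: a generic ranking-by-probability-change scheme.

def effort_aware_recall(files, budget_ratio=0.2):
    """files: list of dicts with keys
       'loc'    -- lines of code (inspection effort proxy),
       'p_prev' -- predicted buggy probability in the previous version,
       'p_curr' -- predicted buggy probability in the current version,
       'buggy'  -- whether the file is actually buggy (ground truth)."""
    # Rank files by the rise in predicted buggy probability between versions.
    ranked = sorted(files, key=lambda f: f["p_curr"] - f["p_prev"], reverse=True)
    # Inspection budget: a fraction of the total lines of code.
    budget = budget_ratio * sum(f["loc"] for f in files)
    spent, found = 0, 0
    for f in ranked:
        if spent + f["loc"] > budget:
            break  # budget exhausted
        spent += f["loc"]
        found += f["buggy"]
    total_buggy = sum(f["buggy"] for f in files)
    return found / total_buggy if total_buggy else 0.0

# Illustrative (fabricated) per-file data for two consecutive versions.
files = [
    {"loc": 100, "p_prev": 0.2, "p_curr": 0.9, "buggy": True},
    {"loc": 300, "p_prev": 0.5, "p_curr": 0.4, "buggy": False},
    {"loc": 150, "p_prev": 0.1, "p_curr": 0.7, "buggy": True},
    {"loc": 450, "p_prev": 0.3, "p_curr": 0.3, "buggy": False},
]
print(effort_aware_recall(files))  # → 0.5
```

Under a 20% LOC budget (200 of 1000 lines here), only the top-ranked file fits, catching one of the two buggy files.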
Results: Experiments on 120 continuous release versions spanning six large-scale open-source software systems show the practical value of our approaches.
Conclusions: This paper provides new insights and guidelines for conducting software bug prediction in the context of continuous software development.
Thu 14 Oct
15:30 - 16:00
Continuous Software Bug Prediction
Song Wang (York University), Junjie Wang (Institute of Software at Chinese Academy of Sciences), Jaechang Nam (Handong Global University), Nachiappan Nagappan (Facebook)