Explaining Software Bugs Leveraging Code Structures in Neural Machine Translation
Over the last five decades, there has been significant research on automatically finding or correcting software bugs. However, there has been little research on automatically explaining bugs to developers, which is an essential yet highly challenging task. In our technical research paper (accepted at ICSE 2023), we propose Bugsplainer, a transformer-based generative model that generates natural language explanations for software bugs by learning from a large corpus of bug-fix commits. Bugsplainer leverages structural information and buggy patterns from the source code to generate an explanation for a bug. In this paper, we discuss the \textit{available} and \textit{functional} artifacts produced by our work and provide the necessary details to download and verify them.