What Do Users Ask in Open-Source AI Repositories? An Empirical Study of GitHub Issues
Artificial intelligence (AI) systems, which benefit from the availability of large-scale datasets and increasing computational power, have become effective solutions to various critical tasks, such as natural language understanding, speech recognition, and image processing. The advancement of these AI systems is inseparable from open-source software (OSS). Specifically, many benchmarks, implementations, and frameworks for constructing AI systems are made open source and accessible to the general public, allowing researchers and practitioners to reproduce the reported results and broaden the application of AI systems. The development of AI systems follows a data-driven paradigm and is sensitive to hyperparameter settings and data separation. As a result, developers may encounter unique problems when employing open-source AI repositories.
This paper presents an empirical study that investigates the issues reported in open-source AI repositories to help developers understand the problems that arise when employing AI systems. We collect 576 repositories from the PapersWithCode platform. Among these repositories, we find 24,953 issues by utilizing GitHub REST APIs. Our empirical study includes three phases. First, we manually analyze these issues to categorize the problems that developers are likely to encounter in open-source AI repositories. Specifically, we provide a taxonomy of 13 categories related to AI systems. The two most common issues are runtime errors (23.18%) and unclear instructions (19.53%). Second, we find that 67.5% of issues are closed and that half of these issues are resolved within four days. Moreover, issue management features, i.e., labeling and assigning, are not widely adopted in open-source AI repositories. In particular, only 7.81% and 5.9% of repositories label issues and assign issues to assignees, respectively. Finally, we empirically show that employing GitHub issue management features and writing issues with detailed descriptions facilitate the resolution of issues. Based on our findings, we make recommendations to help developers better manage the issues of open-source AI repositories and improve their quality.
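The study's first step, collecting issues from repositories via GitHub REST APIs, can be sketched as follows. This is not the authors' actual collection script; it is a minimal illustration using the public GitHub issues endpoint, with placeholder repository names. Note that GitHub's issues endpoint also returns pull requests, which must be filtered out when counting true issues.

```python
# Sketch (assumed, not the paper's code): paging through a repository's
# issues with the public GitHub REST API using only the standard library.
import json
import urllib.request

API = "https://api.github.com/repos/{owner}/{repo}/issues"

def is_pull_request(item):
    """GitHub's issues endpoint mixes in pull requests; they are
    distinguished by the presence of a 'pull_request' key."""
    return "pull_request" in item

def fetch_issues(owner, repo, state="all", per_page=100, max_pages=10):
    """Collect up to max_pages pages of issues (open and closed),
    excluding pull requests."""
    issues = []
    for page in range(1, max_pages + 1):
        url = (API.format(owner=owner, repo=repo)
               + f"?state={state}&per_page={per_page}&page={page}")
        with urllib.request.urlopen(url) as resp:
            batch = json.load(resp)
        if not batch:  # empty page means we have paged past the end
            break
        issues.extend(item for item in batch if not is_pull_request(item))
    return issues
```

Attributes such as `state`, `labels`, `assignees`, `created_at`, and `closed_at` on each returned issue object are what make the paper's closure-rate, resolution-time, and label/assignee-adoption analyses possible.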
Session: Evaluating Software Documentation Quality (Mon 15 May, 11:50 - 12:35, Hobart time zone)
- What Do Users Ask in Open-Source AI Repositories? An Empirical Study of GitHub Issues
- PICASO: Enhancing API Recommendations with Relevant Stack Overflow Posts
- GIRT-Data: Sampling GitHub Issue Report Templates (Data and Tool Showcase Track; Nafiseh Nikehgbal, Sharif University of Technology; Amir Hossein Kargaran, LMU Munich; Abbas Heydarnoori, Bowling Green State University; Hinrich Schütze, LMU Munich; pre-print available)