ICSE 2024
Fri 12 - Sun 21 April 2024, Lisbon, Portugal

Faced with over 100M open source projects, most empirical investigations need a more manageable subset. Over half of the research papers in leading venues filter projects by some measure of popularity, with explicit or implicit arguments that unpopular projects are not of interest, may not even represent "real" software projects, or are simply not worthy of study. However, such filtering can have enormous effects on study results precisely when the response or prediction of interest is related in any way to the filtering criteria. This paper illustrates the impact of this common practice on research outcomes, specifically how filtering GitHub projects based on their inherent characteristics affects the assessment of their popularity. Using a dataset of over 100,000 repositories, we fit multiple regression models of the number of stars (a commonly used proxy for popularity) on factors such as the number of commits, the duration of the project, the number of authors, and the number of core developers. Our control model included the entire dataset, while a second, filtered model considered only projects with ten or more authors. The results indicate that while certain repository characteristics consistently predict popularity, filtering significantly alters the relationships between these characteristics and the response: the number of commits was positively associated with popularity in the control sample but negatively associated in the filtered sample. These findings highlight the potential biases introduced by data filtering and emphasize the need for careful sample selection in empirical mining software repositories research. We recommend that empirical work either analyze complete datasets such as World of Code or employ stratified random sampling from a complete dataset to ensure that filtering does not bias the results.
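As a rough illustration of the comparison the abstract describes, the sketch below fits the same regression formula to a full dataset and to a filtered subset (ten or more authors) and contrasts the coefficients. The file name and column names (stars, commits, duration_days, authors, core_devs) are hypothetical placeholders, not the paper's actual variables.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical repository-level data; column names are illustrative only.
    repos = pd.read_csv("repositories.csv")  # stars, commits, duration_days, authors, core_devs

    formula = "stars ~ commits + duration_days + authors + core_devs"

    # Control model: the entire dataset.
    control = smf.ols(formula, data=repos).fit()

    # Filtered model: only projects with ten or more authors.
    filtered = smf.ols(formula, data=repos[repos["authors"] >= 10]).fit()

    # If the filter is related to the response, coefficient signs can flip
    # (e.g., commits positive in the control fit, negative in the filtered fit).
    print(pd.concat({"control": control.params, "filtered": filtered.params}, axis=1))

The recommended alternative, stratified random sampling from a complete dataset, could look roughly like the following; the strata boundaries here are likewise only an assumption for illustration.

    # Stratify by author count so small and large projects are both represented,
    # then draw a random sample within each stratum.
    repos["stratum"] = pd.cut(repos["authors"], bins=[0, 1, 5, 10, 50, float("inf")])
    sample = repos.groupby("stratum", observed=True).sample(frac=0.01, random_state=42)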

Tue 16 Apr

Displayed time zone: Lisbon

09:00 - 10:30
Session 1 - Keynote & MSR Studies (WSESE) at Eugénio de Andrade
Chair(s): Andreas Jedlitschka (Fraunhofer IESE)
09:00
15m
Welcome
WSESE
Sira Vegas (Universidad Politécnica de Madrid), Andreas Jedlitschka (Fraunhofer IESE)
09:15
45m
Keynote
Are we Getting Reliable Evidence? Methodology is Critical in Empirical Studies
WSESE
Natalia Juristo (Universidad Politécnica de Madrid)
10:00
15m
Talk
Lessons Learned from Mining the Hugging Face Repository
WSESE
Joel Castaño Fernández (Universitat Politècnica de Catalunya), Silverio Martínez-Fernández (UPC-BarcelonaTech), Xavier Franch (Universitat Politècnica de Catalunya)
10:15
15m
Talk
The Role of Data Filtering in Open Source Software Ranking and Selection
WSESE
Addi Malviya-Thakur (The University of Tennessee, Knoxville / Oak Ridge National Laboratory), Audris Mockus (The University of Tennessee, Knoxville / Vilnius University)