ICSE 2023
Sun 14 - Sat 20 May 2023 Melbourne, Australia
Fri 19 May 2023 11:00 - 11:15 at Meeting Room 102 - Developers' forums Chair(s): Omar Haggag

The quality of the knowledge shared on Stack Overflow (SO) is crucial to supporting software developers with their programming problems. SO therefore allows its users to suggest edits that improve the quality of a post (i.e., a question or an answer). However, existing research shows that many suggested edits in SO are rejected because of undesired content or formatting, or because they violate edit guidelines. Such rejections frustrate and demotivate users who want to make good-quality edits. Our research therefore focuses on assisting SO users by suggesting how they can improve their edits. First, we manually investigate 764 edits (382 to questions + 382 to answers) that were rejected by rollbacks and produce a catalog of 19 rejection reasons. Second, we extract 15 text-based and user-based features to capture those rejection reasons. Third, we develop four machine learning models using those features. Our best-performing model predicts rejected edits with 69.1% precision, 71.2% recall, 70.1% F1-score, and 69.8% overall accuracy. Fourth, we introduce an online tool named EditEx that works with the SO edit system. EditEx assists users while they edit posts by suggesting potential causes of rejection. We recruit 20 participants to assess the effectiveness of EditEx: half of them (the treatment group) use EditEx and the other half (the control group) use the standard SO edit system to edit posts. According to our experiment, EditEx can help the standard SO edit system prevent 49% of rejected edits, including the commonly rejected ones, and it can prevent 12% of rejections even in free-form regular edits. The treatment group finds the potential rejection reasons identified by EditEx influential. Furthermore, the median workload of suggesting edits with EditEx is half that of the SO edit system.
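The pipeline described in the abstract (engineered text- and user-based features feeding a supervised classifier evaluated with precision, recall, and F1) can be illustrated with a minimal sketch. The feature names, data file, and choice of a Random Forest below are assumptions for illustration only, not the authors' implementation.

# Hypothetical sketch: predicting whether a suggested SO edit will be rejected.
# Feature names and the input file are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

# Each row describes one suggested edit: engineered features plus a 0/1 label.
edits = pd.read_csv("suggested_edits_features.csv")  # hypothetical dataset
feature_cols = ["edit_length_change", "code_ratio", "num_tag_changes",
                "editor_reputation", "editor_prior_accept_rate"]  # illustrative
X, y = edits[feature_cols], edits["rejected"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

# Report the same metrics the paper uses: precision, recall, F1.
p, r, f1, _ = precision_recall_fscore_support(
    y_test, clf.predict(X_test), average="binary")
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")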

Fri 19 May

Displayed time zone: Hobart

11:00 - 12:30
11:00
15m
Talk
Automatic prediction of rejected edits in Stack Overflow
Journal-First Papers
Saikat Mondal University of Saskatchewan, Gias Uddin University of Calgary, Canada, Chanchal K. Roy University of Saskatchewan
11:15
15m
Talk
Automated Summarization of Stack Overflow Posts
Technical Track
Bonan Kou Purdue University, Muhao Chen University of Southern California, Tianyi Zhang Purdue University
11:30
15m
Talk
Semi-Automatic, Inline and Collaborative Web Page Code Curations
Technical Track
Roy Rutishauser University of Zurich, André N. Meyer University of Zurich, Reid Holmes University of British Columbia, Thomas Fritz University of Zurich
11:45
15m
Talk
You Don’t Know Search: Helping Users Find Code by Automatically Evaluating Alternative Queries
SEIP - Software Engineering in Practice
Rijnard van Tonder Sourcegraph
12:00
7m
Talk
TECHSUMBOT: A Stack Overflow Answer Summarization Tool for Technical Query
DEMO - Demonstrations
Chengran Yang Singapore Management University, Bowen Xu Singapore Management University, Jiakun Liu Singapore Management University, David Lo Singapore Management University
12:07
8m
Talk
An empirical study of question discussions on Stack Overflow
Journal-First Papers
Wenhan Zhu University of Waterloo, Haoxiang Zhang Centre for Software Excellence at Huawei Canada, Ahmed E. Hassan Queen’s University, Michael W. Godfrey University of Waterloo, Canada
12:15
15m
Talk
Faster or Slower? Performance Mystery of Python Idioms Unveiled with Empirical Evidence
Technical Track
Zejun Zhang Australian National University, Zhenchang Xing, Xin Xia Huawei, Xiwei (Sherry) Xu CSIRO’s Data61, Liming Zhu CSIRO’s Data61, Qinghua Lu CSIRO’s Data61