ICSME 2025
Sun 7 - Fri 12 September 2025, Auckland, New Zealand

High-quality answers on technical Q&A platforms such as Stack Overflow (SO) are crucial because they directly influence software development practices. Poor-quality answers can introduce inefficiencies, bugs, and security vulnerabilities, increasing maintenance costs and technical debt in production software. To improve content quality, SO supports collaborative editing, where users revise answers to enhance clarity, correctness, and formatting. Several studies have examined rejected edits and identified the reasons for rejection. However, prior research has not systematically assessed whether accepted edits improve key quality dimensions. While one study investigated the impact of edits on C/C++ vulnerabilities, broader quality aspects remain unexplored. In this study, we analyze 94,994 Python-related answers with at least one accepted edit to determine whether edits improve (1) semantic relevance, (2) code usability, (3) code complexity, (4) security vulnerabilities, (5) code optimization, and (6) readability. Our findings show both positive and negative effects of edits. While 53.3% of edits improve how well answers match their questions, 38.1% make them less relevant. Some previously broken code (9%) becomes executable, yet some working code (14.7%) becomes non-parsable after edits. Many edits increase complexity (32.3%), making code harder to maintain. Instead of fixing security issues, 20.5% of edits introduce additional ones. Even though 51.0% of edits optimize performance, execution time still increases overall. Readability also suffers, as 49.7% of edits make code harder to read. This study highlights the inconsistencies in editing outcomes and provides insights into how edits affect software maintainability, security, and efficiency, which can caution users and moderators and inform future improvements to collaborative editing systems.
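As a minimal illustration of the kind of check such an analysis could apply to the code-usability dimension (a sketch only; the use of Python's ast module and the example snippets are assumptions for illustration, not the authors' actual pipeline), one could test whether an answer's code block still parses after an accepted edit:

import ast

def is_parsable(snippet: str) -> bool:
    # Return True if the snippet is valid Python source, False on a SyntaxError.
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False

# Hypothetical before/after code blocks extracted from one answer revision.
before = "for i in range(10):\n    print(i)\n"
after = "for i in range(10)\n    print(i)\n"   # the edit dropped the colon

print(is_parsable(before))  # True
print(is_parsable(after))   # False: this accepted edit broke parsability

Executability could be probed in a similar way by running each snippet in a sandbox, although that additionally requires resolving imports and inputs and is omitted from this sketch.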

Thu 11 Sep

Displayed time zone: Auckland, Wellington

15:30 - 17:00
Session 11 - Human Factors 1 (Journal First Track / Research Papers Track) at Case Room 3 260-055
Chair(s): Gregorio Robles Universidad Rey Juan Carlos, Alexander Serebrenik Eindhoven University of Technology
15:30
15m
Characterizing the System Evolution That is Proposed After a Software Incident
Research Papers Track
Matt Pope Brigham Young University, Jonathan Sillito Brigham Young University
15:45
15m
Social Media Reactions to Open Source Promotions: AI-Powered GitHub Projects on Hacker News
Research Papers Track
Prachnachai Meakpaiboonwattana Mahidol University, Warittha Tarntong Mahidol University, Thai Mekratanavorakul Mahidol University, Chaiyong Rakhitwetsagul Mahidol University, Thailand, Pattaraporn Sangaroonsilp Mahidol University, Raula Gaikovina Kula The University of Osaka, Morakot Choetkiertikul Mahidol University, Thailand, Kenichi Matsumoto Nara Institute of Science and Technology, Thanwadee Sunetnanta Mahidol University
16:00
15m
Does Editing Improve Answer Quality on Stack Overflow? A Data-Driven Investigation
Research Papers Track
Saikat Mondal University of Saskatchewan, Chanchal K. Roy University of Saskatchewan
Pre-print
16:15
15m
Accessibility Rank: A Machine Learning Approach for Prioritizing Accessibility User Feedback
Journal First Track
Xiaoqi Chai Beihang University (Work conducted at The University of Auckland), James Tizard University of Auckland, Kelly Blincoe University of Auckland
16:30
15m
Don't Settle for the First! How Many GitHub Copilot Solutions Should You Check?
Journal First Track
Julian Oertel University of Rostock, Jil Klünder University of Applied Sciences | FHDW Hannover, Regina Hebig Universität Rostock, Rostock, Germany
16:45
15m
Adoption of Automated Software Engineering Tools and Techniques in Thailand
Journal First Track
Chaiyong Rakhitwetsagul Mahidol University, Thailand, Jens Krinke University College London, Morakot Choetkiertikul Mahidol University, Thailand, Thanwadee Sunetnanta Mahidol University, Federica Sarro University College London