ICSME 2025
Sun 7 - Fri 12 September 2025 Auckland, New Zealand

Online user feedback, such as app reviews, can provide valuable insight into software product improvements, offering development teams a direct view of customer experiences, preferences, and pain points. Many studies have proposed promising methods to automatically prioritize online user feedback, helping development teams identify the most salient software issues to address. However, these methods may not take into account the accessibility-related needs of end users.

Our study addresses this limitation by developing a novel approach to analyze and prioritize app store reviews that discuss accessibility concerns. The approach evaluates seven distinct machine learning (ML) algorithms, as well as three state-of-the-art large language models (LLMs), all leveraging accessibility-relevant features of app reviews. Using a validated set of accessibility reviews, we assess the effectiveness of the proposed approach and compare its performance with a leading general-purpose prioritization tool.
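As a rough illustration only (not the paper's actual pipeline), the sketch below trains a single candidate ML classifier to assign a priority label to review text. The scikit-learn pipeline, TF-IDF features, and toy reviews/labels are all assumptions standing in for the accessibility-specific features and the seven algorithms the study describes.

```python
# Hypothetical sketch: one candidate classifier for review prioritization.
# TF-IDF text features stand in for the paper's accessibility-specific
# features; the reviews and gold labels below are illustrative only.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "The screen reader skips every button on the checkout page.",
    "The font size option is hard to find, but it works once enabled.",
    "Great app, love the new dark theme!",
]
priorities = ["high", "medium", "low"]  # hypothetical priority classes

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("model", LogisticRegression(max_iter=1000)),
])

clf.fit(reviews, priorities)
print(clf.predict(["VoiceOver cannot read the login form at all."]))
```

In practice, each of the seven algorithms (and the three LLMs) would be evaluated against the validated accessibility reviews, with F1-score computed per priority class, as the abstract's results suggest.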

The results show that our method surpasses the leading general-purpose tool in prioritizing accessibility reviews, achieving an F1-score of 83.6% compared with the prior study's 69.0%. Our approach also outperforms the existing method across all three priority classes, with the most notable improvement in identifying high-priority reviews, where we achieved a +59.8% increase in F1-score. We hope these findings inspire further research and innovation in this area and ultimately contribute to a more inclusive and accessible digital landscape for all users.

Thu 11 Sep

Displayed time zone: Auckland, Wellington

15:30 - 17:00
Session 11 - Human Factors 1
Journal First Track / Research Papers Track at Case Room 3 260-055
Chair(s): Gregorio Robles Universidad Rey Juan Carlos, Alexander Serebrenik Eindhoven University of Technology
15:30
15m
Characterizing the System Evolution That is Proposed After a Software Incident
Research Papers Track
Matt Pope Brigham Young University, Jonathan Sillito Brigham Young University
15:45
15m
Social Media Reactions to Open Source Promotions: AI-Powered GitHub Projects on Hacker News
Research Papers Track
Prachnachai Meakpaiboonwattana Mahidol University, Warittha Tarntong Mahidol University, Thai Mekratanavorakul Mahidol University, Chaiyong Rakhitwetsagul Mahidol University, Thailand, Pattaraporn Sangaroonsilp Mahidol University, Raula Gaikovina Kula The University of Osaka, Morakot Choetkiertikul Mahidol University, Thailand, Kenichi Matsumoto Nara Institute of Science and Technology, Thanwadee Sunetnanta Mahidol University
16:00
15m
Does Editing Improve Answer Quality on Stack Overflow? A Data-Driven Investigation
Research Papers Track
Saikat Mondal University of Saskatchewan, Chanchal K. Roy University of Saskatchewan
Pre-print
16:15
15m
Accessibility Rank: A Machine Learning Approach for Prioritizing Accessibility User Feedback
Journal First Track
Xiaoqi Chai Beihang University (Work conducted at The University of Auckland), James Tizard University of Auckland, Kelly Blincoe University of Auckland
16:30
15m
Don't Settle for the First! How Many GitHub Copilot Solutions Should You Check?
Journal First Track
Julian Oertel University of Rostock, Jil Klünder FHDW Hannover University of Applied Sciences, Regina Hebig University of Rostock
16:45
15m
Adoption of Automated Software Engineering Tools and Techniques in Thailand
Journal First Track
Chaiyong Rakhitwetsagul Mahidol University, Thailand, Jens Krinke University College London, Morakot Choetkiertikul Mahidol University, Thailand, Thanwadee Sunetnanta Mahidol University, Federica Sarro University College London