ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

This program is tentative and subject to change.

Wed 19 Nov 2025 14:20 - 14:30 at Grand Hall 3 - Web & Mobile Systems 2

Testing web forms is an essential activity for ensuring the quality of web applications: it typically involves evaluating the interactions between users and forms. Automated test-case generation remains a challenge for web-form testing because, due to the complex, multi-level structure of web pages, it can be difficult to automatically capture their inherent contextual information for inclusion in the tests. Large Language Models (LLMs) have shown great potential for contextual text generation, which motivated us to explore how they could generate automated tests for web forms by making use of the contextual information within form elements. To the best of our knowledge, no comparative study examining different LLMs has yet been reported for web-form-test generation. To address this gap in the literature, we conducted a comprehensive empirical study investigating the effectiveness of 11 LLMs on 146 web forms from 30 open-source Java web applications. In addition, we propose three HTML-structure-pruning methods to extract key contextual information. The experimental results show that different LLMs achieve different levels of testing effectiveness, with GPT-4, GLM-4, and Baichuan2 generating the best web-form tests. Compared with GPT-4, the other LLMs had difficulty generating appropriate tests for the web forms: their successfully-submitted rates (SSRs), that is, the proportions of the LLM-generated web-form tests that could be successfully inserted into the web forms and submitted, decreased by 9.10% to 74.15%. Our findings also show that, for all LLMs, more effective web-form tests were generated when the designed prompts included complete and clear contextual information about the web forms. Specifically, when using Parser-Processed HTML for Task Prompt (PH-P), the SSR averaged 70.63%, higher than the 60.21% for Raw HTML for Task Prompt (RH-P) and the 50.27% for LLM-Processed HTML for Task Prompt (LH-P). With RH-P, GPT-4's SSR was 98.86%, outperforming models such as LLaMa2 (7B), at 34.47%, and GLM-4V, at 0%. Similarly, with PH-P, GPT-4 reached an SSR of 99.54%, the highest among all models and prompt types. Finally, this paper also highlights strategies for selecting LLMs based on performance metrics, and for optimizing prompt design to improve the quality of the web-form tests.
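The core idea behind the PH-P setting (pruning raw HTML with a parser so that the prompt retains only form-relevant context) can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: it assumes Python with BeautifulSoup, and the function name prune_form_html and the retained attribute set are invented for the example.

# Minimal sketch of parser-based HTML pruning in the spirit of PH-P.
# NOT the paper's implementation: prune_form_html and KEEP_ATTRS are
# illustrative assumptions; only the general idea (keep <form> subtrees
# and the attributes that carry contextual meaning) comes from the abstract.
from bs4 import BeautifulSoup

KEEP_ATTRS = {"name", "id", "type", "placeholder", "value", "required"}

def prune_form_html(raw_html: str) -> str:
    """Return only the <form> subtrees, stripped of noise, for prompting."""
    soup = BeautifulSoup(raw_html, "html.parser")
    pruned = []
    for form in soup.find_all("form"):
        # Scripts and styles add tokens but no contextual information.
        for tag in form.find_all(["script", "style"]):
            tag.decompose()
        # Keep only attributes that describe what an input field means;
        # label text is preserved automatically, since only attributes are dropped.
        for tag in form.find_all(True):
            tag.attrs = {k: v for k, v in tag.attrs.items() if k in KEEP_ATTRS}
        pruned.append(str(form))
    return "\n".join(pruned)

# Usage: embed the pruned HTML in a task prompt, e.g.
# prompt = "Generate valid input values for this web form:\n" + prune_form_html(page_html)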


Wed 19 Nov

Displayed time zone: Seoul

14:00 - 15:30 Web & Mobile Systems 2 at Grand Hall 3
14:00
10m
Talk
Adaptive and accessible user interfaces for seniors through model-driven engineering
Journal-First Track
Shavindra Wickramathilaka Monash University, John Grundy Monash University, Kashumi Madampe Monash University, Australia, Omar Haggag Monash University, Australia
14:10
10m
Talk
AppBDS: LLM-Powered Description Synthesis for Sensitive Behaviors in Mobile Apps
Research Papers
Zichen Liu Arizona State University, Xusheng Xiao Arizona State University
14:20
10m
Talk
Large Language Models for Automated Web-Form-Test Generation: An Empirical Study
Journal-First Track
Tao Li Macau University of Science and Technology, Chenhui Cui Macau University of Science and Technology, Rubing Huang Macau University of Science and Technology (M.U.S.T.), Dave Towey University of Nottingham Ningbo China, Lei Ma The University of Tokyo & University of Alberta
14:30
10m
Talk
Beyond Static GUI Agent: Evolving LLM-based GUI Testing via Dynamic Memory
Research Papers
Mengzhuo Chen Institute of Software, Chinese Academy of Sciences, Zhe Liu Institute of Software, Chinese Academy of Sciences, Chunyang Chen TU Munich, Junjie Wang Institute of Software, Chinese Academy of Sciences, Yangguang Xue University of Chinese Academy of Sciences, Boyu Wu Institute of Software, Chinese Academy of Sciences, Yuekai Huang Institute of Software, Chinese Academy of Sciences, Libin Wu Institute of Software, Chinese Academy of Sciences, Qing Wang Institute of Software, Chinese Academy of Sciences
14:40
10m
Talk
Who's to Blame? Rethinking the Brittleness of Automated Web GUI Testing from a Pragmatic Perspective
Research Papers
Haonan Zhang University of Waterloo, Kundi Yao University of Waterloo, Zishuo Ding The Hong Kong University of Science and Technology (Guangzhou), Lizhi Liao Memorial University of Newfoundland, Weiyi Shang University of Waterloo
14:50
10m
Talk
LLM-Cure: LLM-based Competitor User Review Analysis for Feature Enhancement
Journal-First Track
Maram Assi Université du Québec à Montréal, Safwat Hassan University of Toronto, Ying Zou Queen's University, Kingston, Ontario
15:00
10m
Talk
MIMIC: Integrating Diverse Personality Traits for Better Game Testing Using Large Language Model
Research Papers
Yifei Chen McGill University, Sarra Habchi Cohere, Canada, Lili Wei McGill University
15:10
10m
Talk
Debun: Detecting Bundled JavaScript Libraries on Web using Property-Order Graphs
Research Papers
Seojin Kim North Carolina State University, Sungmin Park Korea University, Jihyeok Park Korea University
15:20
10m
Talk
GUIFuzz++: Unleashing Grey-box Fuzzing on Desktop Graphical User Interfacing Applications
Research Papers
Dillon Otto University of Utah, Tanner Rowlett University of Utah, Stefan Nagy University of Utah