ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

This program is tentative and subject to change.

Wed 19 Nov 2025 16:10 - 16:20 at Grand Hall 3 - Requirement Engineering

Large language models (LLMs) have become essential tools in software development, widely used for requirements engineering, code generation, and review tasks. Software engineers often rely on LLMs to assess whether a code implementation satisfies its task requirements, thereby enhancing code robustness and accuracy. However, it remains unclear whether LLMs can reliably determine whether code complies fully with the given task descriptions, which are usually natural language specifications. In this paper, we uncover a systematic failure of LLMs in evaluating whether code aligns with natural language requirements. Specifically, using widely adopted benchmarks, we employ unified prompts to judge code correctness. Our results reveal that LLMs frequently misclassify correct code implementations as either "not satisfying requirements" or containing potential defects. Surprisingly, more complex prompting, especially prompt engineering techniques that elicit explanations and proposed corrections, leads to higher misjudgment rates, highlighting critical reliability issues in using LLMs as code review assistants. We further analyze the root causes of these misjudgments and propose two improved prompting strategies for mitigation. Our findings reveal, for the first time, unrecognized limitations of LLMs in matching code against requirements. We also offer novel insights and practical guidance for the effective use of LLMs in automated code review and task-oriented agent scenarios.
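
To make the judging setup the abstract describes concrete, the sketch below shows one plausible realization: an LLM is given a single unified prompt asking whether a code snippet satisfies a natural language requirement. This is a minimal illustration under stated assumptions; the prompt wording, the judge_code helper, and the gpt-4o model name are hypothetical and are not the paper's actual artifacts.

```python
# Hypothetical sketch of an LLM-as-judge setup: ask a model, via one unified
# prompt, whether a code snippet fully satisfies a natural language
# requirement. Prompt text and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are a code reviewer.
Requirement:
{requirement}

Candidate implementation:
{code}

Does the implementation fully satisfy the requirement?
Answer with exactly one word: YES or NO."""


def judge_code(requirement: str, code: str, model: str = "gpt-4o") -> bool:
    """Return True if the model judges the code to satisfy the requirement."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce sampling variance in the verdict
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(requirement=requirement, code=code),
        }],
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("YES")


# Example: a correct implementation that, per the paper's findings,
# an LLM judge may still misclassify as non-compliant.
requirement = "Write a function add(a, b) that returns the sum of two integers."
code = "def add(a, b):\n    return a + b"
print(judge_code(requirement, code))
```

Comparing such verdicts against ground-truth-correct implementations is one straightforward way to measure the misjudgment rates the abstract reports.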

Wed 19 Nov

Displayed time zone: Seoul

16:00 - 16:50
Requirement Engineering (NIER Track / Industry Showcase) at Grand Hall 3
16:00
10m
Talk
Envisioning Intelligent Requirements Engineering via Knowledge-Guided Multi-Agent Collaboration
NIER Track
Jiangping Huang (Chongqing University of Posts and Telecommunications), Dongming Jin (Peking University, China), Weisong Sun (Nanyang Technological University), Yang Liu (Nanyang Technological University), Zhi Jin (Peking University)
16:10
10m
Talk
Uncovering Systematic Failures of LLMs in Verifying Code Against Natural Language Specifications
NIER Track
Haolin Jin (The University of Sydney), Huaming Chen (The University of Sydney)
16:20
10m
Talk
Multi-Modal Requirements Data-based Acceptance Criteria Generation using LLMs
Industry Showcase
Fanyu Wang (Monash University), Chetan Arora (Monash University), Yonghui Liu (Australian National University), Kaicheng Huang (Monash University), Kla Tantithamthavorn (Monash University and Atlassian), Aldeida Aleti (Monash University), Dishan Sambathkumar (eSolutions, Monash University), David Lo (Singapore Management University)
16:30
10m
Talk
Detecting and Repairing Incomplete Software Requirements with Multi-LLM Ensembles
NIER Track
Mohamad Kassab (Boston University, USA), Marwan AbdElhameed (New York University Abu Dhabi)
16:40
10m
Talk
Linguistic Theories Coincide with Misformalization in Temporal Logic
NIER Track
Colin Gordon (Drexel University)