Extracting Formal Specifications from Documents Using LLMs for Test Automation
This program is tentative and subject to change.
Test automation plays a crucial role in ensuring software security. It relies heavily on formal specifications to validate the correctness of system behavior. However, these formal specifications are mainly defined through manual analysis of software documents, which demands significant engineering effort from experienced researchers and engineers. Meanwhile, system updates further increase the human labor required to maintain the corresponding formal specifications, making manual analysis a time-consuming and error-prone task.
Recent advances in Large Language Models (LLMs) have demonstrated promising capabilities in natural language understanding. Yet, the feasibility of using LLMs to automate the extraction of formal specifications from documents remains unexplored. We conduct an empirical study by constructing a comprehensive dataset comprising 603 specifications from 37 documents across three representative open-source software systems. We then evaluate the capabilities of the most recent LLMs, including GPT-4o, Claude, and Llama, in extracting formal specifications from documents in an end-to-end fashion.
Our study demonstrates the feasibility of applying LLMs to formal specification extraction while identifying two major limitations: specification oversimplification and specification fabrication. We attribute these deficiencies to the LLMs' inherent limitations in processing and expressive capabilities, as well as their tendency to fabricate fictional information. Inspired by human cognitive processes, we propose a novel two-stage method, annotation-then-conversion, to address these challenges. Our method decomposes the task into sentence annotation and temporal logic conversion, reducing the demands on the LLMs' processing and expressive capabilities in each subtask. Furthermore, by generating verifiable sentence-specification pairs, our method enables effective fact-checking, thereby mitigating hallucination effects. Our method achieves significant improvements over the end-to-end approach, with a 29.2% increase in the number of correctly extracted specifications and a 14.0% improvement in average accuracy. In particular, our best-performing LLM achieves an accuracy of 71.6%.
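The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `llm_annotate` and `llm_convert` functions are hypothetical stubs standing in for real LLM prompts (e.g., to GPT-4o), the keyword heuristic and the example LTL formula are placeholders, and the fact-check simply verifies that each annotated sentence occurs verbatim in the source document.

```python
import re

def llm_annotate(sentence: str) -> bool:
    """Stage 1 stub: sentence annotation.
    A real system would prompt an LLM to decide whether the sentence
    describes temporal behavior; here a keyword heuristic stands in."""
    return bool(re.search(r"\b(after|before|until|whenever|eventually)\b",
                          sentence, re.IGNORECASE))

def llm_convert(sentence: str) -> str:
    """Stage 2 stub: temporal logic conversion.
    A real system would prompt an LLM to emit an LTL formula; this
    placeholder always returns a fixed request/response property."""
    return "G(request -> F response)"

def extract_specs(document: str) -> list[tuple[str, str]]:
    """Run annotation-then-conversion over a document and keep only
    verifiable sentence-specification pairs."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    pairs = []
    for sentence in sentences:
        if llm_annotate(sentence):            # stage 1: annotate candidate sentences
            spec = llm_convert(sentence)      # stage 2: convert to temporal logic
            if sentence in document:          # fact-check: the sentence must exist
                pairs.append((sentence, spec))
    return pairs

doc = ("The server logs each request. "
       "After a request is received, a response is eventually sent.")
for sentence, spec in extract_specs(doc):
    print(sentence, "->", spec)
```

Because every output is a sentence-specification pair, a checker can reject any specification whose source sentence does not appear in the document, which is how the method mitigates fabrication.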
Sun 27 Apr | Displayed time zone: Eastern Time (US & Canada)
16:00 - 17:30 | Summarisation, Natural Language Generation | Research Track / Early Research Achievements (ERA) / Replications and Negative Results (RENE) | Room 205

16:00 | 10m Talk | Optimizing Datasets for Code Summarization: Is Code-Comment Coherence Enough? | Research Track | Antonio Vitale (Politecnico di Torino, University of Molise), Antonio Mastropaolo (William and Mary, USA), Rocco Oliveto (University of Molise), Massimiliano Di Penta (University of Sannio, Italy), Simone Scalabrino (University of Molise)

16:10 | 10m Talk | CMDeSum: A Cross-Modal Deliberation Network for Code Summarization | Research Track | Zhifang Liao (Central South University), Xiaoyu Liu (Central South University), Peng Lan (School of Computer Science and Engineering, Central South University, Changsha, China), Song Yu (Central South University), Pei Liu (Monash University)

16:20 | 10m Talk | CLCoSum: Curriculum Learning-based Code Summarization for Code Language Models | Research Track | Hongkui He (South China University of Technology), Jiexin Wang (South China University of Technology), Liuwen Cao (South China University of Technology), Yi Cai (School of Software Engineering, South China University of Technology, Guangzhou, China)

16:30 | 10m Talk | DLCoG: A Novel Framework for Dual-Level Code Comment Generation based on Semantic Segmentation and In-Context Learning | Research Track | Zhang Zhiyang, Haiyang Yang (School of Computer Science and Engineering, Central South University), Qingyang Yan (Central South University), Hao Yan (Central South University), Wei-Huan Min (Central South University), Zhao Wei (Tencent), Li Kuang (Central South University), Yingjie Xia (Hangzhou Dianzi University)

16:40 | 10m Talk | Explaining GitHub Actions Failures with Large Language Models: Challenges, Insights, and Limitations | Research Track | Pablo Valenzuela-Toledo (University of Bern, Universidad de La Frontera), Chuyue Wu (University of Bern), Sandro Hernández (University of Bern), Alexander Boll (University of Bern), Roman Machacek (University of Bern), Sebastiano Panichella (University of Bern), Timo Kehrer (University of Bern)

16:50 | 10m Talk | Large Language Models are Qualified Benchmark Builders: Rebuilding Pre-Training Datasets for Advancing Code Intelligence Tasks | Research Track | Kang Yang (National University of Defense Technology), Xinjun Mao (National University of Defense Technology), Shangwen Wang (National University of Defense Technology), Yanlin Wang (Sun Yat-sen University), Tanghaoran Zhang (National University of Defense Technology), Yihao Qin (National University of Defense Technology), Bo Lin (National University of Defense Technology), Zhang Zhang (Key Laboratory of Software Engineering for Complex Systems, National University of Defense Technology), Yao Lu (National University of Defense Technology), Kamal Al-Sabahi (College of Banking and Financial Studies)

17:00 | 10m Talk | Extracting Formal Specifications from Documents Using LLMs for Test Automation | Research Track | Hui Li (Xiamen University), Zhen Dong (Fudan University), Siao Wang (Fudan University), Hui Zhang (Fudan University), Liwei Shen (Fudan University), Xin Peng (Fudan University), Dongdong She (HKUST, The Hong Kong University of Science and Technology)

17:10 | 6m Talk | Using Large Language Models to Generate Concise and Understandable Test Case Summaries | Early Research Achievements (ERA) | Natanael Djajadi (Delft University of Technology), Amirhossein Deljouyi (Delft University of Technology), Andy Zaidman (Delft University of Technology) | Pre-print

17:16 | 6m Talk | Towards Generating the Rationale for Code Changes | Replications and Negative Results (RENE) | Francesco Casillo (Università di Salerno), Antonio Mastropaolo (William and Mary, USA), Gabriele Bavota (Software Institute @ Università della Svizzera Italiana), Vincenzo Deufemia (University of Salerno), Carmine Gravino (University of Salerno)

17:22 | 8m Talk | Session's Discussion: "Summarisation, Natural Language Generation" | Research Track