ICFP/SPLASH 2025
Sun 12 - Sat 18 October 2025 Singapore

This program is tentative and subject to change.

While the automated detection of cryptographic API misuses has progressed significantly, its precision diminishes for intricate targets due to the reliance on manually defined patterns. Large Language Models (LLMs) offer promising context-aware understanding to address this shortcoming, yet their stochastic nature and tendency to hallucinate pose challenges to their application in precise security analysis. This paper presents the first systematic study of LLMs’ application to cryptographic API misuse detection. Our findings are noteworthy: the instability of directly applying LLMs results in over half of the initial reports being false positives. Despite this, the reliability of LLM-based detection can be significantly enhanced by aligning detection scopes with realistic scenarios and employing a novel code & analysis validation technique, achieving nearly 90% detection recall. This improvement substantially surpasses traditional methods and leads to the discovery of previously unknown vulnerabilities in established benchmarks. Nevertheless, we identify recurring failure patterns that reveal current LLMs’ blind spots, including cryptographic knowledge deficiencies and misinterpretations of code semantics. Leveraging these findings, we deploy an LLM-based detection system and uncover 63 new vulnerabilities (47 confirmed, 7 already fixed) in open-source Java and Python repositories, including prominent projects such as Apache.
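To make concrete what "cryptographic API misuse" means in this setting, the following minimal Java sketch (an illustration, not code from the paper) shows two textbook misuses that both pattern-based tools and the LLM-based detection discussed in the abstract would be expected to flag: a hard-coded symmetric key and reliance on the JCE default of ECB mode.

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical example of cryptographic API misuse (not taken from the paper).
public class CryptoMisuseExample {

    // Misuse 1: hard-coded symmetric key embedded directly in source code.
    private static final byte[] HARDCODED_KEY =
            "0123456789abcdef".getBytes(StandardCharsets.UTF_8);

    public static String encrypt(String plaintext) throws Exception {
        // Misuse 2: "AES" resolves to AES/ECB/PKCS5Padding on standard JCE
        // providers; ECB leaks plaintext structure and should not be used.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(HARDCODED_KEY, "AES"));
        byte[] ciphertext = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(encrypt("hello"));
    }
}

A purely pattern-based checker matches the literal call signatures above, whereas the abstract argues that context-aware LLMs can also recognize semantically equivalent variants, such as a key read from a constant configuration string, at the cost of the stability issues the study quantifies.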


Wed 15 Oct

Displayed time zone: Perth

13:40 - 15:20
LLMs for Program Analysis and Verification LMPL at Orchid East
13:40
15m
Talk
Function Renaming in Reverse Engineering of Embedded Device Firmware with ChatGPT
LMPL
Puzhuo Liu Ant Group & Tsinghua University, Peng Di Ant Group & UNSW Sydney, Yu Jiang Tsinghua University
13:55
15m
Talk
Enhancing Semantic Understanding in Pointer Analysis Using Large Language Models
LMPL
Baijun Cheng Peking University, Kailong Wang Huazhong University of Science and Technology, Ling Shi Nanyang Technological University, Haoyu Wang Huazhong University of Science and Technology, Yao Guo Peking University, Ding Li Peking University, Xiangqun Chen Peking University
14:10
15m
Talk
Improving SAST Detection Capability with LLMs and Enhanced DFA
LMPL
Yuan Luo Tencent Security Yunding Lab, Zhaojun Chen Tencent Security Yunding Lab, Yuxin Dong Peking University, Haiquan Zhang Tencent Security Yunding Lab, Yi Sun Tencent Security Yunding Lab, Fei Xie Tencent Security Yunding Lab, Zhiqiang Dong Tencent Security Yunding Lab
14:25
15m
Talk
ClearAgent: Agentic Binary Analysis for Effective Vulnerability Detection
LMPL
Xiang Chen The Hong Kong University of Science and Technology, Anshunkang Zhou The Hong Kong University of Science and Technology, Chengfeng Ye The Hong Kong University of Science and Technology, Charles Zhang The Hong Kong University of Science and Technology
14:40
15m
Talk
CG-Bench: Can Language Models Assist Call Graph Construction in the Real World?
LMPL
Ting Yuan, Wenrui Zhang Huawei Technologies Co., Ltd, Dong Chen Huawei, Jie Wang Huawei Technologies Co., Ltd
Pre-print
14:55
20m
Talk
Beyond Static Pattern Matching? Rethinking Automatic Cryptographic API Misuse Detection in the Era of LLMs
LMPL