ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

This program is tentative and subject to change.

Tue 18 Nov 2025 15:20 - 15:30 at Grand Hall 1 - Program Analysis 1

Semantic code clone detection is the task of detecting whether two snippets of code implement the same functionality (e.g., Sort Array). Recently, many neural models achieved near-perfect performance on this task. These models seek to make inferences based on their training data. Consequently, they better detect clones similar to those they have seen during training and may struggle to detect those they have not. Developers seeking clones are, of course, interested in both sorts of clones. We confirm this claim with a literature review, finding three practical clone detection tasks where the model’s goal is to detect clones of a functionality even if it was trained on clones of different functionalities. In light of this finding, we re-evaluate six state-of-the-art models, including both task-specific models and generative LLMs, on the task of detecting clones of unseen functionality. Our experiments reveal a drop in F1 of up to 48% (average 31%) for task-specific models. LLMs perform on par with task-specific models without explicit training for clone detection, but generalize better to unseen functionalities, where F1 drops up to 5% (average 3%) instead. We propose and evaluate the use of contrastive learning to improve the performance of existing models on clones of unseen functionality. We draw inspiration from the computer vision and natural language processing fields where contrastive learning excels at measuring similarity between two objects, even if they come from classes unseen during training. We replace the final classifier of the task-specific models with a contrastive classifier, while for the generative LLMs we propose contrastive in-context learning, which guides the LLMs to focus on the differences between clones and non-clones. The F1 on clones of unseen functionality is improved by up to 26% (average 9%) for task-specific models and up to 5% (average 3%) for LLMs.
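The contrastive classifier described in the abstract decides clone vs. non-clone by comparing how close two code snippets sit in embedding space, rather than by a class-specific learned head. A minimal illustrative sketch of that idea (the toy embeddings, variable names, and threshold below are hypothetical stand-ins, not the paper's actual model or values):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def is_clone(emb_a, emb_b, threshold=0.8):
    # Contrastive-style decision: two snippets count as clones
    # when their embeddings are closer than a tuned threshold,
    # regardless of which functionality class they belong to.
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy vectors standing in for encoder outputs of three snippets.
bubble_sort = [0.9, 0.1, 0.2]   # a sorting implementation
quick_sort  = [0.85, 0.15, 0.25]  # another sorting implementation
http_get    = [0.1, 0.9, 0.4]   # unrelated functionality

print(is_clone(bubble_sort, quick_sort))  # similar embeddings -> True
print(is_clone(bubble_sort, http_get))    # dissimilar embeddings -> False
```

Because the decision depends only on pairwise distance, it applies unchanged to functionalities the encoder never saw labeled during training, which is the property the paper exploits.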


Tue 18 Nov

Displayed time zone: Seoul

14:00 - 15:30: Program Analysis 1 at Grand Hall 1
14:00
10m
Talk
Exploring Static Taint Analysis in LLMs: A Dynamic Benchmarking Framework for Measurement and Enhancement
Research Papers
Haoran Zhao Fudan University, Lei Zhang Fudan University, Keke Lian Fudan University, Fute Sun Fudan University, Bofei Chen Fudan University, Yongheng Liu Fudan University, Zhiyu Wu Fudan University, Yuan Zhang Fudan University, Min Yang Fudan University
14:10
10m
Talk
EPSO: A Caching-Based Efficient Superoptimizer for BPF Bytecode
Research Papers
Qian Zhu Nanjing University, Yuxuan Liu Nanjing University, Ziyuan Zhu Nanjing University, Shangqing Liu Nanjing University, Lei Bu Nanjing University
14:20
10m
Talk
GNNContext: GNN-based Code Context Prediction for Programming Tasks
Journal-First Track
Xiaoye Zheng Zhejiang University, Zhiyuan Wan Zhejiang University, Shun Liu Zhejiang University, Kaiwen Yang Zhejiang University, David Lo Singapore Management University, Xiaohu Yang Zhejiang University
14:30
10m
Talk
R3-Bench: Reproducible Real-world Reverse Engineering Dataset for Symbol Recovery
Research Papers
Muzhi Yu Peking University and Alibaba Group, Zhengran Zeng Peking University, Wei Ye Peking University, Jinan Sun Peking University, Xiaolong Bai Alibaba Group, Shikun Zhang Peking University
14:40
10m
Talk
Protecting Source Code Privacy When Hunting Memory Bugs
Research Papers
Jielun Wu Nanjing University, Bing Shui Nanjing University, Hongcheng Fan Nanjing University, Shengxin Wu Nanjing University, Rongxin Wu Xiamen University, Yang Feng Nanjing University, Baowen Xu Nanjing University, Qingkai Shi Nanjing University
14:50
10m
Talk
Latra: A Template-Based Language-Agnostic Transformation Framework for Effective Program Reduction
Research Papers
Zhenyang Xu University of Waterloo, Yiran Wang University of Waterloo, Yongqiang Tian Monash University, Mengxiao Zhang University of Waterloo, Chengnian Sun University of Waterloo
15:00
10m
Talk
When Control Flows Deviate: Directed Grey-box Fuzzing with Probabilistic Reachability Analysis
Research Papers
Peihong Lin National University of Defense Technology, Pengfei Wang National University of Defense Technology, Xu Zhou National University of Defense Technology, Wei Xie University of Science and Technology of China, Xin Ren National University of Defense Technology, Kai Lu National University of Defense Technology, China
15:10
10m
Talk
EditFusion: Resolving Code Merge Conflicts via Edit Selection
Research Papers
Changxin Wang Nanjing University, Yiming Ma Nanjing University, Lei Xu Nanjing University, Weifeng Zhang Nanjing University of Posts and Telecommunications
15:20
10m
Talk
Detecting Semantic Clones of Unseen Functionality
Research Papers
Konstantinos Kitsios University of Zurich, Francesco Sovrano Collegium Helveticum, ETH Zurich, Switzerland; Department of Informatics, University of Zurich, Switzerland, Earl T. Barr University College London, Alberto Bacchelli University of Zurich
Pre-print