FSE 2025
Mon 23 - Fri 27 June 2025, Trondheim, Norway
co-located with ISSTA 2025
Dates
Mon 23 Jun 2025
Tue 24 Jun 2025
Wed 25 Jun 2025
Tracks
FSE Catering
FSE Demonstrations
FSE Ideas, Visions and Reflections
FSE Industry Mentoring Symposium
FSE Industry Papers
FSE Journal First
FSE Plenary Events
FSE Research Papers
FSE Software Engineering Education
FSE Student Research Competition
FSE Tutorials

Mon 23 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 12:30
RE and Design (Research Papers / Demonstrations / Journal First / Industry Papers) at Andromeda
Chair(s): Ipek Ozkaya Carnegie Mellon University
10:50
20m
Talk
Incorporating Verification Standards for Security Requirements Generation from Functional Specifications
Research Papers
Xiaoli Lian Beihang University, China, Shuaisong Wang Beihang University, Hanyu Zou Beihang University, Fang Liu Beihang University, Jiajun Wu Beihang University, Li Zhang Beihang University
DOI
12:10
20m
Talk
Unlocking Optimal ORM Database Designs: Accelerated Tradeoff Analysis with Transformers
Research Papers
Md Rashedul Hasan University of Nebraska-Lincoln, Mohammad Rashedul Hasan University of Nebraska-Lincoln, Hamid Bagheri University of Nebraska-Lincoln
DOI Pre-print
10:30 - 12:30
10:50
20m
Talk
The Struggles of LLMs in Cross-lingual Code Clone Detection
Research Papers
Micheline Bénédicte MOUMOULA University of Luxembourg, Abdoul Kader Kaboré University of Luxembourg, Jacques Klein University of Luxembourg, Tegawendé F. Bissyandé University of Luxembourg
DOI
11:10
20m
Talk
Clone Detection for Smart Contracts: How Far Are We?
Research Papers
Zuobin Wang Zhejiang University, Zhiyuan Wan Zhejiang University, Yujing Chen Zhejiang University, Yun Zhang Hangzhou City University, David Lo Singapore Management University, Difan Xie Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, Xiaohu Yang Zhejiang University
DOI
11:50
20m
Talk
An Empirical Study of Code Clones from Commercial AI Code Generators
Research Papers
Weibin Wu Sun Yat-sen University, Haoxuan Hu Sun Yat-sen University, China, Zhaoji Fan Sun Yat-sen University, Yitong Qiao Sun Yat-sen University, China, Yizhan Huang The Chinese University of Hong Kong, Yichen LI The Chinese University of Hong Kong, Zibin Zheng Sun Yat-sen University, Michael Lyu Chinese University of Hong Kong
DOI
10:30 - 12:20
Bug Detection (Research Papers / Industry Papers / Demonstrations / Journal First) at Aurora B
Chair(s): Lingming Zhang University of Illinois at Urbana-Champaign
11:20
20m
Talk
Detecting Metadata-Related Bugs in Enterprise Applications
Research Papers
Md Mahir Asef Kabir Virginia Tech, Xiaoyin Wang University of Texas at San Antonio, Na Meng Virginia Tech
DOI
11:40
20m
Talk
ROSCallBaX: Statically Detecting Inconsistencies In Callback Function Setup of Robotic Systems
Research Papers
Sayali Kate Purdue University, Yifei Gao Purdue University, Shiwei Feng Purdue University, Xiangyu Zhang Purdue University
DOI
12:00
20m
Talk
Enhancing Web Accessibility: Automated Detection of Issues with Generative AI
Research Papers
Ziyao He University of California, Irvine, Syed Fatiul Huq University of California, Irvine, Sam Malek University of California at Irvine
DOI
10:30 - 12:30
10:30
20m
Talk
On-Demand Scenario Generation for Testing Automated Driving Systems
Research Papers
Songyang Yan Xi'an Jiaotong University, Xiaodong Zhang Xidian University, Kunkun Hao Synkrotron, Inc., Haojie Xin Xi'an Jiaotong University, Yonggang Luo Chongqing Changan Automobile Co. Ltd, Jucheng Yang Chongqing Changan Automobile Co. Ltd, Ming Fan Xi'an Jiaotong University, Chao Yang Xidian University, Jun Sun Singapore Management University, Zijiang Yang University of Science and Technology of China and Synkrotron, Inc.
DOI Pre-print
10:50
20m
Talk
Multi-Modal Traffic Scenario Generation for Autonomous Driving System Testing
Research Papers
Zhi Tu Purdue University, Liangkun Niu Purdue University, Wei Fan Purdue University, Tianyi Zhang Purdue University
DOI Pre-print
11:40
20m
Talk
A Comprehensive Study of Bug-Fix Patterns in Autonomous Driving Systems
Research Papers
Yuntianyi Chen University of California, Irvine, Yuqi Huai University of California, Irvine, Yirui He University of California, Irvine, Shilong Li University of California, Irvine, Changnam Hong University of California, Irvine, Alfred Chen University of California, Irvine, Joshua Garcia University of California, Irvine
DOI Pre-print
10:30 - 12:30
Vulnerability 1 (Research Papers / Ideas, Visions and Reflections / Journal First) at Cosmos 3C
Chair(s): Cuiyun Gao Harbin Institute of Technology, Shenzhen
10:30
20m
Talk
VulPA: Detecting Semantically Recurring Vulnerabilities with Multi-Object Typestate Analysis
Research Papers
Liqing Cao Institute of Computing Technology at Chinese Academy of Sciences; University of Chinese Academy of Sciences, Haofeng Li SKLP, Institute of Computing Technology, CAS, Chenghang Shi SKLP, Institute of Computing Technology, CAS, Jie Lu SKLP, Institute of Computing Technology, CAS, China; University of Chinese Academy of Sciences, China, Haining Meng SKLP, Institute of Computing Technology, CAS, China; University of Chinese Academy of Sciences, China, Lian Li Institute of Computing Technology at Chinese Academy of Sciences; University of Chinese Academy of Sciences, Jingling Xue University of New South Wales
DOI
10:50
20m
Talk
Mystique: Automated Vulnerability Patch Porting with Semantic and Syntactic-Enhanced LLM
Research Papers
Susheng Wu Fudan University, Ruisi Wang Fudan University, Bihuan Chen Fudan University, Zhuotong Zhou Fudan University, Yiheng Huang Fudan University, JunPeng Zhao Fudan University, Xin Peng Fudan University
DOI
11:30
20m
Talk
Code Change Intention, Development Artifact and History Vulnerability: Putting Them Together for Vulnerability Fix Detection by LLM
Research Papers
Xu Yang University of Manitoba, Wenhan Zhu Huawei Canada, Michael Pacheco Centre for Software Excellence, Huawei, Jiayuan Zhou Huawei, Shaowei Wang University of Manitoba, Xing Hu Zhejiang University, Kui Liu Huawei
DOI
12:00
20m
Talk
Teaching AI the ‘Why’ and ‘How’ of Software Vulnerability Fixes
Research Papers
Amiao Gao Department of Computer Science, Southern Methodist University, Dallas, Texas, USA 75275-0122, Zenong Zhang The University of Texas - Dallas, Simin Wang Department of Computer Science, Southern Methodist University, Dallas, Texas, USA 75275-0122, LiGuo Huang Dept. of Computer Science, Southern Methodist University, Dallas, TX, 75205, Shiyi Wei University of Texas at Dallas, Vincent Ng Human Language Technology Research Institute, University of Texas at Dallas, Richardson, TX 75083-0688
DOI
10:30 - 12:30
Test Generation (Research Papers / Industry Papers) at Cosmos Hall
Chair(s): Michael Pradel University of Stuttgart
10:30
20m
Talk
CoverUp: Effective High Coverage Test Generation for Python
Research Papers
Juan Altmayer Pizzorno University of Massachusetts Amherst, Emery D. Berger University of Massachusetts Amherst and Amazon Web Services
DOI Pre-print
11:00
20m
Talk
Doc2OracLL: Investigating the Impact of Documentation on LLM-based Test Oracle Generation
Research Papers
Soneya Binta Hossain University of Virginia, Raygan Taylor Dillard University, Matthew B Dwyer University of Virginia
DOI
11:20
20m
Talk
Less is More: On the Importance of Data Quality for Unit Test Generation
Research Papers
Junwei Zhang Zhejiang University, Xing Hu Zhejiang University, Shan Gao Huawei, Xin Xia Zhejiang University, David Lo Singapore Management University, Shanping Li Zhejiang University
DOI
10:30 - 12:30
Performance (Demonstrations / Research Papers / Ideas, Visions and Reflections / Journal First / Industry Papers) at Vega
Chair(s): Philipp Leitner Chalmers | University of Gothenburg
10:50
20m
Talk
Understanding Debugging as Episodes: A Case Study on Performance Bugs in Configurable Software Systems
Research Papers
Max Weber Leipzig University, Alina Mailach Leipzig University, Sven Apel Saarland University, Janet Siegmund Chemnitz University of Technology, Raimund Dachselt Technical University of Dresden, Norbert Siegmund Leipzig University
DOI
11:10
20m
Talk
Towards Understanding Performance Bugs in Popular Data Science Libraries
Research Papers
Haowen Yang The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Zhengda Li The Chinese University of Hong Kong, Shenzhen, Zhiqing Zhong The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Xiaoying Tang Chinese University of Hong Kong, Shenzhen, Pinjia He Chinese University of Hong Kong, Shenzhen
DOI
12:10
20m
Talk
COFFE: A Code Efficiency Benchmark for Code Generation
Research Papers
Yun Peng The Chinese University of Hong Kong, Jun Wan Zhejiang University, Yichen LI The Chinese University of Hong Kong, Xiaoxue Ren Zhejiang University
DOI
14:00 - 15:30
Logging (Research Papers / Journal First) at Andromeda
Chair(s): Domenico Bianculli University of Luxembourg
14:00
20m
Talk
No More Labelled Examples? An Unsupervised Log Parser with LLMs
Research Papers
Junjie Huang The Chinese University of Hong Kong, Zhihan Jiang The Chinese University of Hong Kong, Zhuangbin Chen Sun Yat-sen University, Michael Lyu Chinese University of Hong Kong
DOI
14:40
20m
Talk
Protecting Privacy in Software Logs: What Should be Anonymized?
Research Papers
Roozbeh Aghili Polytechnique Montréal, Heng Li Polytechnique Montréal, Foutse Khomh Polytechnique Montréal
DOI
14:00 - 15:30
Code Search (Research Papers / Journal First / Ideas, Visions and Reflections) at Aurora A
Chair(s): Xin Xia Zhejiang University
14:00
20m
Talk
10 years later: revisiting how developers search for code
Research Papers
Kathryn Stolee North Carolina State University, Tobias Welp Google, Caitlin Sadowski, Sebastian Elbaum University of Virginia
DOI
14:40
20m
Talk
Zero-Shot Cross-Domain Code Search without Fine-Tuning
Research Papers
Keyu Liang Zhejiang University, Zhongxin Liu Zhejiang University, Chao Liu Chongqing University, Zhiyuan Wan Zhejiang University, David Lo Singapore Management University, Xiaohu Yang Zhejiang University
DOI
15:10
20m
Talk
MiSum: Multi-Modality Heterogeneous Code Graph Learning for Multi-Intent Binary Code Summarization
Research Papers
Kangchen Zhu National University of Defense Technology, Zhiliang Tian National University of Defense Technology, Shangwen Wang National University of Defense Technology, Weiguo Chen National University of Defense Technology, Zixuan Dong National University of Defense Technology, Mingyue Leng National University of Defense Technology, Xiaoguang Mao National University of Defense Technology
DOI
14:00 - 15:20
Testing 1 (Journal First / Industry Papers / Research Papers) at Aurora B
Chair(s): Jialun Cao Hong Kong University of Science and Technology
14:00
20m
Talk
Automated Soap Opera Testing Directed by LLMs and Scenario Knowledge: Feasibility, Challenges, and Road Ahead
Research Papers
Yanqi Su Australian National University, Zhenchang Xing CSIRO's Data61, Chong Wang Nanyang Technological University, Chunyang Chen TU Munich, Xiwei (Sherry) Xu Data61, CSIRO, Qinghua Lu Data61, CSIRO, Liming Zhu CSIRO’s Data61
DOI
14:00 - 15:20
Program Analysis 1 (Industry Papers / Research Papers) at Cosmos 3A
Chair(s): Shiyi Wei University of Texas at Dallas
14:00
20m
Talk
Dynamic Taint Tracking for Modern Java Virtual Machines
Research Papers
Katherine Hough Northeastern University, Jonathan Bell Northeastern University
DOI
14:40
20m
Talk
An Empirical Study of Suppressed Static Analysis Warnings
Research Papers
Huimin Hu University of Stuttgart, Yingying Wang University of British Columbia, Julia Rubin The University of British Columbia, Michael Pradel University of Stuttgart
DOI
15:00
20m
Talk
A New Approach to Evaluating Nullability Inference Tools
Research Papers
Nima Karimipour University of California, Riverside, Erfan Arvan New Jersey Institute of Technology, Martin Kellogg New Jersey Institute of Technology, Manu Sridharan University of California at Riverside
DOI
14:00 - 15:30
Fuzzing 1 (Demonstrations / Research Papers / Journal First) at Cosmos 3C
Chair(s): Shin Hwei Tan Concordia University
14:00
20m
Talk
Liberating libraries through automated fuzz driver generation: Striking a Balance Without Consumer Code
Research Papers
Flavio Toffalini EPFL, Switzerland and Ruhr-Universität Bochum, Germany, Nicolas Badoux EPFL, Zurab Tsinadze EPFL, Mathias Payer EPFL
DOI
14:40
20m
Talk
MendelFuzz: The Return of the Deterministic Stage
Research Papers
Han Zheng EPFL, Flavio Toffalini EPFL, Switzerland and Ruhr-Universität Bochum, Germany, Marcel Böhme MPI for Security and Privacy, Mathias Payer EPFL
DOI
14:00 - 15:20
14:10
20m
Talk
A Knowledge Enhanced Large Language Model for Bug Localization
Research Papers
Yue Li Nanjing University, Bohan Liu Nanjing University, Ting Zhang Singapore Management University, Zhiqi Wang Nanjing University, David Lo Singapore Management University, Lanxin Yang Nanjing University, Jun Lyu Nanjing University, He Zhang Nanjing University
DOI
14:00 - 15:30
Bugs (Research Papers / Industry Papers / Ideas, Visions and Reflections) at Pirsenteret 150
Chair(s): Ying Zou Queen's University, Kingston, Ontario
14:00
20m
Talk
Dissecting Real-World Cross-Language Bugs
Research Papers
Haoran Yang Washington State University, Haipeng Cai University at Buffalo, SUNY
DOI
14:20
20m
Talk
Towards Understanding Fine-Grained Programming Mistakes and Fixing Patterns in Data Science
Research Papers
Weihao Chen Purdue University, Jia Lin Cheoh Purdue University, Manthan Keim Purdue University, Sabine Brunswicker Purdue University, Tianyi Zhang Purdue University
DOI
14:40
20m
Talk
Error Delayed is Not Error Handled: Understanding and Fixing Propagated Error-Handling Bugs
Research Papers
Haoran Liu National University of Defense Technology, Shanshan Li National University of Defense Technology, Zhouyang Jia National University of Defense Technology, Yuanliang Zhang National University of Defense Technology, Linxiao Bai National University of Defense Technology, Si Zheng National University of Defense Technology, Xiaoguang Mao National University of Defense Technology, Liao Xiangke National University of Defense Technology
DOI
16:00 - 18:00
Repairs (Research Papers / Journal First) at Andromeda
Chair(s): Michael Pradel University of Stuttgart
16:00
20m
Talk
HornBro: Homotopy-like Method for Automated Quantum Program Repair
Research Papers
Siwei Tan Zhejiang University, Liqiang Lu Zhejiang University, Debin Xiang Zhejiang University, Tianyao Chu Zhejiang University, Congliang Lang Zhejiang University, Jintao Chen Zhejiang University, Xing Hu Zhejiang University, Jianwei Yin Zhejiang University
DOI
16:20
20m
Talk
RePurr: Automated Repair of Block-Based Learners' Programs
Research Papers
Sebastian Schweikl University of Passau, Gordon Fraser University of Passau
DOI
16:40
20m
Talk
Demystifying Memorization in LLM-based Program Repair via a General Hypothesis Testing Framework
Research Papers
Jiaolong Kong Singapore Management University, Xiaofei Xie Singapore Management University, Shangqing Liu Nanyang Technological University
DOI
17:00
20m
Talk
IRepair: An Intent-Aware Approach to Repair Data-Driven Errors in Large Language Models
Research Papers
Sayem Mohammad Imtiaz Iowa State University, Astha Singh Dept. of Computer Science, Iowa State University, Fraol Batole Tulane University, Hridesh Rajan Tulane University
DOI
17:40
20m
Talk
Element-Based Automated DNN Repair with Fine-Tuned Masked Language Model
Research Papers
Xu Wang Beihang University; Zhongguancun Laboratory; Ministry of Education, Mingming Zhang Beihang University, Xiangxin Meng Beihang University, Jian Zhang Nanyang Technological University, Yang Liu Nanyang Technological University, Chunming Hu Beihang University
DOI
16:00 - 18:00
17:40
20m
Talk
Mitigating Emergent Malware Label Noise in DNN-Based Android Malware Detection
Research Papers
Haodong Li Beijing University of Posts and Telecommunications, Xiao Cheng UNSW, Guohan Zhang Beijing University of Posts and Telecommunications, Guosheng Xu Beijing University of Posts and Telecommunications, Guoai Xu Harbin Institute of Technology, Shenzhen, Haoyu Wang Huazhong University of Science and Technology
DOI
16:00 - 18:00
16:20
20m
Talk
SemBIC: Semantic-aware Identification of Bug-inducing Commits
Research Papers
Xiao Chen The Hong Kong University of Science and Technology, Hengcheng Zhu The Hong Kong University of Science and Technology, Jialun Cao Hong Kong University of Science and Technology, Ming Wen Huazhong University of Science and Technology, Shing-Chi Cheung Hong Kong University of Science and Technology
DOI
16:00 - 18:00
Testing 2 (Journal First / Research Papers) at Cosmos 3A
Chair(s): Miryung Kim UCLA and Amazon Web Services
16:40
20m
Talk
VLATest: Testing and Evaluating Vision-Language-Action Models for Robotic Manipulation
Research Papers
Zhijie Wang University of Alberta, Zhehua Zhou University of Alberta, Canada, Jiayang Song University of Alberta, Yuheng Huang The University of Tokyo, Zhan Shu University of Alberta, Lei Ma The University of Tokyo & University of Alberta
DOI Pre-print
17:40
20m
Talk
UnitCon: Synthesizing Targeted Unit Tests for Java Runtime Exceptions
Research Papers
Sujin Jang KAIST, Yeonhee Ryou KAIST, Heewon Lee KAIST, Korea, South (The Republic of), Kihong Heo KAIST
DOI
16:00 - 17:50
Code Generation 1 (Industry Papers / Demonstrations / Research Papers / Journal First) at Cosmos 3C
Chair(s): Zhongxin Liu Zhejiang University
16:00
20m
Talk
How Do Programming Students Use Generative AI?
Research Papers
Christian Rahe University of Hamburg, Walid Maalej University of Hamburg
DOI Pre-print
16:50
20m
Talk
DeclarUI: Bridging Design and Development with Automated Declarative UI Code Generation
Research Papers
Ting Zhou Huazhong University of Science and Technology, Yanjie Zhao Huazhong University of Science and Technology, Xinyi Hou Huazhong University of Science and Technology, Xiaoyu Sun Australian National University, Australia, Kai Chen Huazhong University of Science and Technology, Haoyu Wang Huazhong University of Science and Technology
DOI
16:00 - 18:00
16:10
20m
Talk
Automatically Detecting Numerical Instability in Machine Learning Applications via Soft Assertions
Research Papers
Shaila Sharmin Iowa State University, Anwar Hossain Zahid Iowa State University, Subhankar Bhattacharjee Iowa State University, Chiamaka Igwilo Iowa State University, Miryung Kim UCLA and Amazon Web Services, Wei Le Iowa State University
DOI
17:10
20m
Talk
Has My Code Been Stolen for Model Training? A Naturalness Based Approach to Code Contamination Detection
Research Papers
Haris Ali Khan Beijing Institute of Technology, Yanjie Jiang Peking University, Qasim Umer Information and Computer Science Department, King Fahd University of Petroleum & Minerals (KFUPM), Dhahran 31261, Saudi Arabia, Yuxia Zhang Beijing Institute of Technology, Waseem Akram Beijing Institute of Technology, Hui Liu Beijing Institute of Technology
DOI
17:30
20m
Talk
AlphaTrans: A Neuro-Symbolic Compositional Approach for Repository-Level Code Translation and Validation
Research Papers
Ali Reza Ibrahimzada University of Illinois Urbana-Champaign, Kaiyao Ke University of Illinois Urbana-Champaign, Mrigank Pawagi Indian Institute of Science, Bengaluru, Muhammad Salman Abid Cornell University, Rangeet Pan IBM Research, Saurabh Sinha IBM Research, Reyhaneh Jabbarvand University of Illinois at Urbana-Champaign
DOI Pre-print Media Attached

Tue 24 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

10:30 - 12:30
Architecture, Services, and Cloud (Industry Papers / Demonstrations / Research Papers / Ideas, Visions and Reflections) at Andromeda
Chair(s): Paris Avgeriou University of Groningen, The Netherlands
11:10
20m
Talk
TracePicker: Optimization-based Trace Sampling for Microservice-based Systems
Research Papers
Shuaiyu Xie School of Computer Science, Wuhan University, China, Jian Wang Wuhan University, Maodong Li School of Computer Science, Wuhan University, China, Peiran Chen School of Computer Science, Wuhan University, China, Jifeng Xuan Wuhan University, Bing Li Wuhan University
DOI
10:30 - 12:20
Code Review, Build, and Release (Ideas, Visions and Reflections / Industry Papers / Demonstrations / Research Papers / Journal First) at Aurora A
Chair(s): Peter Rigby Concordia University; Meta
11:40
20m
Talk
CXXCrafter: An LLM-Based Agent for Automated C/C++ Open Source Software Building
Research Papers
Zhengmin Yu Fudan University, Yuan Zhang Fudan University, Ming Wen Huazhong University of Science and Technology, Yinan Nie Fudan University, Zhang Wenhui Fudan University, Min Yang Fudan University
DOI
12:00
20m
Talk
SmartNote: An LLM-Powered, Personalised Release Note Generator That Just Works
Research Papers
Farbod Daneshyan Peking University, Runzhi He Peking University, Jianyu Wu Peking University, Minghui Zhou Peking University
DOI
10:30 - 12:30
Security (Journal First / Research Papers / Industry Papers) at Aurora B
Chair(s): Zhenchang Xing CSIRO’s Data61; Australian National University
11:10
20m
Talk
Understanding Industry Perspectives of Static Application Security Testing (SAST) Evaluation
Research Papers
Yuan Li Zhejiang University, Peisen Yao Zhejiang University, Kan Yu Ant Group, Chengpeng Wang Hong Kong University of Science and Technology, Yaoyang Ye Zhejiang University, Song Li The State Key Laboratory of Blockchain and Data Security, Zhejiang University, Meng Luo The State Key Laboratory of Blockchain and Data Security, Zhejiang University, Yepang Liu Southern University of Science and Technology, Kui Ren Zhejiang University
DOI
11:50
20m
Talk
It’s Acting Odd! Exploring Equivocal Behaviors of Goodware
Research Papers
Gregorio Dalia University of Sannio, Andrea Di Sorbo University of Sannio, Corrado A. Visaggio University of Sannio, Italy, Gerardo Canfora University of Sannio
DOI
12:10
20m
Talk
On the Unnecessary Complexity of Names in X.509 and Their Impact on Implementations
Research Papers
Yuteng Sun The Chinese University of Hong Kong, Joyanta Debnath Stony Brook University, Wenzheng Hong Independent, Omar Chowdhury Stony Brook University, Sze Yiu Chau The Chinese University of Hong Kong
DOI
10:30 - 12:30
Verification and Validation (Demonstrations / Ideas, Visions and Reflections / Research Papers / Journal First) at Cosmos 3A
Chair(s): Alex Orso Georgia Institute of Technology
11:10
20m
Talk
Scene Flow Specifications: Encoding and Monitoring Rich Temporal Safety Properties of Autonomous Systems
Research Papers
Trey Woodlief University of Virginia, United States, Felipe Toledo, Matthew B Dwyer University of Virginia, Sebastian Elbaum University of Virginia
DOI
11:30
20m
Talk
QSF: Multi-Objective Optimization based Efficient Solving for Floating-Point Constraints
Research Papers
Xu Yang College of Computer Science and Technology, National University of Defense Technology, Zhenbang Chen College of Computer, National University of Defense Technology, Wei Dong National University of Defense Technology, Ji Wang National University of Defense Technology
DOI
12:10
20m
Talk
ChangeGuard: Validating Code Changes via Pairwise Learning-Guided Execution
Research Papers
Lars Gröninger University of Stuttgart, Beatriz Souza Universität Stuttgart, Michael Pradel University of Stuttgart
DOI
10:30 - 12:30
12:10
20m
Talk
Hallucination Detection in Large Language Models with Metamorphic Relations
Research Papers
Borui Yang Beijing University of Posts and Telecommunications, Md Afif Al Mamun University of Calgary, Jie M. Zhang King's College London, Gias Uddin York University, Canada
DOI
10:30 - 12:30
Code Generation 2 (Research Papers / Journal First) at Cosmos Hall
Chair(s): Reyhaneh Jabbarvand University of Illinois at Urbana-Champaign
11:10
20m
Talk
Divide-and-Conquer: Generating UI Code from Screenshots
Research Papers
Yuxuan Wan The Chinese University of Hong Kong, Chaozheng Wang The Chinese University of Hong Kong, Yi Dong The Chinese University of Hong Kong, Wenxuan Wang Chinese University of Hong Kong, Shuqing Li The Chinese University of Hong Kong, Yintong Huo Singapore Management University, Michael Lyu Chinese University of Hong Kong
DOI
11:30
20m
Talk
LLM-based Method Name Suggestion with Automatically Generated Context-Rich Prompts
Research Papers
Waseem Akram Beijing Institute of Technology, Yanjie Jiang Peking University, Yuxia Zhang Beijing Institute of Technology, Haris Ali Khan Beijing Institute of Technology, Hui Liu Beijing Institute of Technology
DOI
11:50
20m
Talk
Beyond Functional Correctness: Investigating Coding Style Inconsistencies in Large Language Models
Research Papers
Yanlin Wang Sun Yat-sen University, Tianyue Jiang Sun Yat-sen University, Mingwei Liu Sun Yat-Sen University, Jiachi Chen Sun Yat-sen University, Mingzhi Mao Sun Yat-sen University, Xilin Liu Huawei Cloud, Yuchi Ma Huawei Cloud Computing Technologies, Zibin Zheng Sun Yat-sen University
DOI
10:30 - 12:30
Vulnerability 2 (Research Papers / Demonstrations) at Pirsenteret 150
Chair(s): Xiaoxue Ren Zhejiang University
10:30
20m
Talk
Statement-level Adversarial Attack on Vulnerability Detection Models via Out-Of-Distribution Features
Research Papers
Xiaohu Du Huazhong University of Science and Technology, Ming Wen Huazhong University of Science and Technology, Haoyu Wang, Zichao Wei Huazhong University of Science and Technology, Hai Jin Huazhong University of Science and Technology
DOI
10:50
20m
Talk
Large Language Models for In-File Vulnerability Localization can be “Lost in the End”
Research Papers
Francesco Sovrano Collegium Helveticum, ETH Zurich, Switzerland; Department of Informatics, University of Zurich, Switzerland, Adam Bauer University of Zurich, Alberto Bacchelli University of Zurich
DOI
11:10
20m
Talk
One-for-All Does Not Work! Enhancing Vulnerability Detection by Mixture-of-Experts (MoE)
Research Papers
Xu Yang University of Manitoba, Shaowei Wang University of Manitoba, Jiayuan Zhou Huawei, Wenhan Zhu Huawei Canada
DOI
11:30
20m
Talk
Gleipner: A Benchmark for Gadget Chain Detection in Java Deserialization Vulnerabilities
Research Papers
Bruno Kreyssig Umeå University, Alexandre Bartel Umeå University
DOI
12:00
20m
Talk
Today's cat is tomorrow's dog: accounting for time-based changes in the labels of ML vulnerability detection approaches
Research Papers
Ranindya Paramitha University of Trento, Yuan Feng, Fabio Massacci University of Trento; Vrije Universiteit Amsterdam
DOI Pre-print
10:30 - 12:20
Blockchain and Smart Contract (Ideas, Visions and Reflections / Research Papers) at Vega
Chair(s): Cuiyun Gao Harbin Institute of Technology, Shenzhen
10:40
20m
Talk
LookAhead: Preventing DeFi Attacks via Unveiling Adversarial Contracts
Research Papers
Shoupeng Ren Zhejiang University, Lipeng He University of Waterloo, Tianyu Tu Zhejiang University, Di Wu Zhejiang University, Jian Liu Zhejiang University, Kui Ren Zhejiang University, Chun Chen Zhejiang University
DOI Pre-print
11:00
20m
Talk
SmartShot: Hunt Hidden Vulnerabilities in Smart Contracts using Mutable Snapshots
Research Papers
Ruichao Liang Wuhan University, Jing Chen Wuhan University, Ruochen Cao Wuhan University, Kun He Wuhan University, Ruiying Du Wuhan University, Shuhua Li Wuhan University, Zheng Lin University of Hong Kong, Cong Wu Wuhan University
DOI
11:20
20m
Talk
Automated and Accurate Token Transfer Identification and Its Applications in Cryptocurrency Security
Research Papers
Shuwei Song University of Electronic Science and Technology of China, Ting Chen University of Electronic Science and Technology of China, Ao Qiao University of Electronic Science and Technology of China, Xiapu Luo Hong Kong Polytechnic University, Leqing Wang University of Electronic Science and Technology of China, Zheyuan He University of Electronic Science and Technology of China, Ting Wang Penn State University, Xiaodong Lin University of Guelph, Peng He University of Electronic Science and Technology of China, Wensheng Zhang University of Electronic Science and Technology of China, Xiaosong Zhang University of Electronic Science and Technology of China
DOI
11:40
20m
Talk
Detecting Smart Contract State-Inconsistency Bugs via Flow Divergence and Multiplex Symbolic Execution
Research Papers
Yinxi Liu Rochester Institute of Technology, Wei Meng Chinese University of Hong Kong, Yinqian Zhang Southern University of Science and Technology
DOI
12:00
20m
Talk
Smart Contract Fuzzing Towards Profitable Vulnerabilities
Research Papers
Ziqiao Kong Nanyang Technological University, Cen Zhang Nanyang Technological University, Maoyi Xie Nanyang Technological University, Ming Hu Singapore Management University, Yue Xue MetaTrust Labs, Ye Liu Singapore Management University, Haijun Wang Xi'an Jiaotong University, Yang Liu Nanyang Technological University
DOI Pre-print File Attached
14:00 - 15:30
14:10
20m
Talk
Standing on the Shoulders of Giants: Bug-Aware Automated GUI Testing via Retrieval Augmentation
Research Papers
Mengzhuo Chen Institute of Software, Chinese Academy of Sciences, Zhe Liu Institute of Software, Chinese Academy of Sciences, Chunyang Chen TU Munich, Junjie Wang Institute of Software at Chinese Academy of Sciences, Boyu Wu University of Chinese Academy of Sciences, Beijing, China, Jun Hu Institute of Software, Chinese Academy of Sciences, Qing Wang Institute of Software at Chinese Academy of Sciences
DOI
14:30
20m
Talk
A Mixed-Methods Study of Model-Based GUI Testing in Real-World Industrial Settings
Research Papers
Shaoheng Cao Nanjing University, Renyi Chen Samsung Electronics (China) R&D Centre, Wenhua Yang Nanjing University of Aeronautics and Astronautics, Minxue Pan Nanjing University, Xuandong Li Nanjing University
DOI
15:10
20m
Talk
LLMDroid: Enhancing Automated Mobile App GUI Testing Coverage with Large Language Model Guidance
Research Papers
Chenxu Wang Huazhong University of Science and Technology, Tianming Liu Monash Univerisity, Yanjie Zhao Huazhong University of Science and Technology, Minghui Yang OPPO, Haoyu Wang Huazhong University of Science and Technology
DOI
14:00 - 15:30
Process (Industry Papers / Ideas, Visions and Reflections / Journal First / Research Papers) at Aurora A
Chair(s): Trey Woodlief University of Virginia, United States
14:40
20m
Talk
Revolutionizing Newcomers' Onboarding Process in OSS Communities: The Future AI Mentor
Research Papers
Xin Tan Beihang University, Xiao Long, Yinghao Zhu Beihang University, Lin Shi Beihang University, Xiaoli Lian Beihang University, China, Li Zhang Beihang University
DOI Pre-print
14:00 - 15:30
14:40
20m
Talk
Directed Testing in MLIR: Unleashing Its Potential by Overcoming the Limitations of Random Fuzzing
Research Papers
Weiyuan Tong Northwest University, Zixu Wang Northwest University, Zhanyong Tang Northwest University, Jianbin Fang National University of Defense Technology, Yuqun Zhang Southern University of Science and Technology, Guixin Ye Northwest University
DOI
14:00 - 15:20
Empirical Studies 1 (Research Papers / Journal First) at Cosmos 3A
Chair(s): Letizia Jaccheri Norwegian University of Science and Technology (NTNU)
14:00
20m
Talk
Core Developer Turnover in the Rust Package Ecosystem: Prevalence, Impact, and Awareness
Research Papers
Meng Fan Beijing Institute of Technology, Yuxia Zhang Beijing Institute of Technology, Klaas-Jan Stol Lero; University College Cork; SINTEF Digital, Hui Liu Beijing Institute of Technology
DOI
14:40
20m
Talk
An Empirical Study on Release-Wise Refactoring Patterns
Research Papers
Shayan Noei Queen's University, Heng Li Polytechnique Montréal, Ying Zou Queen's University, Kingston, Ontario
DOI
14:00 - 15:30
14:00
20m
Talk
LlamaRestTest: Effective REST API Testing with Small Language Models
Research Papers
Myeongsoo Kim Georgia Institute of Technology, Saurabh Sinha IBM Research, Alessandro Orso Georgia Institute of Technology
DOI
15:00
20m
Talk
TerzoN: Human-in-the-Loop Software Testing with a Composite Oracle
Research Papers
Matthew C. Davis Carnegie Mellon University, Amy Wei University of Michigan, Brad A. Myers Carnegie Mellon University, Joshua Sunshine Carnegie Mellon University
Link to publication DOI
14:00 - 15:30
LLM for SE 2 (Research Papers / Industry Papers / Ideas, Visions and Reflections) at Cosmos Hall
Chair(s): Jialun Cao Hong Kong University of Science and Technology
14:20
20m
Talk
Integrating Large Language Models and Reinforcement Learning for Non-Linear Reasoning
Research Papers
Yoav Alon University of Bristol, Cristina David University of Bristol
DOI
14:40
20m
Talk
Smaller but Better: Self-Paced Knowledge Distillation for Lightweight yet Effective LCMs
Research Papers
Yujia Chen Harbin Institute of Technology, Shenzhen, Yang Ye Huawei Cloud Computing Technologies Co., Ltd., Zhongqi Li Huawei Cloud Computing Technologies Co., Ltd., Yuchi Ma Huawei Cloud Computing Technologies, Cuiyun Gao Harbin Institute of Technology, Shenzhen
DOI
15:10
20m
Talk
Bridging Operator Semantic Inconsistencies: A Source-level Cross-framework Model Conversion Approach
Research Papers
Xingpei Li National University of Defense Technology, China, Yan Lei Chongqing University, Zhouyang Jia National University of Defense Technology, Yuanliang Zhang National University of Defense Technology, Haoran Liu National University of Defense Technology, Liqian Chen National University of Defense Technology, Wei Dong National University of Defense Technology, Shanshan Li National University of Defense Technology
DOI
14:00 - 15:20
Program Analysis 2 (Research Papers / Ideas, Visions and Reflections / Demonstrations) at Pirsenteret 150
Chair(s): Martin Kellogg New Jersey Institute of Technology
14:10
20m
Talk
Blended Analysis for Predictive Execution
Research Papers
Yi Li University of Texas at Dallas, Hridya Dhulipala University of Texas at Dallas, Aashish Yadavally University of Texas at Dallas, Xiaokai Rong University of Texas at Dallas, Shaohua Wang Central University of Finance and Economics, Tien N. Nguyen University of Texas at Dallas
DOI
14:30
20m
Talk
Revisiting Optimization-Resilience Claims in Binary Diffing Tools: Insights from LLVM Peephole Optimization Analysis
Research Papers
Xiaolei Ren Macau University of Science and Technology, Mengfei Ren University of Alabama in Huntsville, Jeff Yu Lei University of Texas at Arlington, Jiang Ming Tulane University, USA
DOI
14:50
20m
Talk
DyLin: A Dynamic Linter for Python
Research Papers
Aryaz Eghbali University of Stuttgart, Felix Burk University of Stuttgart, Michael Pradel University of Stuttgart
DOI Pre-print
16:00 - 17:40
Fairness and Green (Journal First / Research Papers / Demonstrations) at Aurora A
Chair(s): Aldeida Aleti Monash University
17:00
20m
Talk
NLP Libraries, Energy Consumption and Runtime - An Empirical Study
Research Papers
Rajrupa Chattaraj Indian Institute of Technology Tirupati, India, Sridhar Chimalakonda Indian Institute of Technology Tirupati
DOI
17:20
20m
Talk
An adaptive language-agnostic pruning method for greener language models for code
Research Papers
Mootez Saad Dalhousie University, José Antonio Hernández López Linköping University, Boqi Chen McGill University, Daniel Varro Linköping University / McGill University, Tushar Sharma Dalhousie University
DOI Pre-print
16:00 - 17:40
Failure and Fault (Demonstrations / Research Papers / Ideas, Visions and Reflections / Journal First) at Aurora B
Chair(s): Lars Grunske Humboldt-Universität zu Berlin
16:10
20m
Talk
ReproCopilot: LLM-Driven Failure Reproduction with Dynamic Refinement
Research Papers
Tanakorn Leesatapornwongsa Microsoft Research, Fazle Faisal Microsoft Research, Suman Nath Microsoft Research
DOI
16:30
20m
Talk
Improving Graph Learning-Based Fault Localization with Tailored Semi-Supervised Learning
Research Papers
Chun Li Nanjing University, Hui Li Samsung Electronics (China) R&D Centre, Zhong Li, Minxue Pan Nanjing University, Xuandong Li Nanjing University
DOI
16:50
20m
Talk
Towards Understanding Docker Build Faults in Practice: Symptoms, Root Causes, and Fix Patterns
Research Papers
Yiwen Wu National University of Defense Technology, Yang Zhang National University of Defense Technology, China, Tao Wang National University of Defense Technology, Bo Ding National University of Defense Technology, Huaimin Wang
DOI
16:00 - 17:40
MSR 2 (Journal First / Ideas, Visions and Reflections / Research Papers / Demonstrations) at Cosmos 3C
Chair(s): DongGyun Han Royal Holloway, University of London
16:10
20m
Talk
Scientific Open-Source Software Is Less Likely To Become Abandoned Than One Might Think! Lessons from Curating a Catalog of Maintained Scientific Software
Research Papers
Addi Malviya-Thakur The University of Tennessee, Knoxville / Oak Ridge National Laboratory, Reed Milewicz Sandia National Laboratories, Mahmoud Jahanshahi University of Tennessee, Lavinia Francesca Paganini Eindhoven University of Technology, Bogdan Vasilescu Carnegie Mellon University, Audris Mockus University of Tennessee
Link to publication DOI
16:30
20m
Talk
Who Will Stop Contributing to OSS Projects? Predicting Company Turnover Based on Initial Behavior
Research Papers
Mian Qin Beijing Institute of Technology, Yuxia Zhang Beijing Institute of Technology, Klaas-Jan Stol Lero; University College Cork; SINTEF Digital, Hui Liu Beijing Institute of Technology
DOI
17:20
20m
Talk
Impact of Request Formats on Effort Estimation: Are LLMs Different than Humans?
Research Papers
Gül Calikli University of Glasgow, Mohammed Alhamed Applied Behaviour Systems LTD (Hexis), United Kingdom
DOI
16:00 - 17:40
Anomaly Detection (Ideas, Visions and Reflections / Research Papers / Industry Papers) at Pirsenteret 150
Chair(s): Gias Uddin York University, Canada
16:00
20m
Talk
Cross-System Categorization of Abnormal Traces in Microservice-Based Systems via Meta-Learning
Research Papers
Yuqing Wang University of Helsinki, Finland, Mika Mäntylä University of Helsinki and University of Oulu, Serge Demeyer University of Antwerp and Flanders Make vzw, Mutlu Beyazıt University of Antwerp and Flanders Make vzw, Joanna Kisaakye University of Antwerp, Belgium, Jesse Nyyssölä University of Helsinki
DOI
16:40
20m
Talk
CAShift: Benchmarking Log-Based Cloud Attack Detection under Normality Shift
Research Papers
Jiongchi Yu Singapore Management University, Xiaofei Xie Singapore Management University, Qiang Hu Tianjin University, Bowen Zhang Singapore Management University, Ziming Zhao Zhejiang University, Yun Lin Shanghai Jiao Tong University, Lei Ma The University of Tokyo & University of Alberta, Ruitao Feng Southern Cross University, Frank Liauw Government Technology Agency Singapore
DOI Pre-print
17:00
20m
Talk
Detecting and Handling WoT Violations by Learning Physical Interactions from Device Logs
Research Papers
Bingkun Sun Fudan University, Shiqi Sun Northwestern Polytechnique University, Jialin Ren Fudan University, Mingming Hu Fudan University, Kun Hu School of Computer Science, Fudan University, Liwei Shen Fudan University, Xin Peng Fudan University
DOI

Wed 25 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

11:00 - 12:30
12:10
20m
Talk
Prompts Are Programs Too! Understanding How Developers Build Software Containing Prompts
Research Papers
Jenny T. Liang Carnegie Mellon University, Melissa Lin Carnegie Mellon University, Nikitha Rao Carnegie Mellon University, Brad A. Myers Carnegie Mellon University
DOI
11:00 - 12:30
11:00
20m
Talk
De-duplicating Silent Compiler Bugs via Deep Semantic Representation
Research Papers
Junjie Chen Tianjin University, Xingyu Fan Tianjin University, Chen Yang Tianjin University, Shuang Liu Renmin University of China, Jun Sun Singapore Management University
DOI
11:20
20m
Talk
DiSCo: Towards Decompiling EVM Bytecode to Source Code using Large Language Models
Research Papers
Xing Su National Key Lab for Novel Software Technology, Nanjing University, China, Hanzhong Liang National Key Lab for Novel Software Technology, Nanjing University, China, Hao Wu, Ben Niu State Key Laboratory of Information Security, Institute of Information Engineering, China, Fengyuan Xu National Key Lab for Novel Software Technology, Nanjing University, China, Sheng Zhong National Key Lab for Novel Software Technology, Nanjing University, China
DOI
12:00
20m
Talk
PDCAT: Preference-Driven Compiler Auto-Tuning
Research Papers
Mingxuan Zhu Peking University, Zeyu Sun Institute of Software, Chinese Academy of Sciences, Dan Hao Peking University
DOI
11:00 - 12:20
Program Analysis 3 (Research Papers / Demonstrations / Industry Papers) at Cosmos 3D
Chair(s): Earl T. Barr University College London
11:40
20m
Talk
Towards Diverse Program Transformations for Program Simplification
Research Papers
Haibo Wang Concordia University, Zezhong Xing Southern University of Science and Technology, Chengnian Sun University of Waterloo, Zheng Wang University of Leeds, Shin Hwei Tan Concordia University
DOI
12:00
20m
Talk
CRISPE: Semantic-Guided Execution Planning and Dynamic Reasoning for Enhancing Code Coverage Prediction
Research Papers
Hridya Dhulipala University of Texas at Dallas, Aashish Yadavally University of Texas at Dallas, Smit Soneshbhai Patel University of Texas at Dallas, Tien N. Nguyen University of Texas at Dallas
DOI
11:00 - 12:30
SE and AI 2 (Ideas, Visions and Reflections / Research Papers) at Cosmos Hall
Chair(s): Massimiliano Di Penta University of Sannio, Italy
11:00
20m
Talk
Beyond PEFT: Layer-Wise Optimization for More Effective and Efficient Large Code Model Tuning
Research Papers
Chaozheng Wang The Chinese University of Hong Kong, Jiafeng University of Electronic Science and Technology of China, Shuzheng Gao Chinese University of Hong Kong, Cuiyun Gao Harbin Institute of Technology, Shenzhen, Li Zongjie Hong Kong University of Science and Technology, Ting Peng Tencent Inc., Hailiang Huang Tencent Inc., Yuetang Deng Tencent, Michael Lyu Chinese University of Hong Kong
DOI
11:20
20m
Talk
Automated Trustworthiness Oracle Generation for Machine Learning Text Classifiers
Research Papers
Lam Nguyen Tung Monash University, Australia, Steven Cho The University of Auckland, New Zealand, Xiaoning Du Monash University, Neelofar Neelofar Royal Melbourne Institute of Technology (RMIT), Valerio Terragni University of Auckland, Stefano Ruberto JRC European Commission, Aldeida Aleti Monash University
DOI Pre-print
11:40
20m
Talk
A Causal Learning Framework for Enhancing Robustness of Source Code Models
Research Papers
Junyao Ye Huazhong University of Science and Technology, Zhen Li Huazhong University of Science and Technology, Xi Tang Huazhong University of Science and Technology, Deqing Zou Huazhong University of Science and Technology, Shouhuai Xu University of Colorado Colorado Springs, Qiang Weizhong Huazhong University of Science and Technology, Hai Jin Huazhong University of Science and Technology
DOI
12:00
20m
Talk
Eliminating Backdoors in Neural Code Models for Secure Code Understanding
Research Papers
Weisong Sun Nanjing University, Yuchen Chen Nanjing University, Chunrong Fang Nanjing University, Yebo Feng Nanyang Technological University, Yuan Xiao Nanjing University, An Guo Nanjing University, Quanjun Zhang School of Computer Science and Engineering, Nanjing University of Science and Technology, Zhenyu Chen Nanjing University, Baowen Xu Nanjing University, Yang Liu Nanyang Technological University
DOI
11:00 - 12:30
Software Tests (Journal First / Demonstrations / Research Papers) at Pirsenteret 150
Chair(s): Tien N. Nguyen University of Texas at Dallas
11:50
20m
Talk
Automated Unit Test Refactoring
Research Papers
Yi Gao Zhejiang University, Xing Hu Zhejiang University, Xiaohu Yang Zhejiang University, Xin Xia Zhejiang University
DOI
12:10
20m
Talk
Understanding and Characterizing Mock Assertions in Unit Tests
Research Papers
Hengcheng Zhu The Hong Kong University of Science and Technology, Valerio Terragni University of Auckland, Lili Wei McGill University, Shing-Chi Cheung Hong Kong University of Science and Technology, Jiarong Wu, Yepang Liu Southern University of Science and Technology
DOI Pre-print
11:00 - 12:30
11:00
20m
Talk
ChatDBG: Augmenting Debugging with Large Language Models
Research Papers
Kyla H. Levin University of Massachusetts Amherst, USA, Nicolas van Kempen University of Massachusetts Amherst, USA, Emery D. Berger University of Massachusetts Amherst and Amazon Web Services, Stephen N. Freund Williams College
DOI Pre-print
11:30
20m
Talk
Empirically Evaluating the Impact of Object-Centric Breakpoints on the Debugging of Object-Oriented Programs
Research Papers
Valentin Bourcier INRIA, Pooja Rani University of Zurich, Maximilian Ignacio Willembrinck Santander Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL F-59000 Lille, France, Alberto Bacchelli University of Zurich, Steven Costiou INRIA Lille
DOI
11:50
20m
Talk
An Empirical Study of Bugs in Data Visualization Libraries
Research Papers
Weiqi Lu The Hong Kong University of Science and Technology, Yongqiang Tian, Xiaohan Zhong The Hong Kong University of Science and Technology, Haoyang Ma Hong Kong University of Science and Technology, Zhenyang Xu University of Waterloo, Shing-Chi Cheung Hong Kong University of Science and Technology, Chengnian Sun University of Waterloo
DOI
12:10
20m
Talk
DuoReduce: Bug Isolation for Multi-Layer Extensible Compilation
Research Papers
Jiyuan Wang University of California at Los Angeles, Yuxin Qiu University of California at Riverside, Ben Limpanukorn University of California, Los Angeles, Hong Jin Kang University of Sydney, Qian Zhang University of California at Riverside, Miryung Kim UCLA and Amazon Web Services
DOI Pre-print
14:00 - 15:30
14:30
20m
Talk
Demystifying LLM-based Software Engineering Agents
Research Papers
Chunqiu Steven Xia University of Illinois at Urbana-Champaign, Yinlin Deng University of Illinois at Urbana-Champaign, Soren Dunn University of Illinois Urbana-Champaign, Lingming Zhang University of Illinois at Urbana-Champaign
DOI
14:00 - 15:30
14:20
20m
Talk
The Landscape of Toxicity: An Empirical Investigation of Toxicity on GitHub
Research Papers
Jaydeb Sarker University of Nebraska at Omaha, Asif Kamal Turzo Wayne State University, Amiangshu Bosu Wayne State University
DOI Pre-print
14:40
20m
Talk
Expressing and Checking Statistical Assumptions
Research Papers
Alexi Turcotte CISPA, Zheyuan Wu Saarland University
DOI
15:00
20m
Talk
Why the Proof Fails in Different Versions of Theorem Provers: An Empirical Study of Compatibility Issues in Isabelle
Research Papers
Xiaokun Luan Peking University, David Sanan Singapore Institute of Technology, Zhe Hou Griffith University, Qiyuan Xu Nanyang Technological University, Chengwei Liu Nanyang Technological University, Yufan Cai National University of Singapore, Yang Liu Nanyang Technological University, Meng Sun Peking University
DOI
14:00 - 15:20
Testing 4 (Industry Papers / Research Papers / Demonstrations) at Cosmos 3D
Chair(s): Antonio Mastropaolo William and Mary, USA
14:00
20m
Talk
Detecting and Reducing the Factual Hallucinations of Large Language Models with Metamorphic Testing
Research Papers
Weibin Wu Sun Yat-sen University, Yuhang Cao Sun Yat-sen University, Ning Yi Sun Yat-sen University, Rongyi Ou Sun Yat-sen University, Zibin Zheng Sun Yat-sen University
DOI
14:50
20m
Talk
Adaptive Random Testing with Qgrams: the Illusion Comes True
Research Papers
Matteo Biagiola Università della Svizzera italiana, Robert Feldt Chalmers | University of Gothenburg, Paolo Tonella USI Lugano
DOI Pre-print
14:00 - 15:20
LLM for SE 4 (Research Papers / Journal First) at Cosmos Hall
Chair(s): Ting Su East China Normal University
14:20
20m
Talk
Calibration of Large Language Models on Code Summarization
Research Papers
Yuvraj Virk UC Davis, Prem Devanbu University of California at Davis, Toufique Ahmed IBM Research
DOI
14:40
20m
Talk
Code Red! On the Harmfulness of Applying Off-the-shelf Large Language Models to Programming Tasks
Research Papers
Ali Al-Kaswan Delft University of Technology, Netherlands, Sebastian Deatc Delft University of Technology, Begüm Koç Delft University of Technology, Arie van Deursen TU Delft, Maliheh Izadi Delft University of Technology
DOI Pre-print
14:00 - 15:30
Program Analysis 4 (Demonstrations / Journal First / Research Papers) at Pirsenteret 150
Chair(s): Matthew B Dwyer University of Virginia
14:10
20m
Talk
Recasting Type Hints from WebAssembly Contracts
Research Papers
Kunsong Zhao The Hong Kong Polytechnic University, Zihao Li Hong Kong Polytechnic University, Weimin Chen The Hong Kong Polytechnic University, Xiapu Luo Hong Kong Polytechnic University, Ting Chen University of Electronic Science and Technology of China, Guozhu Meng Institute of Information Engineering, Chinese Academy of Sciences, Yajin Zhou Zhejiang University; ZJU-Hangzhou Global Scientific and Technological Innovation Center
DOI
14:30
20m
Talk
Medusa: A Framework for Collaborative Development of Foundation Models with Automated Parameter Ownership Assignment
Research Papers
Dezhi Ran Peking University, Yuan Cao Peking University, Yuzhe Guo Beijing Jiaotong University, Yuetong Li The University of Chicago, Mengzhou Wu Peking University, Simin Chen University of Texas at Dallas, Wei Yang UT Dallas, Tao Xie Peking University
DOI
14:00 - 15:30
Dependency (Research Papers / Journal First / Demonstrations) at Vega
Chair(s): Alexandre Bartel Umeå University
14:00
20m
Talk
Automatically fixing dependency breaking changes
Research Papers
Lukas Fruntke University College London, Jens Krinke University College London
DOI
14:30
20m
Talk
Pinning Is Futile: You Need More Than Local Dependency Versioning to Defend Against Supply Chain Attacks
Research Papers
Hao He Carnegie Mellon University, Bogdan Vasilescu Carnegie Mellon University, Christian Kästner Carnegie Mellon University
DOI
15:10
20m
Talk
On the Characteristics and Impacts of Protestware Libraries
Research Papers
Tanner Finken University of Arizona, Jesse Chen University of Arizona, Sazzadur Rahaman University of Arizona, Tucson, Arizona, USA
DOI

Unscheduled Events

Not scheduled
Talk
Alert Summarization for Online Service Systems by Validating Propagation Paths of Faults
Research Papers
ChenJ, Yuang He Fudan University, Peng Wang Fudan University, XiaoLei Chen Fudan University, Jie Shi Fudan University, Wei Wang Fudan University
DOI
Not scheduled
Talk
Ransomware Detection through Temporal Correlation between Encryption and I/O Behavior
Research Papers
Lihua Guo Tsinghua University, Yiwei Hou Tsinghua University, Chijin Zhou Tsinghua University, Quan Zhang Tsinghua University, Yu Jiang Tsinghua University
DOI
Not scheduled
Talk
RegTrieve: Reducing System-Level Regression Errors for Machine Learning Systems via Retrieval-Enhanced Ensemble
Research Papers
Junming Cao Fudan University, Xuwen Xiang Fudan University, China, Mingfei Cheng Singapore Management University, Bihuan Chen Fudan University, Xinyan Wang Fudan University, China, You Lu Fudan University, Chaofeng Sha Fudan University, Xiaofei Xie Singapore Management University, Xin Peng Fudan University
DOI
Not scheduled
Talk
Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?
Research Papers
Zhenpeng Chen Nanyang Technological University, Xinyue Li Peking University, Jie M. Zhang King's College London, Weisong Sun Nanyang Technological University, Ying Xiao King's College London, Li Tianlin Nanyang Technological University, Yiling Lou Fudan University, Yang Liu Nanyang Technological University
DOI
Not scheduled
Talk
Automated Recognition of Buggy Behaviors from Mobile Bug Reports
Research Papers
Zhaoxu Zhang University of Southern California, Komei Ryu University of Southern California, Tingting Yu University of Connecticut, William G.J. Halfond University of Southern California
DOI
Not scheduled
Talk
CKTyper: Enhancing Type Inference for Java Code Snippets by Leveraging Crowdsourcing Knowledge in Stack Overflow
Research Papers
Anji Li Sun Yat-sen University, Neng Zhang Central China Normal University, Ying Zou Queen's University, Kingston, Ontario, Zhixiang Chen Sun Yat-sen University, Jian Wang Wuhan University, Zibin Zheng Sun Yat-sen University
DOI
Not scheduled
Talk
Automated Extraction and Analysis of Developer’s Rationale in Open Source Software
Research Papers
Mouna Dhaouadi University of Montreal, Bentley Oakes Polytechnique Montréal, Michalis Famelis Université de Montréal
DOI

Accepted Papers

10 years later: revisiting how developers search for code
Research Papers
DOI
A Causal Learning Framework for Enhancing Robustness of Source Code Models
Research Papers
DOI
A Comprehensive Study of Bug-Fix Patterns in Autonomous Driving Systems
Research Papers
DOI Pre-print
Adaptive Random Testing with Qgrams: the Illusion Comes True
Research Papers
DOI Pre-print
A Knowledge Enhanced Large Language Model for Bug Localization
Research Papers
DOI
Alert Summarization for Online Service Systems by Validating Propagation Paths of Faults
Research Papers
DOI
AlphaTrans: A Neuro-Symbolic Compositional Approach for Repository-Level Code Translation and Validation
Research Papers
DOI Pre-print Media Attached
A Mixed-Methods Study of Model-Based GUI Testing in Real-World Industrial Settings
Research Papers
DOI
An adaptive language-agnostic pruning method for greener language models for code
Research Papers
DOI Pre-print
An Empirical Study of Bugs in Data Visualization Libraries
Research Papers
DOI
An Empirical Study of Code Clones from Commercial AI Code Generators
Research Papers
DOI
An Empirical Study of Suppressed Static Analysis Warnings
Research Papers
DOI
An Empirical Study on Release-Wise Refactoring Patterns
Research Papers
DOI
A New Approach to Evaluating Nullability Inference Tools
Research Papers
DOI
Automated and Accurate Token Transfer Identification and Its Applications in Cryptocurrency Security
Research Papers
DOI
Automated Extraction and Analysis of Developer’s Rationale in Open Source Software
Research Papers
DOI
Automated Recognition of Buggy Behaviors from Mobile Bug Reports
Research Papers
DOI
Automated Soap Opera Testing Directed by LLMs and Scenario Knowledge: Feasibility, Challenges, and Road Ahead
Research Papers
DOI
Automated Trustworthiness Oracle Generation for Machine Learning Text Classifiers
Research Papers
DOI Pre-print
Automated Unit Test Refactoring
Research Papers
DOI
Automatically Detecting Numerical Instability in Machine Learning Applications via Soft Assertions
Research Papers
DOI
Automatically fixing dependency breaking changes
Research Papers
DOI
Beyond Functional Correctness: Investigating Coding Style Inconsistencies in Large Language Models
Research Papers
DOI
Beyond PEFT: Layer-Wise Optimization for More Effective and Efficient Large Code Model Tuning
Research Papers
DOI
Blended Analysis for Predictive Execution
Research Papers
DOI
Bridging Operator Semantic Inconsistencies: A Source-level Cross-framework Model Conversion Approach
Research Papers
DOI
Calibration of Large Language Models on Code Summarization
Research Papers
DOI
CAShift: Benchmarking Log-Based Cloud Attack Detection under Normality Shift
Research Papers
DOI Pre-print
ChangeGuard: Validating Code Changes via Pairwise Learning-Guided Execution
Research Papers
DOI
ChatDBG: Augmenting Debugging with Large Language Models
Research Papers
DOI Pre-print
CKTyper: Enhancing Type Inference for Java Code Snippets by Leveraging Crowdsourcing Knowledge in Stack Overflow
Research Papers
DOI
Clone Detection for Smart Contracts: How Far Are We?
Research Papers
DOI
Code Change Intention, Development Artifact and History Vulnerability: Putting Them Together for Vulnerability Fix Detection by LLM
Research Papers
DOI
Code Red! On the Harmfulness of Applying Off-the-shelf Large Language Models to Programming Tasks
Research Papers
DOI Pre-print
COFFE: A Code Efficiency Benchmark for Code Generation
Research Papers
DOI
Core Developer Turnover in the Rust Package Ecosystem: Prevalence, Impact, and Awareness
Research Papers
DOI
CoverUp: Effective High Coverage Test Generation for Python
Research Papers
DOI Pre-print
CRISPE: Semantic-Guided Execution Planning and Dynamic Reasoning for Enhancing Code Coverage Prediction
Research Papers
DOI
Cross-System Categorization of Abnormal Traces in Microservice-Based Systems via Meta-Learning
Research Papers
DOI
CXXCrafter: An LLM-Based Agent for Automated C/C++ Open Source Software Building
Research Papers
DOI
DeclarUI: Bridging Design and Development with Automated Declarative UI Code Generation
Research Papers
DOI
De-duplicating Silent Compiler Bugs via Deep Semantic Representation
Research Papers
DOI
Demystifying LLM-based Software Engineering Agents
Research Papers
DOI
Demystifying Memorization in LLM-based Program Repair via a General Hypothesis Testing Framework
Research Papers
DOI
Detecting and Handling WoT Violations by Learning Physical Interactions from Device Logs
Research Papers
DOI
Detecting and Reducing the Factual Hallucinations of Large Language Models with Metamorphic Testing
Research Papers
DOI
Detecting Metadata-Related Bugs in Enterprise Applications
Research Papers
DOI
Detecting Smart Contract State-Inconsistency Bugs via Flow Divergence and Multiplex Symbolic Execution
Research Papers
DOI
Directed Testing in MLIR: Unleashing Its Potential by Overcoming the Limitations of Random Fuzzing
Research Papers
DOI
DiSCo: Towards Decompiling EVM Bytecode to Source Code using Large Language Models
Research Papers
DOI
Dissecting Real-World Cross-Language Bugs
Research Papers
DOI
Divide-and-Conquer: Generating UI Code from Screenshots
Research Papers
DOI
Doc2OracLL: Investigating the Impact of Documentation on LLM-based Test Oracle Generation
Research Papers
DOI
DuoReduce: Bug Isolation for Multi-Layer Extensible Compilation
Research Papers
DOI Pre-print
DyLin: A Dynamic Linter for Python
Research Papers
DOI Pre-print
Dynamic Taint Tracking for Modern Java Virtual Machines
Research Papers
DOI
Element-Based Automated DNN Repair with Fine-Tuned Masked Language Model
Research Papers
DOI
Eliminating Backdoors in Neural Code Models for Secure Code Understanding
Research Papers
DOI
Empirically Evaluating the Impact of Object-Centric Breakpoints on the Debugging of Object-Oriented Programs
Research Papers
DOI
Enhancing Web Accessibility: Automated Detection of Issues with Generative AI
Research Papers
DOI
Error Delayed is Not Error Handled: Understanding and Fixing Propagated Error-Handling Bugs
Research Papers
DOI
Expressing and Checking Statistical Assumptions
Research Papers
DOI
Gleipner: A Benchmark for Gadget Chain Detection in Java Deserialization Vulnerabilities
Research Papers
DOI
Hallucination Detection in Large Language Models with Metamorphic Relations
Research Papers
DOI
Has My Code Been Stolen for Model Training? A Naturalness Based Approach to Code Contamination Detection
Research Papers
DOI
HornBro: Homotopy-like Method for Automated Quantum Program Repair
Research Papers
DOI
How Do Programming Students Use Generative AI?
Research Papers
DOI Pre-print
Impact of Request Formats on Effort Estimation: Are LLMs Different than Humans?
Research Papers
DOI
Improving Graph Learning-Based Fault Localization with Tailored Semi-Supervised Learning
Research Papers
DOI
Incorporating Verification Standards for Security Requirements Generation from Functional Specifications
Research Papers
DOI
Integrating Large Language Models and Reinforcement Learning for Non-Linear Reasoning
Research Papers
DOI
IRepair: An Intent-Aware Approach to Repair Data-Driven Errors in Large Language Models
Research Papers
DOI
It’s Acting Odd! Exploring Equivocal Behaviors of Goodware
Research Papers
DOI
Large Language Models for In-File Vulnerability Localization can be “Lost in the End”
Research Papers
DOI
Less is More: On the Importance of Data Quality for Unit Test Generation
Research Papers
DOI
Liberating libraries through automated fuzz driver generation: Striking a Balance Without Consumer Code
Research Papers
DOI
LlamaRestTest: Effective REST API Testing with Small Language Models
Research Papers
DOI
LLM-based Method Name Suggestion with Automatically Generated Context-Rich Prompts
Research Papers
DOI
LLMDroid: Enhancing Automated Mobile App GUI Testing Coverage with Large Language Model Guidance
Research Papers
DOI
LookAhead: Preventing DeFi Attacks via Unveiling Adversarial Contracts
Research Papers
DOI Pre-print
Medusa: A Framework for Collaborative Development of Foundation Models with Automated Parameter Ownership Assignment
Research Papers
DOI
MendelFuzz: The Return of the Deterministic Stage
Research Papers
DOI
MiSum: Multi-Modality Heterogeneous Code Graph Learning for Multi-Intent Binary Code Summarization
Research Papers
DOI
Mitigating Emergent Malware Label Noise in DNN-Based Android Malware Detection
Research Papers
DOI
Multi-Modal Traffic Scenario Generation for Autonomous Driving System Testing
Research Papers
DOI Pre-print
Mystique: Automated Vulnerability Patch Porting with Semantic and Syntactic-Enhanced LLM
Research Papers
DOI
NLP Libraries, Energy Consumption and Runtime - An Empirical Study
Research Papers
DOI
No More Labelled Examples? An Unsupervised Log Parser with LLMs
Research Papers
DOI
On-Demand Scenario Generation for Testing Automated Driving Systems
Research Papers
DOI Pre-print
One-for-All Does Not Work! Enhancing Vulnerability Detection by Mixture-of-Experts (MoE)
Research Papers
DOI
On the Characteristics and Impacts of Protestware Libraries
Research Papers
DOI
On the Unnecessary Complexity of Names in X.509 and Their Impact on Implementations
Research Papers
DOI
PDCAT: Preference-Driven Compiler Auto-Tuning
Research Papers
DOI
Pinning Is Futile: You Need More Than Local Dependency Versioning to Defend Against Supply Chain Attacks
Research Papers
DOI
Prompts Are Programs Too! Understanding How Developers Build Software Containing Prompts
Research Papers
DOI
Protecting Privacy in Software Logs: What Should be Anonymized?
Research Papers
DOI
QSF: Multi-Objective Optimization based Efficient Solving for Floating-Point Constraints
Research Papers
DOI
Ransomware Detection through Temporal Correlation between Encryption and I/O Behavior
Research Papers
DOI
Recasting Type Hints from WebAssembly Contracts
Research Papers
DOI
RegTrieve: Reducing System-Level Regression Errors for Machine Learning Systems via Retrieval-Enhanced Ensemble
Research Papers
DOI
ReproCopilot: LLM-Driven Failure Reproduction with Dynamic Refinement
Research Papers
DOI
RePurr: Automated Repair of Block-Based Learners' Programs
Research Papers
DOI
Revisiting Optimization-Resilience Claims in Binary Diffing Tools: Insights from LLVM Peephole Optimization Analysis
Research Papers
DOI
Revolutionizing Newcomers' Onboarding Process in OSS Communities: The Future AI Mentor
Research Papers
DOI Pre-print
ROSCallBaX: Statically Detecting Inconsistencies In Callback Function Setup of Robotic Systems
Research Papers
DOI
Scene Flow Specifications: Encoding and Monitoring Rich Temporal Safety Properties of Autonomous Systems
Research Papers
DOI
Scientific Open-Source Software Is Less Likely To Become Abandoned Than One Might Think! Lessons from Curating a Catalog of Maintained Scientific Software
Research Papers
Link to publication DOI
SemBIC: Semantic-aware Identification of Bug-inducing Commits
Research Papers
DOI
Smaller but Better: Self-Paced Knowledge Distillation for Lightweight yet Effective LCMs
Research Papers
DOI
Smart Contract Fuzzing Towards Profitable Vulnerabilities
Research Papers
DOI Pre-print File Attached
SmartNote: An LLM-Powered, Personalised Release Note Generator That Just Works
Research Papers
DOI
SmartShot: Hunt Hidden Vulnerabilities in Smart Contracts using Mutable Snapshots
Research Papers
DOI
Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?
Research Papers
DOI
Standing on the Shoulders of Giants: Bug-Aware Automated GUI Testing via Retrieval Augmentation
Research Papers
DOI
Statement-level Adversarial Attack on Vulnerability Detection Models via Out-Of-Distribution Features
Research Papers
DOI
Teaching AI the ‘Why’ and ‘How’ of Software Vulnerability Fixes
Research Papers
DOI
TerzoN: Human-in-the-Loop Software Testing with a Composite Oracle
Research Papers
Link to publication DOI
The Landscape of Toxicity: An Empirical Investigation of Toxicity on GitHub
Research Papers
DOI Pre-print
The Struggles of LLMs in Cross-lingual Code Clone Detection
Research Papers
DOI
Today's cat is tomorrow's dog: accounting for time-based changes in the labels of ML vulnerability detection approaches
Research Papers
DOI Pre-print
Towards Diverse Program Transformations for Program Simplification
Research Papers
DOI
Towards Understanding Docker Build Faults in Practice: Symptoms, Root Causes, and Fix Patterns
Research Papers
DOI
Towards Understanding Fine-Grained Programming Mistakes and Fixing Patterns in Data Science
Research Papers
DOI
Towards Understanding Performance Bugs in Popular Data Science Libraries
Research Papers
DOI
TracePicker: Optimization-based Trace Sampling for Microservice-based Systems
Research Papers
DOI
Understanding and Characterizing Mock Assertions in Unit Tests
Research Papers
DOI Pre-print
Understanding Debugging as Episodes: A Case Study on Performance Bugs in Configurable Software Systems
Research Papers
DOI
Understanding Industry Perspectives of Static Application Security Testing (SAST) Evaluation
Research Papers
DOI
UnitCon: Synthesizing Targeted Unit Tests for Java Runtime Exceptions
Research Papers
DOI
Unlocking Optimal ORM Database Designs: Accelerated Tradeoff Analysis with Transformers
Research Papers
DOI Pre-print
VLATest: Testing and Evaluating Vision-Language-Action Models for Robotic Manipulation
Research Papers
DOI Pre-print
VulPA: Detecting Semantically Recurring Vulnerabilities with Multi-Object Typestate Analysis
Research Papers
DOI
Who Will Stop Contributing to OSS Projects? Predicting Company Turnover Based on Initial Behavior
Research Papers
DOI
Why the Proof Fails in Different Versions of Theorem Provers: An Empirical Study of Compatibility Issues in Isabelle
Research Papers
DOI
Zero-Shot Cross-Domain Code Search without Fine-Tuning
Research Papers
DOI

Call for Papers

We invite high-quality submissions, from both industry and academia, describing original and unpublished results of theoretical, empirical, conceptual, and experimental software engineering research.

Contributions should describe innovative and significant original research. Papers describing groundbreaking approaches to emerging problems are welcome, as are replication papers. Submissions that facilitate reproducibility by using available datasets or by making the described tools and datasets publicly available are especially encouraged. For a list of specific topics of interest, please see the end of this call.

Note #1: The Proceedings of the ACM on Software Engineering (PACMSE) Issue FSE 2025 seeks contributions through submissions in this track. Accepted papers will be invited for presentation at FSE 2025. ACM granted approval in July 2023. PACMSE will be the only proceedings in which accepted research track papers are published. Please check the FAQ for details.

Note #2: The steering committee has decided that starting from 2024 the conference name will be changed to ACM International Conference on the Foundations of Software Engineering (FSE).

Note #3: Based on the coordination among FSE, ICSE, and ASE steering committees, the FSE conference and submission dates have been moved earlier, similarly to FSE 2024 deadlines. The intention is for this schedule to remain stable in the years ahead and the conference and submission deadlines of the three large general software engineering conferences to be spread out throughout the year.

Note #4: Submissions must follow the “ACM Policy on Authorship” released April 20, 2023, which contains policy regarding the use of Generative AI tools and technologies, such as ChatGPT. Please also check the ACM FAQ which describes in what situations generative AI tools can be used (with or without acknowledgement).

Note #5: The names and list of authors as well as the title in the camera-ready version cannot be modified from the ones in the submitted version unless there is explicit approval from the track chairs.

Note #6: Submissions that change the required submission format to gain additional space will be desk rejected. Examples of changing the format include removing the ACM Reference format block or the “Permission to make digital or hard copies…” footnote from the first page.

Tracks

This CFP refers to the Research Track of FSE 2025. For the remaining tracks, please check the specific calls on the website: https://conf.researchr.org/home/fse-2025

HOW TO SUBMIT

The following only applies to the main track of FSE. For the other tracks please see the general formatting instructions.

At the time of submission, each paper should have no more than 18 pages for all text and figures, plus 4 pages for references, using the following templates: LaTeX or Word (Mac) or Word (Windows). Authors using LaTeX should use the sample-acmsmall-conf.tex file (found in the samples folder of the acmart package) with the acmsmall option. We also strongly encourage the use of the review, screen, and anonymous options. In summary, you want to use: \documentclass[acmsmall,screen,review,anonymous]{acmart} (a minimal example skeleton is sketched after the submission link below). Papers may use either numeric or author-year format for citations. The page layout is single-column. Submissions that do not comply with the above instructions will be desk rejected without review. Papers must be submitted electronically through the FSE 2025 submission site:

https://fse2025.hotcrp.com
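
For orientation, a minimal LaTeX skeleton consistent with the instructions above could look as follows; this is an illustrative sketch rather than an official template, and the title, author details, and bibliography file name are placeholders (with the anonymous option, acmart suppresses author identities in the generated PDF):

    \documentclass[acmsmall,screen,review,anonymous]{acmart}

    % Keep author information in the source; the "anonymous" option hides it in the output.
    \author{Author Name}
    \affiliation{\institution{Institution}\country{Country}}
    \title{Paper Title (Placeholder)}

    \begin{document}

    % acmart expects the abstract before \maketitle.
    \begin{abstract}
    Abstract text goes here.
    \end{abstract}

    \maketitle

    \section{Introduction}
    Either numeric or author-year citation style may be used.

    % "references" is a placeholder .bib file name.
    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references}

    \end{document}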

Each submission will be reviewed by at least three members of the program committee. In the first round, each submission receives one of three outcomes: accept, reject, or major revision. When the initial outcome of the three reviews is major revision, authors will have an opportunity to address the reviewers’ requests during a 6-week major revision period. Such requests may include additional experiments or new analyses of existing results; major rewriting of algorithms and explanations; or clarifications, better scoping, and improved motivation. The revised submission must be accompanied by a response letter in which the authors explain how they addressed each concern expressed by the reviewers. The same reviewers who requested the major revision will then assess whether the revised submission satisfies their requests adequately.

Submissions will be evaluated on the basis of originality, importance of contribution, soundness, evaluation (if relevant), quality of presentation, and appropriate comparison to related work. Some papers may have more than three reviews, as PC chairs may solicit additional reviews based on factors such as reviewer expertise and strong disagreement between reviewers. The program committee as a whole will make final decisions about which submissions to accept for publication.

In addition to declaring the topics which are relevant for their submissions, authors will be asked to declare the research methods employed in their submissions. This will enable us to ensure reviewer expertise both for research methods and topics. For full definitions of the research methods, see the SIGSOFT Empirical Standards.

Double-Anonymous Review Process

In order to ensure the fairness of the reviewing process, the FSE 2025 Research Papers Track will employ a double-anonymous review process, where reviewers do not know the identity of authors, and authors do not know the identity of external reviewers. The papers submitted must not reveal the authors’ identities in any way:

  • Authors should leave out author names and affiliations from the body of their submission.
  • Authors should ensure that any citation to related work by themselves is written in third person, that is, “the prior work of XYZ” as opposed to “our prior work”.
  • Authors should not include URLs to author-revealing sites (tools, datasets). Authors are still encouraged to follow open science principles and submit replication packages, see more details on the open science policy below.
  • Authors should anonymize author-revealing company names, providing instead the general characteristics of the organizations involved that are needed to understand the context of the paper.
  • Authors should ensure that paper acknowledgements do not reveal the origin of their work.
  • While authors have the right to upload preprints on arXiv or similar sites, they should avoid specifying that the manuscript was submitted to FSE 2025.
  • During review, authors should not publicly use the submission title.

The double-anonymous process used this year is “heavy”, i.e., the paper anonymity will be maintained during all reviewing and discussion periods. In case of major revision, authors must therefore maintain anonymity in their response letter and must provide no additional information that could be author-revealing.

To facilitate double-anonymous reviewing, we recommend that authors postpone publishing their submitted work on arXiv or similar sites until after the notification. If the authors have already uploaded to arXiv or similar, they should avoid specifying that the manuscript was submitted to FSE 2025.

Authors with further questions on double-anonymous reviewing are encouraged to contact the program chairs by email. Papers that do not comply with the double-anonymous review process will be desk-rejected.

Submission Policies

The authors must follow the “ACM Policy on Authorship” released April 20, 2023 and its accompanying FAQ including the following points:

  • “Generative AI tools and technologies, such as ChatGPT, may not be listed as authors of an ACM published Work. The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: ChatGPT was utilized to generate sections of this Work, including text, tables, graphs, code, data, citations, etc.). If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the acknowledgements section of the Work.”
  • “If you are using generative AI software tools to edit and improve the quality of your existing text in much the same way you would use a typing assistant like Grammarly to improve spelling, grammar, punctuation, clarity, engagement or to use a basic word processing system to correct spelling or grammar, it is not necessary to disclose such usage of these tools in your Work.”

Please read the full policy and FAQ.

Papers submitted for consideration to FSE should not have been already published elsewhere and should not be under review or submitted for review elsewhere during the reviewing period. Specifically, authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

To prevent double submissions, the chairs might compare the submissions with related conferences that have overlapping review periods. The double submission restriction applies only to refereed journals and conferences, not to unrefereed forums (e.g. arXiv.org). To check for plagiarism issues, the chairs might use external plagiarism detection software.

All publications are subject to the ACM Author Representations policy.

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects.

Alleged violations of any of the above policies will be reported to ACM for further investigation and may result in a full retraction of your paper, in addition to other potential penalties, as per the ACM Publications Policies.

Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process if your paper is accepted. ACM has been involved in ORCID from the start and has recently committed to collecting ORCID IDs from all published authors. ACM is committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

The authors of accepted papers are invited and strongly encouraged to attend the conference to present their work. Attendance at the event is not mandatory for publication. Authors also have the option of not presenting their work at the conference, in which case they do not need to register.

Important Dates

All dates are 23:59:59 AoE (UTC-12h)

  • Paper registration: September 5, 2024 (to register a paper, paper title, abstract, author list, and some additional metadata are required; title and abstract must contain sufficient information for effective bidding; registrations containing empty or generic title and abstract may be dropped)
  • Full paper submission: September 12, 2024
  • Author response: November 22-26, 2024
  • Initial notification: January 14, 2025 (long discussion period due to year-end holidays)
  • Revised manuscript submissions (major revisions only): February 25, 2025
  • Final notification for major revisions: April 1, 2025
  • Camera ready: April 24, 2025

The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work. Please also note that the names and list of authors as well as the title in the camera-ready version cannot be modified from the ones in the submitted version unless there is explicit approval from the track chairs.

Open Science Policy

The research track of FSE has introduced an open science policy. Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. The steering principle is that all research results should be accessible to the public, if possible, and that empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles and encourage all contributing authors to disclose (anonymized and curated) data to increase reproducibility and replicability.

Upon submission to the research track, authors are asked to make a replication package available to the program committee (via upload of supplemental material or a link to a private or public repository) or to comment on why this is not possible or desirable. Furthermore, authors are asked to indicate whether they intend to make their data publicly available upon acceptance. We ask authors to provide a supporting statement on the availability of a replication package (or lack thereof) in their submitted papers in a section named Data Availability after the Conclusion section. Be careful that such statements continue to maintain author anonymity. For more details, see the FSE open science policy.
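
As a purely illustrative sketch (the wording and the anonymized repository URL are placeholders, not prescribed text), such a Data Availability statement could be written as a short unnumbered section after the Conclusion, e.g. in the acmart template sketched earlier, which loads hyperref and therefore provides \url:

    % Illustrative sketch; the anonymized repository URL is a placeholder.
    \section*{Data Availability}
    A replication package containing our scripts and anonymized data is available
    at \url{https://anonymous.4open.science/r/replication-package} and will be
    archived publicly upon acceptance.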

Authors of accepted papers will be given an opportunity (and encouragement) to submit their data and tools to the separate FSE’25 artifact evaluation committee.

Topics of Interest

Topics of interest include, but are not limited to:

  • Artificial intelligence and machine learning for software engineering
  • Autonomic computing
  • Debugging and fault localization
  • Dependability, safety, and reliability
  • Distributed and collaborative software engineering
  • Embedded software, safety-critical systems, and cyber-physical systems
  • Empirical software engineering
  • Human and social aspects of software engineering
  • Human-computer interaction
  • Mining software repositories
  • Mobile development
  • Model checking
  • Model-driven engineering
  • Parallel, distributed, and concurrent systems
  • Performance engineering
  • Program analysis
  • Program comprehension
  • Program repair
  • Program synthesis
  • Programming languages
  • Recommendation systems
  • Requirements engineering
  • Search-based software engineering
  • Services, components, and cloud
  • Software architectures
  • Software engineering education
  • Software engineering for machine learning and artificial intelligence
  • Software evolution
  • Software processes
  • Software security
  • Software testing
  • Software traceability
  • Symbolic execution
  • Tools and environments

FAQ on Review Process: Major Revisions, Open Science Policy, Double-Anonymous Reviewing

PACMSE Proceedings

Q: What paper format shall we follow for FSE 2025?

A: Papers accepted by the technical track of FSE 2025 will be published in the inaugural journal issue of the Proceedings of the ACM on Software Engineering (PACMSE). ACM granted approval in late July 2023. Please check the Research Papers How to Submit section for details.

Q: How would the inaugural PACMSE journal affect FSE 2025?

A: FSE papers will be published in the inaugural PACMSE journal, following the recent practice of other communities such as PACMPL (PLDI, POPL, OOPSLA, etc.), PACMHCI, PACMMOD, and PACMNET.

Identity: FSE papers will be published in a dedicated issue of PACMSE, with FSE as the issue name. This means that FSE papers will keep their identity!

Paper format: The paper format will follow ACM’s requirements. This is a switch from the traditional FSE two-column format to the new PACMSE single-column format. However, the amount of content should remain more or less the same: FSE 2025’s 18-page limit in the single-column format maps roughly to the old two-column limit of 10 pages.

Review process: FSE already had a major-revision cycle in 2023 and 2024, which maps neatly onto PACMSE’s requirement of two rounds of reviews, so there are no PACMSE-related changes here.

Conference presentations: FSE 2025’s move to PACMSE changes only how the proceedings are published. All accepted papers are still guaranteed a presentation slot at the conference in the usual way.

Policy on Authorship (e.g., regarding ChatGPT)

Q: What is the policy on Authorship, especially considering the use of Generative AI tools and technologies, such as ChatGPT?

A: Submissions must follow the “ACM Policy on Authorship” released April 20, 2023, which contains policy regarding the use of Generative AI tools and technologies, such as ChatGPT. Please also check the ACM FAQ which describes in what situations generative AI tools can be used (with or without acknowledgment).

Major Revision Process

Q: Why is FSE allowing major revisions?

A: SE conferences are currently forced to reject papers that contain valuable material but would need major changes to become acceptable for conference presentation, because major revisions cannot be accommodated in the traditional review process. By supporting only a binary outcome, conferences force reviewers to decide between rejection and acceptance even in borderline cases that would be better judged after a round of major revision. This causes additional reviewing burden for the community (the paper is resubmitted to another venue with new reviewers) and inconsistency for the authors (the new reviewers have different opinions). We hope that allowing major revisions will both increase the acceptance rate of FSE and help reduce these problems with the reviewing process.

For Authors

Q: If my paper receives major revisions, what happens next?

A: The meta-review will clearly and explicitly list all major changes required by the reviewers to make the paper acceptable for publication. Authors of these papers are granted 6 weeks to implement the requested changes. In addition to the revised paper, authors are asked to submit a response letter that explains how each required change was implemented. If any change was not implemented, authors can explain why. The same reviewers will then review the revised paper and make their final (binary) decision. Authors can also choose to withdraw their submission if they wish.

Q: Will major revision become the default decision causing initial acceptance rates to drop?

A: This is not the intention: reviewers are instructed to accept all papers that would have been accepted when major revision was not an available outcome.

For Reviewers

Q: When shall I recommend major revision for a paper?

A: Major revision should not become the default choice for borderline papers. It should be used only if:

  • without a major revision the paper would be rejected, while a properly done major revision, which addresses the reviewers’ concerns, could make the paper acceptable for publication;
  • the requested changes are doable in 6 weeks and are implementable within the page limit;
  • the requested changes are strictly necessary for paper acceptance (i.e., not just nice-to-have features);
  • the requested changes require a recheck (i.e., reviewers cannot trust the authors to implement them directly in the camera ready).

Q: When shall I recommend rejection instead of major revision?

A: Rejection is a more appropriate outcome than major revision if:

  • the requested additions/changes are not implementable in 6 weeks;
  • the contribution is very narrow or not relevant to the SE audience, and it cannot be retargeted in 6 weeks;
  • the methodology is flawed and cannot be fixed in 6 weeks;
  • results are unconvincing, the paper does not seem to improve the state of the art much, and new convincing results are unlikely to be available after 6 weeks of further experiments;
  • the customary benchmark used in the community was ignored and cannot be adopted and compared to in 6 weeks.

Q: When shall I recommend acceptance instead of major revision?

A: We do not want major revision to become the primary pathway for acceptance. We should continue to trust the authors to make minor changes to the submission in the camera ready version. Acceptance is preferable if:

  • the requested additions/changes are nice-to-have features, not mandatory for the acceptability of the work;
  • only minor improvements of the text are needed;
  • minor clarifications requested by the reviewers should be incorporated;
  • important but not critical references should be added and discussed;
  • the discussion of results could be improved, but the current one is already sufficient.

Q: What is the difference between major revision and shepherding?

A: Major revision is not shepherding. While shepherding typically focuses on important but minor changes, which can be specified in an operational way and can be checked quite easily and quickly by reviewers, major revisions require major changes (although doable in 6 weeks), which means the instructions for the authors cannot be completely operational and the check will need to go deeply into the new content delivered by the paper. Hence, while the expectation for shepherded papers is that most of them will be accepted once the requested changes are implemented, this is not necessarily the case with major revisions.

Q: Is there a quota of papers that can have major revision as outcome?

A: As there is no quota for accepted papers, there is also no quota for major revisions. However, we expect that thanks to major revisions we will eventually be able to accept 10-15% more papers, while keeping the quality bar absolutely unchanged.

Q: What shall I write in the meta-review of a paper with major revision outcome?

A: With the possibility of a major revision outcome, meta-reviews become extremely important. The meta-review should clearly and explicitly list all major changes required by the reviewers to make the paper acceptable for publication. The meta-review should act as a contract between reviewers and authors, such that when all required changes are properly made, the paper is accepted. In this respect, the listed changes should be extremely clear, precise, and implementable.

Review Process

For Authors

Q: Can I withdraw my paper?

A: Yes, papers can be withdrawn at any time using HotCRP.

Q: Is appendix or other supplemental materials allowed?

A: The main submission file must follow the page limit. Any supplemental materials including appendix and replication packages must be submitted separately under “Supplemental Material”. Program Committee members can review supplemental materials but are not obligated to review them.

For Reviewers

Q: The authors have provided a URL to supplemental material. I would like to see the material but I worry they will snoop my IP address and learn my identity. What should I do?

A: Contact the Program Co-Chairs, who will download the material on your behalf and make it available to you.

Q: If I am assigned a paper for which I feel I am not an expert, how do I seek an outside review?

A: PC members should do their own reviews and not delegate them to someone else. If you believe an outside review is needed, please contact the Program Co-Chairs, especially since additional reviewers might have a different set of conflicts of interest.

Open Science Policy

Q: What is the FSE 2025 open science policy and how can I follow it?

A: Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. Upon submission to the research track, authors are asked to:

  • make their data available to the program committee (via upload of supplemental material or a link to an anonymous repository) and provide instructions on how to access this data in the paper; or
  • include in the paper an explanation as to why this is not possible or desirable; and
  • indicate if they intend to make their data publicly available upon acceptance. This information should be provided in the submitted papers in a section named Data Availability after the Conclusion section. For more details, see the FSE open science policy.

Q: How can I upload supplementary material via the HotCRP site and make it anonymous for double-anonymous review?

A: To conform to the double-anonymous policy, please include an anonymized URL. Code and data repositories may be exported to remove version control history, scrubbed of names in comments and metadata, and anonymously uploaded to a sharing site. Instructions are provided in the FSE open science policy.

Double-Anonymous Reviewing (DAR)

Q: Why are you using double-anonymous reviewing?

A: Studies have shown that a reviewer’s attitude toward a submission may be affected, even unconsciously, by the identity of the authors.

Q: Do you really think DAR actually works? I suspect reviewers can often guess who the authors are anyway.

A: It is rare for authorship to be guessed correctly, even by expert reviewers, as detailed in this study.

For Authors

Q: What exactly do I have to do to anonymize my paper?

A: Your job is not to make your identity undiscoverable but simply to make it possible for reviewers to evaluate your submission without having to know who you are: omit authors’ names from your title page, and when you cite your own work, refer to it in the third person. Also, be sure not to include any acknowledgements that would give away your identity. You should also avoid revealing the institutional affiliation of authors.

Q: I would like to provide supplementary material for consideration, e.g., the code of my implementation or proofs of theorems. How do I do this?

A: On the submission site, there will be an option to submit supplementary material along with your main paper. You can also share supplementary material in a private or publicly shared repository (preferred). This supplementary material should also be anonymized; it may be viewed by reviewers during the review period, so it should adhere to the same double-anonymous guidelines. See instructions on the FSE open science policy.

Q: My submission is based on code available in a public repository. How do I deal with this?

A: Making your code publicly available is not incompatible with double-anonymous reviewing. You can create an anonymized version of the repository and include a new URL that points to the anonymized version of the repository, similar to how you would include supplementary materials to adhere to the Open Science policy. Authors wanting to share GitHub repositories may want to look into using https://anonymous.4open.science/ which is an open source tool that helps you to quickly double-anonymize your repository.

Q: I am building on my own past work on the WizWoz system. Do I need to rename this system in my paper for purposes of anonymity, so as to remove the implied connection between my authorship of past work on this system and my present submission?

A: Maybe. The core question is really whether the system is one that, once identified, automatically identifies the author(s) and/or the institution. If the system is widely available, and especially if it has a substantial body of contributors and has been out for a while, then these conditions may not hold (e.g., LLVM or HotSpot), because there would be considerable doubt about authorship. By contrast, a paper on a modification to a proprietary system (e.g., Visual C++, or a research project that has not open-sourced its code) implicitly reveals the identity of the authors or their institution. If naming your system essentially reveals your identity (or institution), then anonymize it. In your submission, point out that the system name has been anonymized. If you have any doubts, please contact the Program Co-Chairs.

Q: I am submitting a paper that extends my own work that previously appeared at a workshop. Should I anonymize any reference to that prior work?

A: No. But we recommend you do not use the same title for your FSE submission, so that it is clearly distinguished from the prior paper. In general, there is rarely a good reason to anonymize a citation. When in doubt, contact the Program Co-Chairs.

Q: Am I allowed to post my (non-anonymized) paper on my web page or arXiv?

A: You can discuss and present your work that is under submission at small meetings (e.g., job talks, visits to research labs, a Dagstuhl or Shonan meeting), but you should avoid broadly advertising it in a way that reaches the reviewers even if they are not searching for it. Whenever possible, please avoid posting your manuscript on public archives (e.g., arXiv) before or during the submission period. If you still prefer to do so, carefully avoid adding to the manuscript any reference to FSE 2025 (e.g., footnotes saying “Submitted to FSE 2025”).

Q: Can I give a talk about my work while it is under review? How do I handle social media?

A: We have developed guidelines, described here, to help everyone navigate the tension between the normal communication of scientific results, which double-anonymous reviewing should not impede, and actions that essentially force potential reviewers to learn the identity of the authors of a submission. Roughly speaking, you may (of course!) discuss work under submission, but you should not broadly advertise your work through media that are likely to reach your reviewers. We acknowledge there are grey areas and trade-offs; we cannot describe every possible scenario.

Things you may do:

  • Put your submission on your home page.
  • Discuss your work with anyone who is not on the review committees, or with people on the committees with whom you already have a conflict.
  • Present your work at professional meetings, job interviews, etc.
  • Submit work previously discussed at an informal workshop, previously posted on arXiv or a similar site, previously submitted to a conference not using double-anonymous reviewing, etc.

Things you should not do:

  • Contact members of the review committees about your work, or deliberately present your work where you expect them to be.
  • Publicize your work on major mailing lists used by the community (because potential reviewers likely read these lists).
  • Publicize your work on social media if wide public [re-]propagation is common (e.g., Twitter) and therefore likely to reach potential reviewers. For example, on Facebook, a post with a broad privacy setting (public or all friends) saying, “Whew, FSE paper in, time to sleep” is okay, but one describing the work or giving its title is not appropriate. Alternatively, a post to a group including only the colleagues at your institution is fine.

Reviewers will not be asked to recuse themselves from reviewing your paper unless they feel you have gone out of your way to advertise your authorship information to them. If you are unsure about what constitutes “going out of your way”, please contact the Program Co-Chairs.

Q: Will the fact that FSE is double-anonymous have an impact on handling conflicts of interest?

A: Double-anonymous reviewing does not change the principle that reviewers should not review papers with which they have a conflict of interest, even if they do not immediately know who the authors are. Authors declare conflicts of interest when submitting their papers using the guidelines in the Call for Papers. Papers will not be assigned to reviewers who have a conflict. Note that you should not declare gratuitous conflicts of interest and the chairs will compare the conflicts declared by the authors with those declared by the reviewers. Papers abusing the system will be desk-rejected.

For Reviewers

Q: What should I do if I learn the authors’ identity? What should I do if a prospective FSE author contacts me and asks to visit my institution?

A: If you feel that the authors’ actions are largely aimed at ensuring that potential reviewers know their identity, contact the Program Co-Chairs. Otherwise, you should not treat double-anonymous reviewing differently from other reviewing. In particular, refrain from seeking out information on the authors’ identity, but if you discover it accidentally this will not automatically disqualify you as a reviewer. Use your best judgement.

Q: How do we handle potential conflicts of interest since I cannot see the author names?

A: The conference review system will ask that you identify conflicts of interest when you get an account on the submission system.

Q: How should I avoid learning the authors’ identity, if I am using web-search in the process of performing my review?

A: You should make a good-faith effort not to find the authors’ identity during the review period, but if you inadvertently do so, this does not disqualify you from reviewing the paper. As part of the good-faith effort, please turn off Google Scholar auto-notifications. Please do not use search engines with terms like the paper’s title or the name of a new system being discussed. If you need to search for related work you believe exists, do so after completing a preliminary review of the paper.

The above guidelines are partly based on the PLDI FAQ on double-anonymous reviewing and the ICSE 2023 guidelines on double-anonymous submissions.
