FSE 2026
Sun 5 - Thu 9 July 2026, Montreal, Canada

Accepted Papers

All papers listed below were accepted to the Research Papers track.

Accelerating Policy Synthesis in Large-Scale MDPs via Hierarchical Adaptive Refinement
AccessDroid: Detecting Screen Reader Accessibility Issues in Android Applications via Semantics Trees
AccessRefinery: Fast Mining Concise Access Control Intents on Public Cloud
ACME: Automated Clause Mapping Engine for Testing Emerging Database Systems
Active Learning of Symbolic Automata for Reactive Programs via Dynamic Symbolic Mapper
AdaDec: A Uncertainty-Guided Lookahead Decoding Framework for LLM-based Code Generation
Adaptive Mutation Scheduling with Deep Reinforcement Learning for Smart Contract Fuzzing
AgentBound: Securing Execution Boundaries of AI Agents
Agentic Verification of Software Systems
A Grounded Theory of Debugging in Professional Software Engineering Practice
Aligning with Human Coding Preferences for Improving Code Generation
An Empirical Study of Fuzz Harness Degradation
A Tuple-Oriented Sampling Method for Generating Small Pairwise Covering Arrays in Configurable Software Systems
Automated Detection of Configuration-Specific Security Vulnerabilities via Patch Analysis
Automated Knowledge-Aware Test Reuse
Automated Repair of Requirements for Cyber-Physical Systems in Simulink Requirements Tables
Automated Repair of TEE Partitioning Issues via DSL-Guided and LLM-Assisted Patching
Automating Dockerfile Refactoring to Multi-Stage Builds
A Wily Hare Has Three Havens: Combating Programmable Logic Controller Attacks via Virtualization Redundancy
BackportBench: A Multilingual Benchmark for Automated Patch Backporting
Balancing Latency and Accuracy of Code Completion via Local-Cloud Model Cascading
Bash-Commenter: Leveraging Syntax-Aware Preference Optimization to Reinforce Large Language Model for Bash Code Comment Generation
Behind Defective Mobile AR Apps: Studying Reviews and Bugs of Android AR Software with Comparison to Prior Bug Studies
Beyond Language Boundaries: Uncovering Programming Language Families for Code Language Models
Binvariants: Enhancing Fuzzing of Closed-source Binary Executables via Register-level Likely Invariants
Boosting LLMs for Mutation Generation
Break to Adapt: Knowledge-Based Updates of Breaking Dependencies in JavaScript
Bringing Managed Language Support to WebAssembly with External Library Linking
Building Software by Rolling the Dice: A Qualitative Study of Vibe Coding
Can Old Tests do New Tricks for Resolving SWE Issues?
Carbon-Taxed Transformers: A Green Compression Pipeline for Overgrown Language Models
Cascaded Code Editing: Large-Small Model Collaboration for Effective and Efficient Code Editing
CASCADE: Detecting Inconsistencies between Code and Documentation with Automatic Test Generation
Casting a SPELL: Sentence Pairing Exploration for LLM Limitation-breaking
CertiCoder: Towards MISRA-Compliant C Code Generation with LLMs
ChainDelta: Automatic Patch-based Exploit Generation for Ethereum with Fuzzing Agents
Characterizing and Mitigating False-Positive Bug Reports in the Linux Kernel
Characterizing Trust Boundary Vulnerabilities in TEE Container Systems: An Empirical Study
Chiseling Out Efficiency: Structured Skeleton Supervision for Efficient Code Generation
Clotho: Measuring Task-Specific Pre-Generation Test Adequacy for LLM Inputs
CodeCureAgent: Automatic Classification and Repair of Static Analysis Warnings
Coding in a Bubble? Evaluating LLMs in Resolving Context Adaptation Bugs During Code Adaptation
Co-Evolution of Types and Dependencies: Towards Repository-Level Type Inference for Python Code
Comment Traps: How Defective Commented-out Code Augment Defects in AI-Assisted Code Generation
Compiling Code LLMs into Lightweight Executables
Cost-Effective Testing of MPC Compilers
CrossFit: Demystifying VM Callback Bugs in Interpreters
Cross-Refactoring-Type Test Program Migration for Refactoring Engines
CrypFormBench: Benchmarking Formal Analysis Capability of Large Language Models for Cryptographic schemes
CuFuzz: An API-Knowledge-Graph Coverage-Driven Fuzzing Framework for CUDA Libraries
Debugging Engine Enhanced by Prior Knowledge: Can We Teach LLM How to Debug?
DECODE: Dynamic Exploration for Constraint-Guided Vulnerability Discovery in Deep Learning Operators
Denoising Fault Localization with Test Line Proximity
Deployability-Centric Infrastructure-as-Code Generation: Fail, Learn, Refine, and Succeed through LLM-Empowered DevOps Simulation
Detecting Bugs in Rust Compiler Fix Suggestions via Constraint-Violation-Guided Mutation
Detecting Code-Comment Inconsistencies in Smart Contracts by Combining LLM and Program Analysis
DiverFPS: Generating Diverse Solutions for Floating-Point SMT Formulas
Do Not Treat Code as Natural Language: Implications for Repository-Level Code Generation and Beyond
DualCodeDetect: Zero-Shot LLM-Generated Code Detection via Dual-Channel Perturbation
DuCodeMark: Dual-Purpose Code Dataset Watermarking via Style-Aware Watermark–Poison Design
EfficientUICoder: A Bidirectional Token Compression Framework for Efficient MLLM-based UI Code Generation
Eidolon: Perform Noise-Aware Fuzzing on FHE Libraries via Equivalence Expression Transformation
Empirical Insights of Test Selection Metrics under Multiple Testing Objectives and Distribution Shifts
Empowering Autonomous Debugging Agents with Efficient Dynamic Analysis
Evaluating LLM-based Regression Test Generation
Evaluating Risk and Confidence in Performance Bounds of Configuration Sampling Strategies
EventADL: Open-Box Anomaly Detection and Localization Framework for Events in Cloud-Based Service Systems
Event-B Agent: Towards LLM Agent for Formal Model Synthesis and Repair
Exorcist: Enabling Atomic-Level Runtime Detection of Spectre Attacks Using Precise Event Based Sampling
ExpeRepair: Dual-Memory Enhanced LLM-based Repository-Level Program Repair
Failing with Purpose: Dangling Coverage-Guided Negative Test Generation from a Mechanized P4 Type System
Failure-Based Testing for Deep Reinforcement Learning Agents
Fairness Testing of Large Language Models in Role-Playing
Feature Slice Matching for Precise Bug Detection
Flash: Query-Efficient Black-Box Static Malware Evasion through Transferable GAN-Guided Modification Sequences
Fool Me If You Can: On the Robustness of Binary Code Similarity Detection Models against Semantics-preserving Transformations
From Particles to Perils: SVGD-Based Hazardous Scenario Generation for Autonomous Driving Systems Testing
From Specifications to Implementation in the Gen-AI Era: Lessons from a Project-based Software Engineering Course
From Suspicious Signals to Crashes: Guiding Bug-driven GUI Testing via Code-inspired Tracing
GadgetHunter: Region-Based Neuro-Symbolic Detection of Java Deserialization Vulnerabilities
GAER: Graph Auto-Encoders for Unsupervised Software Architecture Recovery
Generalizing Test Cases for Comprehensive Test Scenario Coverage
GPU-Accelerated Flow-Sensitive Pointer Analysis for C/C++ Programs
GraphLocator: Graph-guided Causal Reasoning for Issue Localization
GraphQLify: Automated and Type Safety-Preserving GraphQL API Adoption
GREClue: Failure Indexing with Graph-based Failure Representation and Entropy-based Deep Clustering
GUIMigrator: Semantics-Preserving Transpilation from Android XML to Compose and SwiftUI
Hallucinations in LLM-based Code Summarization: Unveiling, Detection, and Mitigation
How Do Developers Interact with AI? An Exploratory Study on Modeling Developer Programming Behavior
How Low Can You Go? The Data-Light SE Challenge
iCoRe: An Iterative Correlation-Aware Retriever for Bug Reproduction Test Generation
Improving Data Leakage Detection in Machine Learning Notebooks through Static Slicing and Structured LLM Prompts
In Bugs We Trust? On Measuring the Randomness of a Fuzzer Benchmarking Outcome
InDe-LLM: Defending Against Jailbreak Attacks in LLM-Powered Systems via Intention Disentangling
Influence-Aware Bayesian-Inspired Token Reweighting for Improved Code Generation
In Line with Context: Repository-Level Code Generation via Context Inlining
IntentTester: Intent-Driven Multi-Agent Framework for Cross-Library Test Migration
Interrogation Testing of CHC Solvers
It Takes Two: Option-Aware Directed Greybox Fuzzing for Vulnerability PoC Generation
JavaScript Pointer Analysis with Adaptive Heap Abstraction
Knowledge-Graph-Driven Data Synthesis for Low-Resource Software Development: A HarmonyOS Case Study
Large Language Models for Opaque Predicate Resolution: A Universal Control Flow Deobfuscation Framework
LinkAnchor: An Autonomous LLM-Based Agent for Issue-to-Commit Link Recovery
LLM-Assisted Input-Requirement-Aware Differential Testing of Array Programming Frameworks
LoCaL: Countering Surface Bias in Code Evaluation Metrics
Look Before You Leap: Context-Sensitive GUI Grounding for Boosting Automated Extended Reality (XR) Testing
MetaRCA: A Generalizable Root Cause Analysis Framework for Cloud-Native Systems Powered by Meta Causal Knowledge
Mining Long Tail Bugs: Identifying Rare and Overlooked Issues in Code
Mitigating Prompt-Induced Cognitive Biases in General-Purpose AI for Software Engineering
Mitigating the Risk of Defects and Improving Knowledge Distribution with Code Reviewer Recommenders
MR-Coupler: Automated Metamorphic Test Generation via Functional Coupling Analysis
Multi-LLM Persona Generation for Virtual Focus Groups in Software Engineering: A Controlled, Multi-Domain Study of Emotional Requirements Elicitation
Natural Language-Focused Software Engineering via Code-Documentation Equivalence
NESA: Relational Neuro-Symbolic Static Program Analysis
Neuron-Guided Interpretation of Code LLMs: Where, Why, and How?
Not All RAGs Are Created Equal: A Component-Wise Empirical Study for Software Engineering Tasks
OCPPuzz: Specification-driven Fuzzing of Charging Station Management Systems with Large Language Model
OdoTest: An Automated Testing Approach for Odometry Systems
Odyssey: Hunting Smart Contract Vulnerabilities with Fine-grained State Modeling and Exploration
One Size Does Fit All: Exploring Model Fusion for Software Engineering Tasks
One Size Does Not Fit All: Revisiting Code Context Engineering for Repository-Level Code Generation
On the Road to Personalized Code Intelligence: Portraiting and Assisting Developers Based on Their In-IDE Behaviors
Phantom Rendering Detection: Identifying and Analyzing unnecessary UI computations
Pig: Leveraging Large Language Models for Python Library Migrations
PlayCoder: Making LLM-Generated GUI Code Playable
PoCGen: Generating Proof-of-Concept Exploits for Vulnerabilities in Npm Packages
pPatch: Automated Vulnerability Unpatching
Precondition Synthesis for Deep Neural Networks with Statistical Guarantees
PROGnosticator: Testing Source-to-Source Code Translators via Construct-oriented Fuzzing
Project-Level C-to-Rust Translation via Pointer Knowledge Graphs
ProofFusion: Improving Neural Theorem Proving via Adaptive Retrieval-Augmented Reasoning
Property Refinement in Linear Temporal Logic: Formal Semantics and Algorithms for Software Verification
Protocol Reverse Engineering via Deep Transfer Learning
PuzzleMark: Implicit Jigsaw Learning for Robust Code Dataset Watermarking in Neural Code Completion Models
QuanForge: A Mutation Testing Framework for Quantum Neural Networks
RAT: Retrieval-Augmented Testing of Certificate Revocation List Parsers in TLS Implementations
RealBench: A Repo-Level Code Generation Benchmark Aligned with Real-World Software Development Practices
Recommending Usability Improvements with Multimodal Large Language Models
ReDef: Do Code Language Models Truly Understand Code Changes for Just-in-Time Software Defect Prediction?
Red Teaming LLMs via Linguistic-Aware Fuzzing
Reducing Cost of LLM Agents with Trajectory Reduction
Reducing Coverage-Equivalent Inputs in Grammar-based Fuzzing by Avoiding Recurrent Rule Sequences
Reducing the TCB of SGX-oriented LibOSes at Runtime
ReFLAIR: Detecting Responsive Layout Reflow Issues using Multimodal Generative AI
ReGA: Model-based Safeguard for LLMs via Representation-Guided Abstraction
RepoReasoner: Evaluating Repository-Level Code Reasoning Ability of Long-Context Language Models
Rethinking the Evaluation of Microservice RCA with a Fault Propagation-Aware Benchmark
Revealing Regressions: A Comparative Study of State-Capture Strategies in Validating Program Behavior
Reward-Free Code Alignment from Pretrained or Fine-Tuned LLM: Unpacking the Trade-offs for Code Generation
Satisfiability Solving with LLMs
SBridge: Identifying Source-to-Binary Function Similarity via Cross-Domain Control Block Matching
ScanCoder: Leveraging Human Attention Patterns to Enhance LLMs for Code
Semantics-Guided Control-Flow Reconstruction for Firmware Binaries via Static Analysis
Small is Beautiful: A Practical and Efficient Log Parsing Framework
SmartCoder-R1: Towards Secure and Explainable Smart Contract Generation with Security-Aware Group Relative Policy Optimization
SmartDispatch: Dynamic Substitution of NumPy-style APIs on Heterogenous CPU-GPU Systems
SmartIFSyn: Automated Information Flow Security Policy Synthesis for Smart Contracts
SmarTrim: Symbolic Execution for Smart Contracts Powered by Redundant Transaction-Sequence Pruning
SnakeCharmer: Automatic Fuzzing Harness Generation for Pure and Hybrid Python Libraries
Sound Termination and Non-Termination Analysis of C Programs with Bit-Precise Bounded Semantics and Advanced Constructs
Spectrum-based Failure Attribution for Multi-Agent Systems
Speculate: Generating REST API Specifications Using LLMs
SpecWeaver: End-to-End HTTP API Specification Inference Across Multi-Layer Routing in Production Web Services
SQLiFuzz: Uncover SQL Injection in Any Web Applications
StepFly: Agentic Troubleshooting Guide Automation for Incident Diagnosis
Still Manual? Automated Linter Configuration via DSL-Based LLM Compilation of Coding Standards
Structure-Aware Delta Debugging with Geometric-Information Weights
SwarmBox: A Plug-and-Play Drone Swarm Framework for Streamlined Development and Comprehensive Analysis
SWE Data Construction, Automatically!
SWR-Bench: Assessing LLM Performance in Real-World Code Review Comment Generation
TestTailor: Generating High-Coverage Tests via Path-Proximal Tests with LLMs
The Interaction of Complexity and Provenance in Code Review Decisions: Evidence from a Controlled Experiment
Thought is All You Need: Smart Contract Vulnerability Detection with Thought-Augmented Large Language Model
Three Heads Are Better Than One: A Multi-Perspective Reasoning Framework for Enhanced Vulnerability Detection
TLR: Codebase-Level C Memory Management Error Repair with Large Language Models
TORAI: Multi-Source Root Cause Analysis for Blind Spots in Microservice Service Call Graph
Towards Automated Crowdsourced Testing via Personified-LLM
Towards Automated Smart Contract Generation: Evaluation, Benchmarking, and Retrieval-Augmented Repair
Towards Secure Logging: Characterizing and Benchmarking Logging Code Security Issues with LLMs
Towards the Localization of Multi-Root-Cause Failures in Microservice Systems: An Active Intervention Framework
ToxiShield: Promoting Inclusive Developer Communication through Real-Time Toxicity Filtering
TransAgent: Enhancing LLM-Based Code Translation via Fine-Grained Execution Alignment
TransLibEval: Demystify Large Language Models’ Capability in Third-party Library-targeted Code Translation
TSGuard: Automated User-Centric Incident Diagnosis for AI Workloads in the Cloud
TUSR: A Test Unit–Based Framework for Repairing Obsolete GUI Test Scripts
Two-Level Adaptation for Budget-Constrained Continuous Dynamic Dependence Analysis
TypePro: Boosting LLM-Based Type Inference via Inter-Procedural Slicing
Uncovering Similar but Different Packages in PyPI and Potential Security Threats
Understanding and Predicting Accepted Code Suggestions in AI-Assisted Programming
Understanding Binary Code Similarity for Real-World Vulnerability Detection: A Large-Scale Empirical Study
Understanding Code Similarity across Instruction Set Architectures: An Empirical Study
Understanding, Detecting, and Repairing Real-World In-Context-Learning-Based Text-to-SQL Errors
Understanding Performance Problems in CUDA Programs
Understanding the Limitations of C/C++ Binary Third-Party Library Detection Tool: An Empirical Study at Scale
Unfulfilled Promises: LLM-Based Detection of OS Compatibility Issues in Infrastructure as Code
UNICS: Multilingual Code Search via Unified Pseudocode and Contrastive Transfer Learning
Unleashing HPC Application Performance through Software Deployment: A Joint Model of Software Parallelism and Co-location
Unveiling AI-Driven Web Applications: Insights into Characteristics, Functionality, and Compliance
Unveiling the Fragility of Binary Code Similarity Detection via Targeted Attacks with Model Explanations
V2E: Validating Smart Contract Vulnerabilities through Profit-driven Exploit Generation and Execution
Validating LLM-Generated SQL Queries Through Metamorphic Prompting
Verifying Smart Contract Security Against Re-entrancy Attacks through Relational Value Analysis
Verifying Structural Robustness of Deep Neural Network
VerilogASTBench: Benchmark Construction of Verilog AST Dataset with Dual-Stage AST Semantic Enhancement Framework
ViBR: Automated Bug Replay from Video-based Reports Using Vision-Language Models
VisionScratch: LLM-Based Automated Feedback Generation Using Code-Produced Videos for Scratch Programs
VulInstruct: Teaching LLMs Root-Cause Reasoning for Vulnerability Detection via Security Specifications
VulKey: Automated Vulnerability Repair Guided by Domain-Specific Repair Patterns
WalleTruth: Visual-oriented Software Testing for Web3 Wallet Browser Extensions
WebTestPilot: Agentic End-to-End Web Testing against Natural Language Specification by Inferring Oracles with Symbolized GUI Elements
When Shared Worlds Break: Demystifying Defects in Multi-User Extended Reality Software Systems

Call for Papers

We invite high-quality submissions, from both industry and academia, describing original and unpublished results of theoretical, empirical, conceptual, and experimental software engineering research.

Contributions should describe innovative and significant original research. Papers describing groundbreaking approaches to emerging problems are also welcome, as well as replication papers. Submissions that facilitate reproducibility by using available datasets or making the described tools and datasets publicly available are especially encouraged. For a list of specific topics of interest, please see the end of this call.

Note #1: The Proceedings of the ACM on Software Engineering (PACMSE) Issue FSE 2026 seeks contributions through submissions to this track. Accepted papers will be invited for presentation at FSE 2026. Approval was granted by ACM in July 2023. PACMSE will be the only proceedings in which accepted research track papers are published. Please check the FAQ for details.

Note #2: The steering committee has decided that starting from 2024 the conference name will be changed to ACM International Conference on the Foundations of Software Engineering (FSE).

Note #3: Based on the coordination among FSE, ICSE, and ASE steering committees, the FSE conference and submission dates have been moved earlier, similarly to the FSE 2025 deadlines. The intention is for this schedule to remain stable in the years ahead and the conference and submission deadlines of the three large general software engineering conferences to be spread out throughout the year.

Note #4: Submissions must follow the “ACM Policy on Authorship” released April 20, 2023, which contains policy regarding the use of Generative AI tools and technologies, such as ChatGPT. Please also check the ACM FAQ which describes in what situations generative AI tools can be used (with or without acknowledgement).

Note #5: The names and list of authors as well as the title in the camera-ready version cannot be modified from the ones in the submitted version unless there is explicit approval from the track chairs.

Note #6: Submissions that change the required submission format to gain additional space will be desk rejected. Examples of format changes include removing the ACM Reference block or the “permission to make digital or hard copies” footnote from the first page.

Tracks

This CFP refers to the Research Track of FSE 2026. For the remaining tracks, please check the specific calls on the website: https://conf.researchr.org/home/fse-2026

How to Submit

The following only applies to the main track of FSE. For the other tracks please see the general formatting instructions.

At the time of submission, each paper should have no more than 18 pages for all text and figures, plus 4 pages for references. Major revisions should have no more than 20 pages for all text and figures, plus 4 pages for references. Papers should use the LaTeX or Word (Mac or Windows) templates; please consult the ACM proceedings website for more information about the latest versions of the various templates (https://www.acm.org/publications/proceedings-template). Authors using LaTeX should use the sample-acmsmall-conf.tex file (found in the samples folder of the acmart package) with the acmsmall option. We also strongly encourage the use of the review, screen, and anonymous options. In summary, you want to use:

\documentclass[acmsmall,screen,review,anonymous]{acmart}
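For orientation, a minimal skeleton of a compliant submission might look as follows. This is only a sketch: the title, author block, section names other than Data Availability, and the bibliography file name are placeholders, not requirements of this call.

\documentclass[acmsmall,screen,review,anonymous]{acmart}
\begin{document}
\title{An Example FSE Submission} % placeholder title
\author{Jane Doe} % rendered as "Anonymous Author(s)" by the anonymous option
\affiliation{\institution{Example University} \country{Canada}}
\begin{abstract}
A few sentences summarizing the contribution. % acmart expects the abstract before \maketitle
\end{abstract}
\maketitle
\section{Introduction}
Either numeric or author-year citations may be used.
\section{Conclusion}
\section{Data Availability}
Statement on the replication package; this section does not count toward the page limit.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % placeholder .bib file
\end{document}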

Papers may use either numeric or author-year format for citations. The page layout is single-column. Submissions that do not comply with the above instructions will be desk rejected without review. Papers must be submitted electronically through the FSE 2026 submission site:

https://fse2026.hotcrp.com

Each submission will be reviewed by at least three members of the program committee. The initial decision can be accept, reject, or major revision. When the initial decision is major revision, authors will have an opportunity to address all of the reviewers’ requests, including but not restricted to those in the meta-review, during an 8-week major revision period. Such requests may include additional experiments or new analyses of existing results; major rewriting of algorithms and explanations; or clarifications, better scoping, and improved motivation. The revised submission must be accompanied by a response letter in which the authors explain how they addressed each concern expressed by the reviewers. The same reviewers who requested the major revision will then assess whether the revised submission satisfies their requests adequately.

Submissions will be evaluated on the basis of originality, importance of contribution, soundness, evaluation (if relevant), quality of presentation, and appropriate comparison to related work. Some papers may have more than three reviews, as PC chairs may solicit additional reviews based on factors such as reviewer expertise and strong disagreement between reviewers. The program committee as a whole will make final decisions about which submissions to accept for publication.

Double-Anonymous Review Process

In order to ensure the fairness of the reviewing process, the FSE 2026 Research Papers Track will employ a double-anonymous review process, where reviewers do not know the identity of authors, and authors do not know the identity of reviewers. The papers submitted must not reveal the authors’ identities in any way:

  • Authors should leave out author names and affiliations from the body of their submission.

  • Authors should ensure that any citation to related work by themselves is written in third person, that is, “the prior work of XYZ” as opposed to “our prior work” (a short sketch after this list illustrates the pattern).

  • Authors should not include URLs to author-revealing sites (tools, datasets). Authors are still encouraged to follow open science principles and submit replication packages, see more details on the open science policy below.

  • Authors should anonymize organization names that might reveal author affiliations, and instead provide the general characteristics of the organizations involved needed to understand the context of the paper.

  • Authors are encouraged to avoid including an acknowledgements section in a paper at submission time.

  • While authors have the right to upload preprints on arXiv or similar sites, they should avoid specifying that the manuscript was submitted to FSE 2026.

  • During the review period, authors should not publicly use the submission title.
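For concreteness, a self-citation in an anonymized submission might look like the following LaTeX fragment; the author name “Doe”, the key “doe2024”, and the “FooTool” system are invented placeholders.

% Avoid author-revealing phrasing such as:
%   In our prior work~\cite{doe2024}, we introduced FooTool.
% Prefer the third person instead:
Doe et al.~\cite{doe2024} introduced FooTool; this paper extends their
approach with a new analysis.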

The double-anonymous process used is “heavy”, i.e., the paper anonymity will be maintained during all reviewing and discussion periods. In case of major revision, authors must therefore maintain anonymity in their response letter and must provide no additional information that could be author-revealing.

To facilitate double-anonymous reviewing, we recommend that authors postpone publishing their submitted work on arXiv or similar sites until after the notification. If the authors have uploaded to arXiv or similar, they should avoid specifying that the manuscript was submitted to FSE 2026. Authors with further questions on double-anonymous reviewing are encouraged to contact the program chairs by email. Papers that do not comply with the double-anonymous review process will be desk-rejected.

Submission Policies

The authors must follow the “ACM Policy on Authorship” released April 20, 2023 and its accompanying FAQ including the following points:

  • “Generative AI tools and technologies, such as ChatGPT, may not be listed as authors of an ACM published Work. The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: ChatGPT was utilized to generate sections of this Work, including text, tables, graphs, code, data, citations, etc. If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the acknowledgements section of the Work.”

  • “If you are using generative AI software tools to edit and improve the quality of your existing text in much the same way you would use a typing assistant like Grammarly to improve spelling, grammar, punctuation, clarity, engagement or to use a basic word processing system to correct spelling or grammar, it is not necessary to disclose such usage of these tools in your Work.”

Please read the full policy and FAQ.

Papers submitted for consideration to FSE should not have been already published elsewhere and should not be under review or submitted for review elsewhere during the reviewing period. Specifically, authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.

To prevent double submissions, the chairs might compare the submissions with related conferences that have overlapping review periods. The double submission restriction applies only to refereed journals and conferences, not to unrefereed forums (e.g. arXiv.org). To check for plagiarism issues, the chairs might use external plagiarism detection software.

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects.

Alleged violations to any of the above policies will be reported to ACM for further investigation and may result in a full retraction of your paper, in addition to other potential penalties, as per the ACM Publications Policies.

Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process if your paper is accepted. ACM has been involved in ORCID from the start and has recently made a commitment to collect ORCID IDs from all published authors. ACM is committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

The authors of accepted papers are invited and strongly encouraged to attend the conference to present their work. Attendance at the event is not mandatory for publication. Authors also have the option of not presenting their work at the conference, in which case they do not need to register.

Important Dates

All dates are 23:59:59 AoE (UTC-12h)

  • Paper registration: September 4, 2025 (to register a paper, paper title, abstract, author list, and some additional metadata are required; title and abstract must contain sufficient information for effective bidding; registrations containing empty or generic title and abstract may be dropped)

  • Full paper submission: September 11, 2025

  • Author response: November 21-25, 2025

  • Initial notification: December 22, 2025

  • Revised manuscript submissions (major revisions only): February 24, 2026

  • Final notification for major revisions: March 24, 2026

  • Camera ready: April 23, 2026

The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work. Please also note that the names and list of authors as well as the title in the camera-ready version cannot be modified from the ones in the submitted version unless there is explicit approval from the track chairs.

Open Science Policy

The research track of FSE has introduced an open science policy. Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. The steering principle is that all research results should be accessible to the public, if possible, and that empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles and encourage all contributing authors to disclose (anonymized and curated) data to increase reproducibility and replicability. Upon submission to the research track, authors are asked to make a replication package available to the program committee (via upload of supplemental material or a link to a private or public repository) or to comment on why this is not possible or desirable. Furthermore, authors are asked to indicate whether they intend to make their data publicly available upon acceptance. We ask authors to provide a supporting statement on the availability of a replication package (or lack thereof) in their submitted papers in a section named Data Availability after the Conclusion section. This statement will not count towards the page limit for the submission. Be careful that such statements continue to maintain author anonymity. For more details, see the FSE open science policy.
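For illustration, a minimal Data Availability statement might read as follows; the URL is a hypothetical placeholder in the style of the anonymized repositories discussed in the FAQ below.

\section{Data Availability}
% At submission time the link should point to an anonymized repository
% so that double-anonymity is preserved.
Our replication package (code, datasets, and analysis scripts) is
available at \url{https://anonymous.4open.science/r/replication-0000}.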

Authors of accepted papers will be given an opportunity (and encouragement) to submit their data and tools to the separate FSE 2026 artifact evaluation committee.

Topics of Interest

Topics of interest include, but are not limited to:

  • Artificial intelligence and machine learning for software engineering
  • Autonomic computing
  • Debugging and fault localization
  • Dependability, safety, and reliability
  • Distributed and collaborative software engineering
  • Embedded software, safety-critical systems, and cyber-physical systems
  • Empirical software engineering
  • Human and social aspects of software engineering
  • Human-computer interaction
  • Mining software repositories
  • Mobile development
  • Model checking
  • Model-driven engineering
  • Parallel, distributed, and concurrent systems
  • Performance engineering
  • Program analysis
  • Program comprehension
  • Program repair
  • Program synthesis
  • Programming languages
  • Recommendation systems
  • Requirements engineering
  • Search-based software engineering
  • Services, components, and cloud
  • Software architectures
  • Software engineering education
  • Software engineering for machine learning and artificial intelligence
  • Software evolution
  • Software processes
  • Software security
  • Software testing
  • Software traceability
  • Symbolic execution
  • Tools and environments

Important update on ACM’s new open access publishing model for 2026 ACM Conferences!

Starting January 1, 2026, ACM will fully transition to Open Access. All ACM publications, including those from ACM-sponsored conferences, will be 100% Open Access. Authors will have two primary options for publishing Open Access articles with ACM: the ACM Open institutional model or by paying Article Processing Charges (APCs). With over 1,800 institutions already part of ACM Open, the majority of ACM-sponsored conference papers will not require APCs from authors or conferences (currently, around 70-75%).

Authors from institutions not participating in ACM Open will need to pay an APC to publish their papers, unless they qualify for a financial or discretionary waiver. To find out whether an APC applies to your article, please consult the list of participating institutions in ACM Open and review the APC Waivers and Discounts Policy. Keep in mind that waivers are rare and are granted based on specific criteria set by ACM.

Understanding that this change could present financial challenges, ACM has approved a temporary subsidy for 2026 to ease the transition and allow more time for institutions to join ACM Open. The subsidy will offer:

  • $250 APC for ACM/SIG members
  • $350 APC for non-members

This represents a 65% discount, funded directly by ACM. Authors are encouraged to help advocate for their institutions to join ACM Open during this transition period.

This temporary subsidized pricing will apply to all conferences scheduled for 2026.

FAQ

This FAQ covers the review process and major revisions, the open science policy, double-anonymous reviewing, and the PACMSE proceedings.

Q: What paper format shall we follow for FSE 2026?

A: Papers accepted by the technical track of FSE 2026 will be published in the inaugural journal issue of the Proceedings of the ACM on Software Engineering (PACMSE). Approval was granted by ACM in July 2023. Please check the How to Submit section above for details.

Q: How would the inaugural PACMSE journal affect FSE 2026?

A: FSE will be published in the inaugural PACMSE journal following the recent practices of other communities such as PACMPL (PLDI, POPL, OOPSLA, etc.), PACMHCI, PACMMOD, PACMNET, etc.

Identity: FSE papers will be published in a dedicated issue of PACMSE, with FSE as the issue name. This means that FSE papers will keep their identity!

Paper format: The paper format will follow ACM’s requirements. This is a switch from the traditional FSE two-column format to the new PACMSE single-column format. However, the amount of content should remain more or less the same: FSE 2026’s 18-page limit in the single-column format maps roughly to the old two-column limit of 10 pages.

Review process: FSE already ran a major-revision cycle in 2023, 2024, and 2025, which maps neatly onto PACMSE’s requirement of two rounds of reviews, so there are no PACMSE-related changes here.

Conference presentations: FSE 2026’s move to PACMSE changes only how the proceedings are published. All accepted papers are still guaranteed a presentation slot at the conference in the usual way.

Policy on Authorship (e.g., regarding ChatGPT)

Q: What is the policy on Authorship, especially considering the use of Generative AI tools and technologies, such as ChatGPT?

A: Submissions must follow the “ACM Policy on Authorship” released April 20, 2023, which contains policy regarding the use of Generative AI tools and technologies, such as ChatGPT. Please also check the ACM FAQ which describes in what situations generative AI tools can be used (with or without acknowledgment).

Major Revision Process

Q: Why is FSE allowing major revisions?

A: SE conferences are currently forced to reject papers that include valuable material but would need major changes to become acceptable for conference presentation, because major revisions cannot be accommodated in the current review process. By supporting only a binary outcome, conferences force reviewers to decide between rejection and acceptance even in borderline cases that would be better judged after a round of major revision. This can cause additional reviewing burden for the community (the paper is resubmitted to another venue with new reviewers) and inconsistency for the authors (the new reviewers have different opinions). We hope that allowing major revisions will both increase the acceptance rate of FSE and help reduce these problems with the reviewing process.

For Authors

Q: If my paper receives major revisions, what happens next?

A: The meta-review will clearly and explicitly list all major changes required by the reviewers to make the paper acceptable for publication. Authors of these papers are granted 6 weeks to implement the requested changes. In addition to the revised paper, authors are asked to submit a response letter that explains how each required change was implemented. If any change was not implemented, authors can explain why. The same reviewers will then review the revised paper and make their final (binary) decision. Authors can also choose to withdraw their submission if they wish.

Q: Will major revision become the default decision causing initial acceptance rates to drop?

A: This is not the intention: reviewers are instructed to accept all papers that would have been accepted when major revision was not an available outcome.

For Reviewers

Q: When shall I recommend major revision for a paper?

A: Major revision should not become the default choice for borderline papers. It should be used only if:

  • without major revisions the paper would be rejected, while a properly done major revision, which addresses the reviewers’ concerns, could make the paper acceptable for publication;
  • the requested changes are doable in 6 weeks and are implementable within the page limit;
  • the requested changes are strictly necessary for paper acceptance (i.e., not just nice-to-have features);
  • the requested changes require a recheck (i.e., reviewers cannot trust the authors to implement them directly in the camera ready).

Q: When shall I recommend rejection instead of major revision?

A: Rejection is a more appropriate outcome than major revision if:

  • the requested additions/changes are not implementable in 6 weeks;
  • the contribution is very narrow or not relevant to the SE audience, and it cannot be retargeted in 6 weeks;
  • the methodology is flawed and cannot be fixed in 6 weeks;
  • the results are unconvincing, the paper does not seem to improve the state of the art much, and new convincing results are unlikely to be available after 6 weeks of further experiments;
  • the customary benchmark used in the community was ignored and cannot be adopted and compared to in 6 weeks.

Q: When shall I recommend acceptance instead of major revision?

A: We do not want major revision to become the primary pathway for acceptance. We should continue to trust the authors to make minor changes to the submissions in the camera ready version. Acceptance is preferable if:

  • the requested additions/changes are nice-to-have features, not mandatory for the acceptability of the work;
  • minor improvements of the text are needed;
  • minor clarifications requested by the reviewers should be incorporated;
  • important but not critical references should be added and discussed;
  • the discussion of results could be improved, but the current one is already sufficient.

Q: What is the difference between major revision and shepherding?

A: Major revision is not shepherding. While shepherding typically focuses on important but minor changes, which can be specified in an operational way and can be checked quite easily and quickly by reviewers, major revisions require major changes (although doable in 6 weeks), which means the instructions for the authors cannot be completely operational and the check will need to go deeply into the new content delivered by the paper. Hence, while the expectation for shepherded papers is that most of them will be accepted once the requested changes are implemented, this is not necessarily the case with major revisions.

Q: Is there a quota of papers that can have major revision as outcome?

A: As there is no quota for the accepted papers, there is also no quota for major revisions. However, we expect that thanks to major revisions we will eventually be able to accept 10-15% more papers, while keeping the quality bar absolutely unchanged.

Q: What shall I write in the meta-review of a paper with major revision outcome?

A: With the possibility of a major revision outcome, meta-reviews become extremely important. The meta-review should clearly and explicitly list all major changes required by the reviewers to make the paper acceptable for publication. The meta-review should act as a contract between reviewers and authors, such that when all required changes are properly made, the paper is accepted. In this respect, the listed changes should be extremely clear, precise, and implementable.

Review Process

For Authors

Q: Can I withdraw my paper?

A: Yes, papers can be withdrawn at any time using HotCRP.

Q: Is appendix or other supplemental materials allowed?

A: The main submission file must follow the page limit. Any supplemental materials including appendix and replication packages must be submitted separately under “Supplemental Material”. Program Committee members can review supplemental materials but are not obligated to review them.

For Reviewers

Q: The authors have provided a URL to supplemental material. I would like to see the material but I worry they will snoop my IP address and learn my identity. What should I do?

A: Contact the Program Co-Chairs, who will download the material on your behalf and make it available to you.

Q: If I am assigned a paper for which I feel I am not an expert, how do I seek an outside review?

A: PC members should do their own reviews and not delegate them to someone else. If you feel you need an outside opinion, please contact the Program Co-Chairs, especially since additional reviewers might have a different set of conflicts of interest.

Open Science Policy

Q: What is the FSE 2026 open science policy and how can I follow it?

A: Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. Upon submission to the research track, authors are asked to:

  • make their data available to the program committee (via upload of supplemental material or a link to an anonymous repository) and provide instructions on how to access this data in the paper; or
  • include in the paper an explanation as to why this is not possible or desirable; and
  • indicate if they intend to make their data publicly available upon acceptance. This information should be provided in the submitted papers in a section named Data Availability after the Conclusion section. For more details, see the FSE open science policy.

Q: How can I upload supplementary material via the HotCRP site and make it anonymous for double-anonymous review?

A: To conform to the double-anonymous policy, please include an anonymized URL. Code and data repositories may be exported to remove version control history, scrubbed of names in comments and metadata, and anonymously uploaded to a sharing site. Instructions are provided in the FSE open science policy.

Double-Anonymous Reviewing (DAR)

Q: Why are you using double-anonymous reviewing?

A: Studies have shown that a reviewer’s attitude toward a submission may be affected, even unconsciously, by the identity of the authors.

Q: Do you really think DAR actually works? I suspect reviewers can often guess who the authors are anyway.

A: It is rare for authorship to be guessed correctly, even by expert reviewers, as detailed in this study.

For Authors

Q: What exactly do I have to do to anonymize my paper?

A: Your job is not to make your identity undiscoverable but simply to make it possible for reviewers to evaluate your submission without having to know who you are: omit authors’ names from your title page, and when you cite your own work, refer to it in the third person. Also, be sure not to include any acknowledgements that would give away your identity. You should also avoid revealing the institutional affiliation of authors.

Q: I would like to provide supplementary material for consideration, e.g., the code of my implementation or proofs of theorems. How do I do this?

A: On the submission site, there will be an option to submit supplementary material along with your main paper. You can also share supplementary material in a private or publicly shared repository (preferred). This supplementary material should also be anonymized; it may be viewed by reviewers during the review period, so it should adhere to the same double-anonymous guidelines. See instructions on the FSE open science policy.

Q: My submission is based on code available in a public repository. How do I deal with this?

A: Making your code publicly available is not incompatible with double-anonymous reviewing. You can create an anonymized version of the repository and include a new URL that points to the anonymized version of the repository, similar to how you would include supplementary materials to adhere to the Open Science policy. Authors wanting to share GitHub repositories may want to look into using https://anonymous.4open.science/ which is an open source tool that helps you to quickly double-anonymize your repository.

Q: I am building on my own past work on the WizWoz system. Do I need to rename this system in my paper for purposes of anonymity, so as to remove the implied connection between my authorship of past work on this system and my present submission?

A: Maybe. The core question is really whether the system is one that, once identified, automatically identifies the author(s) and/or the institution. If the system is widely available, and especially if it has a substantial body of contributors and has been out for a while, then these conditions may not hold (e.g., LLVM or HotSpot), because there would be considerable doubt about authorship. By contrast, a paper on a modification to a proprietary system (e.g., Visual C++, or a research project that has not open-sourced its code) implicitly reveals the identity of the authors or their institution. If naming your system essentially reveals your identity (or institution), then anonymize it. In your submission, point out that the system name has been anonymized. If you have any doubts, please contact the Program Co-Chairs.

Q: I am submitting a paper that extends my own work that previously appeared at a workshop. Should I anonymize any reference to that prior work?

A: No. But we recommend you do not use the same title for your FSE submission, so that it is clearly distinguished from the prior paper. In general, there is rarely a good reason to anonymize a citation. When in doubt, contact the Program Co-Chairs.

Q: Am I allowed to post my (non-anonymized) paper on my web page or arXiv?

A: You can discuss and present your work that is under submission at small meetings (e.g., job talks, visits to research labs, a Dagstuhl or Shonan meeting), but you should avoid broadly advertising it in a way that reaches the reviewers even if they are not searching for it. Whenever possible, please avoid posting your manuscript on public archives (e.g., arXiv) before or during the submission period. Should you still prefer to do so, carefully avoid adding to the manuscript any reference to FSE 2026 (e.g., a footnote saying “Submitted to FSE 2026”).

Q: Can I give a talk about my work while it is under review? How do I handle social media?

A: We have developed guidelines, described here, to help everyone navigate in the same way the tension between the normal communication of scientific results, which double-anonymous reviewing should not impede, and actions that essentially force potential reviewers to learn the identity of the authors for a submission. Roughly speaking, you may (of course!) discuss work under submission, but you should not broadly advertise your work through media that is likely to reach your reviewers. We acknowledge there are grey areas and trade-offs; we cannot describe every possible scenario.

Things you may do:

  • Put your submission on your home page.
  • Discuss your work with anyone who is not on the review committees, or with people on the committees with whom you already have a conflict.
  • Present your work at professional meetings, job interviews, etc.
  • Submit work previously discussed at an informal workshop, previously posted on arXiv or a similar site, previously submitted to a conference not using double-anonymous reviewing, etc.

Things you should not do:

  • Contact members of the review committees about your work, or deliberately present your work where you expect them to be.
  • Publicize your work on major mailing lists used by the community (because potential reviewers likely read these lists).
  • Publicize your work on social media if wide public [re-]propagation is common (e.g., Twitter) and therefore likely to reach potential reviewers. For example, on Facebook, a post with a broad privacy setting (public or all friends) saying, “Whew, FSE paper in, time to sleep” is okay, but one describing the work or giving its title is not appropriate. Alternatively, a post to a group including only the colleagues at your institution is fine.

Reviewers will not be asked to recuse themselves from reviewing your paper unless they feel you have gone out of your way to advertise your authorship information to them. If you are unsure about what constitutes “going out of your way”, please contact the Program Co-Chairs.

Q: Will the fact that FSE is double-anonymous have an impact on handling conflicts of interest?

A: Double-anonymous reviewing does not change the principle that reviewers should not review papers with which they have a conflict of interest, even if they do not immediately know who the authors are. Authors declare conflicts of interest when submitting their papers using the guidelines in the Call for Papers. Papers will not be assigned to reviewers who have a conflict. Note that you should not declare gratuitous conflicts of interest and the chairs will compare the conflicts declared by the authors with those declared by the reviewers. Papers abusing the system will be desk-rejected.

For Reviewers

Q: What should I do if I learn the authors’ identity? What should I do if a prospective FSE author contacts me and asks to visit my institution?

A: If you feel that the authors’ actions are largely aimed at ensuring that potential reviewers know their identity, contact the Program Co-Chairs. Otherwise, you should not treat double-anonymous reviewing differently from other reviewing. In particular, refrain from seeking out information on the authors’ identity, but if you discover it accidentally this will not automatically disqualify you as a reviewer. Use your best judgement.

Q: How do we handle potential conflicts of interest since I cannot see the author names?

A: The conference review system will ask that you identify conflicts of interest when you get an account on the submission system.

Q: How should I avoid learning the authors’ identity, if I am using web-search in the process of performing my review?

A: You should make a good-faith effort not to find the authors’ identity during the review period, but if you inadvertently do so, this does not disqualify you from reviewing the paper. As part of the good-faith effort, please turn off Google Scholar auto-notifications. Please do not use search engines with terms like the paper’s title or the name of a new system being discussed. If you need to search for related work you believe exists, do so after completing a preliminary review of the paper.

The above guidelines are partly based on the PLDI FAQ on double-anonymous reviewing and the ICSE 2023 guidelines on double-anonymous submissions.