ICSE 2026
Sun 12 - Sat 18 April 2026 Rio de Janeiro, Brazil

Accepted Papers

A Causal Perspective on Measuring, Explaining and Mitigating Smells in LLM-Generated Code
Research Track
Accurate Inference of Termination Conditions
Research Track
A Comparison of Conversational Models and Humans in Answering Technical Questions: the Firefox Case
Research Track
A Comprehensive Study of Deep Learning Model Fixing Approaches
Research Track
Actionable Warning Is Not Enough: Recommending Valid Actionable Warnings with Weak Supervision
Research Track
AdapTrack: Constrained Decoding without Distorting LLM's Output Intent
Research Track
A First Look at Model Supply Chain: From the Risk Perspective
Research Track
Agentic Predicates Reasoning for Directed Fuzzing
Research Track
AgentSpec: Customizable Runtime Enforcement for Safe and Reliable LLM Agents
Research Track
Pre-print
A Large-Scale Empirical Study of Secret Key Leakage in Hugging Face Spaces
Research Track
Aligning Requirement for Large Language Model's Code Generation
Research Track
Pre-print
An Empirical Study of WebAssembly Usage in Node.js
Research Track
An Empirical Study on Static Application Security Testing (SAST) Tools for Python
Research Track
An Empirical Study on the Robustness of Android Third-Party Library Detection Tools Against Advanced Obfuscation
Research Track
An Eye for AI: Eye-Tracking the Micro-Interruptions of GenAI Code Suggestions
Research Track
An LLM Agentic Approach for Legal-Critical Software: A Case Study for Tax Prep Software
Research Track
Pre-print
Are Humans and LLMs Confused by the Same Code? An Empirical Study on Fixation-Related Potentials and LLM Perplexity
Research Track
Are “Solved Issues” in SWE-bench Really Solved Correctly? An Empirical Study
Research Track
Argus: A Multi-agent Sensitive Information Leakage Detection Framework Based on Hierarchical Reference Relationships
Research Track
A Semantic-based Optimization Approach for Repairing LLMs: Case Study on Code Generation
Research Track
AssertFlip: Reproducing Bugs via Inversion of LLM-Generated Passing Tests
Research Track
Assessing Coherency and Consistency of Code Execution Reasoning by Large Language Models
Research Track
AtPatch: Debugging Transformers via Hot-Fixing Over-Attention
Research Track
Attention Pruning: Automated Fairness Repair of Language Models via Surrogate Simulated Annealing
Research Track
Automated Network-Level Fault Injection Testing of Microservice Architectures
Research Track
Automatic Dockerfile Generation with Large Language Models
Research Track
Automating API Documentation from Crowdsourced Knowledge
Research Track
Automating Just-In-Time Python Type Annotation Updating
Research Track
Back to the Basics: Rethinking Issue-Commit Linking with LLM-Assisted Retrieval
Research Track
Bayesian Multi-Level Performance Models for Multi-Factor Variability of Configurable Software Systems
Research Track
Beyond Adoption: Examining the Evolution and Impact of Codes of Conduct on Open-Source Communities
Research Track
Beyond Final Code: A Process-Oriented Error Analysis of Software Development Agents in Real-World GitHub Scenarios
Research Track
Pre-print
BFix: Automated Safe Memory-Leak Fixing for Binary Code
Research Track
Bounded Exhaustive Random Program Generation for Testing Solidity Compilers
Research Track
Breaking Single-Tester Limits: Multi-Agent LLMs for Multi-User Feature Testing
Research Track
Breaking Strong Encapsulation: A Comprehensive Study of Java Module Abuse
Research Track
BTreeFuzz: Enhanced Feedback Mechanism for ROS Program Fuzzer Based on Behavior Tree
Research Track
Bytecode-centric Detection of Known-to-be-vulnerable Dependencies in Java Projects
Research Track
Pre-print
CCLInsight: Unveiling Insights in GPU Collective Communication Libraries via Primitive-Centric Analysis
Research Track
Characterizing Regression Bug‑Inducing Changes and Improving LLM‑Based Regression Bug Detection
Research Track
Closing the Chain: How to reduce your risk of being SolarWinds, Log4j, or XZ Utils
Research Track
Cobblestone: A Divide-and-Conquer Approach for Automating Formal Verification
Research Track
CoBrA: Context-, Branch-sensitive Static Analysis for Detecting Taint-style Vulnerabilities in PHP Web Applications
Research Track
CodeMapper: A Language-Agnostic Approach to Mapping Code Regions Across Commits
Research Track
Cognitive Biases in LLM-Assisted Software Development
Research Track
CombCT: Compiler Testing via Combinatorial Testing
Research Track
Configuration-Sensitive Linux Kernel Fuzzing
Research Track
ConfLogger: Enhance Systems' Configuration Diagnosability through Configuration Logging
Research Track
ConfuGuard: Using Metadata to Detect Active and Stealthy Package Confusion Attacks Accurately and at Scale
Research Track
Pre-print
Connected to Stay: Gender Homophily and Its Role in Open-Source Software Developer Retention
Research Track
Context-Free Grammar Inference for Complex Programming Languages in Black Box Settings
Research Track
Context-Free Property Oriented Fuzzing
Research Track
CoReX: Context-Aware Refinement-Based Slicing for Debugging Regression Failures
Research Track
Pre-print
CREME: Robustness Enhancement of Code LLMs via Layer-Aware Model Editing
Research Track
D-BUNDLR: Destructing JavaScript Bundles for Effective Static Analysis
Research Track
Debugging Performance Issues in WebAssembly Runtimes via Mutation-based Inference
Research Track
Decades of GNU Patch and Git Cherry-Pick: Can We Do Better?
Research Track
DeFT: Maintaining Determinism and Extracting Unit Tests for Autonomous Driving Planning
Research Track
Demystifying the CVE Ecosystem: Community-Perceived Impacts and Problems
Research Track
Dependency-aware Residual Risk Analysis
Research Track
Designing Abandabot: When Does Open Source Dependency Abandonment Matter?
Research Track
Diffploit: Facilitating Cross-Version Exploit Migration for Open Source Library Vulnerabilities
Research Track
DNN Modularization via Activation-Driven Training
Research Track
Do Unit Proofs Work? An Empirical Study of Compositional Bounded Model Checking for Memory Safety Verification
Research Track
Dually Hierarchical Drift Adaptation for Online Configuration Performance Learning
Research Track
Pre-print
EchoFuzz: Empowering Smart Contract Fuzzing with Large Language Models
Research Track
Efficient Build Dependency Verification Using eBPF and Incremental Analysis
Research Track
Efficient Strong Updates For Path Sensitive Data Dependence Analysis
Research Track
EMC: A Semantic-Enhanced Malware Classification Method with Robustness and Scalability
Research Track
Energy-Efficient Software Development: A Multi-dimensional Empirical Analysis of Stack Overflow
Research Track
Enforcing Control Flow Integrity on DeFi Smart Contracts
Research Track
Enhancing Issue Localization Agent with Tool-Interactive Training
Research Track
Pre-print
Enhancing LLM Code Generation with Ensembles: A Similarity-Based Selection Approach
Research Track
Enhancing Symbolic Execution with Self-Configuring Parameters
Research Track
E-Test: E'er-Improving Test Suites
Research Track
Pre-print
Evaluating and Improving Automated Repository-Level Rust Issue Resolution with LLM-based Agents
Research Track
Evaluating Generated Commit Messages with Large Language Models
Research Track
Evaluating the effectiveness of LLM-based interoperability
Research Track
Evolving Trends, Patterns, and Hidden Pitfalls: Unveiling JavaScript Feature Usage in the Wild
Research Track
Exploring and Improving Real-World Vulnerability Data Generation via Prompting Large Language Models
Research Track
FlowScope: Non-Intrusive Distributed Tracing with Method-Level Delay Estimation for Microservices Troubleshooting
Research Track
FM4MC: Improving Feature Models for Microservice Chains—Towards More Efficient Configuration and Validation
Research Track
FORGE: An LLM-driven Framework for Large-Scale Smart Contract Vulnerability Dataset Construction
Research Track
FreshBrew: A Benchmark for Evaluating AI Agents on Java Code Migration
Research Track
From Code Changes to Quality Gains: An Empirical Study in Python ML Systems with PyQu
Research Track
Pre-print
From Code to Correctness: Closing the Last Mile of Code Generation with Hierarchical Debugging
Research Track
From Seed to Scope: Reasoning to Identify Change Impact Sets
Research Track
Fuzzing Java Optimizing Compilers with Complex Inter-Class Structures Guided by Heterogeneous Program Graphs
Research Track
Fuzzing JavaScript Engines by Fusing JavaScript and WebAssembly
Research Track
"Game Changer" or "Overenthusiastic Drunk Acquaintance"? Generative AI Use by Blind and Low Vision Software Professionals in the Workplace
Research Track
Generating Energy-Efficient Code via Large-Language Models - Where are we now?
Research Track
Pre-print Media Attached
Generator Solving for Symbolic Execution
Research Track
GPTrace: Effective Crash Deduplication Using LLM Embeddings
Research Track
Hallucinating Certificates: Differential Testing of TLS Certificate Validation Using Generative Language Models
Research Track
HarnessLLM: Rust Verification Harness Generation with Large Language Models
Research Track
Hey, ChatGPT, Look at My Work: Using Conversational AI in Requirements Engineering Education
Research Track
HistoryFinder: Advancing Method-Level Source Code History Generation with Accurate Oracles and Enhanced Algorithm
Research Track
HoarePrompt: Structural Reasoning About Program Correctness in Natural Language
Research Track
How Does Core Contributor Disengagement Impact Open Source Project Activity? A Quasi-Experiment
Research Track
How Good are Input Grammar Miners? An Empirical Study
Research Track
“I need to learn better searching tactics for privacy policy laws.” Investigating Software Developers’ Behavior When Using Sources on Privacy Issues
Research Track
InferLog: Accelerating LLM Inference for Online Log Parsing via ICL-oriented Prefix Caching
Research Track
IntelliRadar: A Comprehensive Platform to Pinpoint Malicious Package Information from Cyber Intelligence
Research Track
Pre-print
INTENTFIX: Automated Logic Vulnerability Repair via LLM-Driven Intent Modeling
Research Track
Is Call Graph Pruning Really Effective? An Empirical Re-evaluation
Research Track
Is My RPC Response Reliable? Detecting RPC Bugs in Blockchain Client under Context
Research Track
Issue2Test: Generating Reproducing Test Cases from Issue Reports
Research Track
JEDI: Java Evaluation of Declarative and Imperative Queries - Benchmarking the Java Stream API
Research Track
Knowledge-Augmented Log Anomaly Detection with Large Language Models
Research Track
Large Language Model-Aided Partial Program Dependence Analysis
Research Track
Learning From Software Failures: A Case Study at a National Space Research Center
Research Track
Let the Trial Begin: A Mock-Court Approach to Vulnerability Detection using LLM-Based Agents
Research Track
Light over Heavy: Automated Performance Requirements Quantification with Linguistic Inducement
Research Track
Pre-print
LLM-based Agents for Automated Bug Fixing: How Far Are We?
Research Track
LLM-based API Argument Completion with Knowledge-Augmented Prompts
Research Track
LLM-based Vulnerability Discovery through the Lens of Code Metrics
Research Track
LLM Test Generation via Iterative Hybrid Program Analysis
Research Track
LoopSCC: Summarizing Complex Multi-branch Nested Loops via Periodic Oscillation Interval
Research Track
LSPRAG: LSP-Guided RAG for Language-Agnostic Real-Time Unit Test Generation
Research Track
MaCTG: Multi-Agent Collaborative Thought Graph for Automatic Programming
Research Track
"Making Our Life Less Monotonous" or "Just Tick Things Off": An Exploratory Multi-Method Study of Toil
Research Track
"Maybe We Need Some More Examples:" Individual and Team Drivers of Developer GenAI Tool Use
Research Track
MazeBreaker: Multi-Agent Reinforcement Learning for Dynamic Jailbreaking of LLM Security Defenses
Research Track
Measuring the Influence of Incorrect Code on Test Generation
Research Track
Memory-Efficient Large Language Models for Program Repair with Semantic-Guided Patch Generation
Research Track
Metronome: Differentiated Delay Scheduling for Serverless Functions
Research Track
MINES: Explainable Anomaly Detection through Web API Invariant Inference
Research Track
Minimizing Breaking Changes and Redundancy in Mitigating Technical Lag for Java Projects
Research Track
MioHint: LLM-Assisted Request Mutation for Whitebox REST API Testing
Research Track
ModularEvo: Evolving Multi-Task Models via Neural Network Modularization and Composition
Research Track
More with Less: An Empirical Study of Turn-Control Strategies for Efficient Coding Agents
Research Track
NB2P: Generating Data Science Pipelines from Computational Notebooks
Research Track
No Shot in the Dark: Efficient Context-Free Language Reachability via Context-Aware Tabulation
Research Track
One Signature, Multiple Payments: Demystifying and Detecting Signature Replay Vulnerabilities in Smart Contracts
Research Track
One Size Does Fit All: Kernel-Assisted Fine-Grained Debloating and Layout Randomization for Shared Libraries
Research Track
On Interaction Effects in Greybox Fuzzing
Research Track
Pre-print
Online and Interactive Bayesian Inference Debugging
Research Track
On the Robustness of Fairness Practices: A Causal Framework for Systematic Evaluation
Research Track
Optimization-Aware Test Generation for Deep Learning Compilers
Research Track
Order Matters! An Empirical Study on Large Language Models' Input Order Bias in Software Fault Localization
Research Track
Parse this! Summoning Context-Sensitive Inputs with Goblin
Research Track
Perspective-Taking in Software Engineering: A Study on its Relationship to Team Performance
Research Track
Portable Power Modeling with Transfer Learning on JVM-Based Applications
Research Track
Precise Static Identification of Ethereum Storage Variables
Research Track
PredicateFix: Repairing Static Analysis Alerts with Bridging Predicates
Research Track
Pre-print
Predicting Failures in Smart Human-Centric EcoSystems
Research Track
PreServe: Intelligent Management for LMaaS Systems via Hierarchical Prediction
Research Track
Project-Level Resource Leak Detection through Agent-based Ownership Analysis and Repair Pattern Verification
Research Track
PromiseTune: Unveiling Causally Promising and Explainable Configuration Tuning
Research Track
Pre-print
ProxyWar: Dynamic Assessment of LLM Code Generation in Game Arenas
Research Track
PTV: Scalable Version Detection of Web Libraries and its Security Application
Research Track
PyXray: Practical Cross-Language Call Graph Construction through Object Layout Analysis
Research Track
Pre-print
Quantifying Memorization Advantage in Code LLMs
Research Track
RealityCraft: Automated Synthesis of Extended Reality Device Interaction Scripts from Natural Language Instructions
Research Track
RefAgent: A Multi-agent LLM-based Framework for Automatic Software Refactoring
Research Track
Reflections on the Reproducibility of Commercial LLM Performance in Empirical Software Engineering Studies
Research Track
Remediating Superfluous Re-Rendering in React Applications
Research Track
Repairing LLM Executions for Secure Automatic Programming
Research Track
Repair Ingredients Are All You Need: Improving Large Language Model-Based Program Repair via Repair Ingredients Search
Research Track
RepoScope: Leveraging Call Chain-Aware Multi-View Context for Repository-Level Code Generation
Research Track
Rethinking the Capability of Fine-Tuned Language Models for Automated Vulnerability Repair
Research Track
Rethinking the Evaluation of Secure Code Generation
Research Track
Retrieval-Augmented Test Generation: How Far Are We?
Research Track
Revisiting "Revisiting Neuron Coverage for DNN Testing: A Layer-Wise and Distribution-Aware Criterion": A Critical Review and Implications on DNN Coverage Testing
Research Track
RISE: Rule-Driven SQL Dialect Translation via Query Reduction
Research Track
RulePilot: An LLM-Powered Agent for Security Rule Generation
Research Track
Rusted Types: Static Detection of Rust Type Confusion Bugs
Research Track
RusyFuzz: Unhandled Exception Guided Fuzzing for Rust OS Kernel
Research Track
SAFE: Harnessing LLM for Scenario-Driven ADS Testing from Multimodal Crash Data
Research Track
SAINT: Service-level Integration Test Generation with Program Analysis and LLM-based Agents
Research Track
Same Same But Different: Preventing Refactoring Attacks on Software Plagiarism Detection
Research Track
Sapling: Quantifying and Measuring the Maturity of the RISC-V Software Ecosystem
Research Track
Scaling Security Testing by Addressing the Reachability Gap
Research Track
Scalpel: Automotive Deep Learning Framework Testing via Assembling Model Components
Research Track
Scrub It Out! Erasing Sensitive Memorization in Code Language Models via Machine Unlearning
Research Track
SEAlign: Alignment Training for Software Engineering Agent
Research Track
SecureReviewer: Enhancing Large Language Models for Secure Code Review through Secure-Aware Fine-Tuning
Research Track
SEER: Enhancing Chain-of-Thought Code Generation through Self-Exploring Deep Reasoning
Research Track
Semantic-Enhanced Automatic Refinement of Architecture Recovery Results Using LLMs
Research Track
SeRe: A Security-Related Code Review Dataset Aligned with Real-World Review Activities
Research Track
Six Million (Suspected) Fake Stars on GitHub: A Growing Spiral of Popularity Contests, Spams, and Malware
Research Track
Small Changes, Big Trouble: Demystifying and Parsing License Variants for Incompatibility Detection in the PyPI Ecosystem
Research Track
Pre-print
SmartC2Rust: Iterative, Feedback-Driven C-to-Rust Translation via Large Language Models for Safety and Equivalence
Research Track
Smoke and Mirrors: Jailbreaking LLM-based Code Generation via Implicit Malicious Prompts
Research Track
Pre-print
SpecGuru: Hierarchical LLM-Driven API Points-to Specification Generation with Self-Validation
Research Track
SSAR: A Novel Software Architecture Recovery Approach Enhancing Accuracy and Scalability
Research Track
Staying or Leaving? How Job Satisfaction, Embeddedness and Antecedents Predict Turnover Intentions of Software Professionals
Research Track
STEM-EF: A Model for Assessing Scrum Team Effectiveness Based on Emotional Factors
Research Track
StorFuzz: Using Data Diversity to Overcome Fuzzing Plateaus
Research Track
SustainDiffusion: Optimising the Social and Environmental Sustainability of Stable Diffusion Models
Research Track
Synthesizing Hardware-Specific Instructions for Efficient Code Generation of Simulink
Research Track
Synthetic Repo-level Bug Dataset for Training Automated Program Repair Models
Research Track
TaCoS: Generated Context Summaries for Task Resumption
Research Track
TaintP2X: Detecting Taint-Style Prompt-to-Anything Injection Vulnerabilities in LLM-Integrated Applications
Research Track
Temporal Specification Oriented Fuzzing for Trigger-Action-Programming Smart Home Integrations
Research Track
Test Flimsiness: Characterizing Flakiness Induced by Mutation to the Code Under Test
Research Track
Testora: Using Natural Language Intent to Detect Behavioral Regressions
Research Track
The Cost vs the Benefit of Adding an Extra Code Reviewer to Mitigate Developer Turnover through Reviewer Recommenders
Research Track
The Hidden Cost of Readability: How Code Formatting Silently Consumes Your LLM Budget
Research Track
The Software Infrastructure Attitude Scale (SIAS): A Questionnaire Instrument for Measuring Professionals’ Attitudes Toward Technical and Sociotechnical Infrastructure
Research Track
Think Like Human Developers: Harnessing Community Knowledge for Structured Code Reasoning
Research Track
Think Outside the Box: Automating Inter-App Functionality Testing via Memory Implanting and Reasoning
Research Track
Top General Performance = Top Domain Performance? DomainCodeBench: A Multi-domain Code Generation Benchmark
Research Track
Towards Global Matches for Third-Party Library Detection in Android
Research Track
Towards Scalable and Interpretable Mobile App Risk Analysis via Large Language Models
Research Track
Towards Supporting Open Source Library Maintainers with Community-Based Analytics
Research Track
Towards Understanding and Characterizing Vulnerabilities in Intelligent Connected Vehicles through Real-World Exploits
Research Track
TraceCoder: A Trace-Driven Multi-Agent Framework for Automated Debugging of LLM-Generated Code
Research Track
Training on Clean Data but Getting Backdoored Models! A Poisoning Attack on Code Encoders
Research Track
TypeCare: Boosting Python Type Inference Models via Context-Aware Re-Ranking and Augmentation
Research Track
Understanding DevOps Security of Google Workspace Apps
Research Track
UniCoR: Modality Collaboration for Robust Cross-Language Hybrid Code Retrieval
Research Track
Unified Software Engineering agent as AI Software Engineer
Research Track
Unlocking LLM Repair Capabilities Through Cross-Language Translation and Multi-Agent Refinement
Research Track
Unlocking the Silent Needs: Business-Logic-Driven Iterative Requirements Auto-completion
Research Track
Using a Sledgehammer to Crack a Nut? Revisiting Automated Compiler Fault Isolation
Research Track
Variability-Aware Fuzzing
Research Track
VDBFuzz: Understanding and Detecting Crash Bugs in Vector Database Management Systems
Research Track
Verification of Multi-Model Stochastic Systems
Research Track
Views on Internal and External Validity in Empirical Software Engineering: 10 Years Later and Beyond
Research Track
Well Begun is Half Done: Location-Aware and Trace-Guided Iterative Automated Vulnerability Repair
Research Track
What Makes Code Generation Ethically Sourced?
Research Track
What’s in a Software Engineering Job Posting?
Research Track
What to Retrieve for Effective Retrieval-Augmented Code Generation? An Empirical Study and Beyond
Research Track
When AI Takes the Wheel: Security Analysis of Framework-Constrained Program Generation
Research Track
When Prompts Go Wrong: Evaluating Code Model Robustness to Ambiguous, Contradictory, and Incomplete Task Descriptions
Research Track
WhisperCatcher: Demystifying Unauthorized and Encrypted Private Data Transmission in Android Applications
Research Track
Why Attention Fails: A Taxonomy of Faults in Attention-Based Neural Networks
Research Track
WhyFlow: Interrogative Debugger for Sensemaking Taint Analysis
Research Track
Write Your Own Code Checker: An Automated Test-Driven Checker Development Approach with LLMs
Research Track
XRFix: Exploring Performance Bug Repair of Extended Reality Applications with Large Language Models
Research Track

Call for Papers

The International Conference on Software Engineering (ICSE) is the premier forum for presenting and discussing the most recent and significant technical research contributions in the field of Software Engineering. In the research track, we invite high-quality submissions of technical research papers describing original and unpublished results of software engineering research.

ICSE 2026 will follow the dual-deadline structure introduced in 2024: submissions will occur in two cycles. Please refer to the Dual Submission Cycles section below for details.


UPDATE 6/30/2025: Please note that for the second submission cycle, the abstract deadline is no longer mandatory. The submission deadline for complete submissions remains July 18 (unchanged), and no extensions will be given.

UPDATE 6/29/2025: Following the success of the previous year, ICSE will again have a Shadow Program Committee for the second cycle to train the next generation of reviewers through deliberate practice with specific guidance and feedback. This program is open to PhD students, post-docs, new faculty members and industry practitioners working in software engineering research. For more details, see the Call for ICSE 2026 Shadow PC Participation.

UPDATE 6/29/2025: Understanding that the change to ACM Open could present financial challenges, ACM has approved a temporary subsidy for 2026 to ease the transition and allow more time for institutions to join ACM Open. The subsidy will offer: $250 APC for ACM/SIG members and $350 APC for non-members. This represents a 65% discount, funded directly by ACM. Authors are encouraged to help advocate for their institutions to join ACM Open during this transition period. This temporary subsidized pricing applies to all ACM conferences scheduled for 2026. For articles eligible for 50% geographic discounts, the discount will be applied to the applicable subsidized rate.


IMPORTANT #1: Starting 2026, all articles published by ACM will be made Open Access. This is greatly beneficial to the advancement of computer science and leads to increased usage and citation of research.

  • Most authors will be covered by ACM OPEN agreements by that point and will not have to pay Article Processing Charges (APC). Check if your institution participates in ACM OPEN.

  • Authors not covered by ACM OPEN agreements may have to pay APC; however, ACM is offering several automated and discretionary APC Waivers and Discounts.

IMPORTANT #2: Submissions must follow the latest “IEEE Submission and Peer Review Policy” and “ACM Policy on Authorship” (with associated FAQ), which includes a policy regarding the use of generative AI tools and technologies, such as ChatGPT.

Research Areas

ICSE welcomes submissions addressing topics across the full spectrum of Software Engineering, inclusive of quantitative, qualitative, and mixed-methods research. Topics of interest are grouped into the following nine research areas. Please note that these topics are by no means exhaustive.

Each submission must indicate one of these nine areas as its chosen area; authors may optionally indicate an additional area. A paper may be moved from the chosen area(s) to another area at the discretion of the program chairs. Program chairs will ultimately assign each paper to an area chair, considering the authors’ selection, the paper’s content, and other factors such as (if applicable) possible conflicts of interest.

AI for Software Engineering

  • AI-enabled recommender systems for automated SE (e.g., code generation, program repair, AIOps, software composition analysis, etc.)

  • Human-centered AI for SE (e.g., how software engineers can synergistically work with AI agents)

  • Trustworthy AI for SE (e.g., how to provide guarantees, characterize limits, and prevent misuse of AI for SE)

  • Sustainable AI for SE (e.g., how to reduce energy footprint for greener AI for SE)

  • Collaborative AI for SE (e.g., how AI agents collaborate for automating SE)

  • Automating SE tasks with LLM and other foundation models (e.g., large vision model)

  • Efficacy measurement beyond traditional metrics (e.g., accuracy, BLEU, etc.)

  • Prompt engineering for SE (e.g., novel prompt design)

  • AI-assisted software design and model driven engineering (e.g., specification mining, program synthesis, software architectural design)

Analytics

  • Mining software repositories, including version control systems, issue tracking systems, software ecosystems, configurations, app stores, communication platforms, and novel software engineering data sources, to generate insights through various research methods

  • Software visualization

  • Data-driven user experience understanding and improvement

  • Data-driven decision making in software engineering

  • Software metrics (and measurements)

Architecture and Design

  • Architecture and design measurement and assessment

  • Software design methodologies, principles, and strategies

  • Theory building for/of software design

  • Architecture quality attributes, such as security, privacy, performance, reliability

  • Modularity and reusability

  • Design and architecture modeling and analysis

  • Architecture recovery

  • Dependency and complexity analysis

  • Distributed architectures, such as microservice, SOA, cloud computing

  • Patterns and anti-patterns

  • Technical debt in design and architecture

  • Architecture refactoring

  • Adaptive architectures

  • Architecture knowledge management

Dependability and Security

  • Formal methods and model checking (excluding solutions focusing solely on hardware)

  • Reliability, availability, and safety

  • Resilience and antifragility

  • Confidentiality, integrity, privacy, and fairness

  • Performance

  • Design for dependability and security

  • Vulnerability detection to enhance software security

  • Dependability and security for embedded and cyber-physical systems

Evolution

  • Evolution and maintenance

  • API design and evolution

  • Release engineering and DevOps

  • Software reuse

  • Refactoring and program differencing

  • Program comprehension

  • Reverse engineering

  • Environments and software development tools

  • Traceability to understand evolution

Human and Social Aspects

  • Focusing on individuals (from program comprehension and workplace stress to job satisfaction and career progression)

  • Focusing on teams (e.g., collocated, distributed, global, virtual; communication and collaboration within a team), communities (e.g., open source, communities of practice) and companies (organization, economics)

  • Focusing on society (e.g., sustainability; diversity and inclusion)

  • Focusing on programming languages, environments, and tools supporting individuals, teams, communities, and companies.

  • Focusing on software development processes

Requirements and Modeling

  • Requirements engineering (incl. non-functional requirements)

  • Theoretical requirement foundations

  • Requirements and architecture

  • Feedback, user and requirements management

  • Requirements traceability and dependencies

  • Modeling and model-driven engineering

  • Variability and product lines

  • Systems and software traceability

  • Modeling languages, techniques, and tools

  • Empirical studies on the application of model-based engineering

  • Model-based monitoring and analysis

Software Engineering for AI

  • SE for AI models

  • SE for systems with AI components

  • SE for AI code, libraries, and datasets

  • Engineering autonomic systems and self-healing systems

  • Automated repair of AI models

  • Testing and verification of AI-based systems

  • Validation and user-based evaluation of AI-based systems

  • Requirements engineering for AI-based systems

Testing and Analysis

  • Software testing

  • Automated test generation techniques such as fuzzing, search-based approaches, and symbolic execution

  • Testing and analysis of non-functional properties

  • GUI testing

  • Mobile application testing

  • Program analysis

  • Program synthesis (e.g., constraint-based techniques)

  • Program repair

  • Debugging and fault localization

  • Runtime analysis and/or error recovery

Scope

Since authors will choose an area for their submission, the scope of each area becomes important. Some submissions may relate to multiple areas; in such cases, the authors should choose the area to which their paper brings the most new insights. Authors may also indicate an alternate area for each paper.

Similarly, for certain papers, authors may wonder whether the paper belongs to any area at all or is simply out of scope. In such cases, we recommend that authors judge whether their paper brings new insights for software engineering. As an example, a formal methods paper focused on hardware verification may be deemed out of scope for ICSE. In general, papers that only peripherally concern software engineering and do not offer new insights from a software engineering perspective are less relevant to ICSE. Our goal, however, is to be descriptive rather than prescriptive, enabling authors to make their own decisions about relevance.

Dual Submission Cycles

ICSE 2026 will have two submission cycles as follows:

First submission cycle

  • (Mandatory) Abstract: March 7, 2025

  • Submission: March 14, 2025

  • Author response period (3 days): May 27-29, 2025

  • Notification: June 20, 2025

  • Revision due: July 18, 2025

  • Camera-ready (of directly accepted papers): TBA

  • Final decision (of revised papers): October 17, 2025

  • Camera-ready (of accepted revised papers): TBA

Second submission cycle

  • (Mandatory) Abstract: July 11, 2025

  • Submission: July 18, 2025

  • Author response period (3 days): September 23-25, 2025

  • Notification: October 17, 2025

  • Revision due: November 14, 2025

  • Camera-ready (of directly accepted papers): TBA

  • Final decision (of revised papers): December 19, 2025

  • Camera-ready (of accepted revised papers): TBA

All dates are 23:59:59 AoE (UTC-12h).

Review Criteria

Each paper submitted to the Research Track will be evaluated based on the following criteria:

i) Novelty: The novelty and innovativeness of contributed solutions, problem formulations, methodologies, theories, and/or evaluations, i.e., the extent to which the paper is sufficiently original with respect to the state-of-the-art.

ii) Rigor: The soundness, clarity, and depth of a technical or theoretical contribution, and the level of thoroughness and completeness of an evaluation.

iii) Relevance: The significance and/or potential impact of the research on the field of software engineering.

iv) Verifiability and Transparency: The extent to which the paper includes sufficient information to understand how an innovation works; to understand how data was obtained, analyzed, and interpreted; and how the paper supports independent verification or replication of the paper’s claimed contributions. Any artifacts attached to or linked from the paper will be checked by one reviewer.

v) Presentation: The clarity of the exposition in the paper.

Reviewers will carefully consider all of the above criteria during the review process, and authors should take great care in clearly addressing them all. The paper should clearly explain and justify the claimed contributions. Each paper will be handled by an area chair who will ensure reviewing consistency among papers submitted within that area.

The outcome of each paper will be one of the following: Accept, Revision, or Reject. We elaborate on the Revision outcome below.

Revisions

Submitted papers can go through revision in response to specific revision requests made by the reviewers. Authors of papers receiving a Revision decision are expected to submit the revised paper, as well as a copy of the revised paper with changes marked in a different color, for example using latexdiff. The authors also need to submit an “Author Response” document capturing their response to each reviewer comment and how it was addressed in the revision. This is similar to the “Summary of Changes and Response” document typically submitted by authors for a journal paper’s major revision. Authors may use the revision opportunity to revise and improve the paper, but should not use it to submit a substantially different paper. The reviewers will check the revised paper against the original paper and the requested changes. Revised papers will be examined by the same set of reviewers, and an unsatisfactory revised paper will be rejected. Authors are given approximately four weeks to submit the revised paper, and an additional page of text to accommodate the required changes specified in the reviews.
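As one possible way to produce the change-marked copy, the latexdiff tool shipped with standard TeX distributions can generate a version of the revision with additions and deletions highlighted (the file names below are placeholders):

```shell
# Mark up the revision against the originally submitted source
latexdiff original.tex revised.tex > revised-marked.tex

# Compile the marked-up copy into the PDF submitted alongside the revision
pdflatex revised-marked.tex
```

For multi-file projects, latexdiff-vc or the --flatten option can be used to diff a document split across several .tex files.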

Re-submissions of Rejected Papers

Authors of papers that receive a Reject decision in the first submission cycle are strongly discouraged from re-submitting them to the second submission cycle. However, in exceptional cases where the authors feel that the reviewers misunderstood their paper, they may re-submit to the second submission cycle with a “Clarifications and Summary of Improvements” document stating how they have changed the paper. They should also include the past reviews as part of this document, for completeness. These papers will be treated as new submissions, which may or may not get the same set of reviewers, at the discretion of the PC chairs. Authors who try to bypass this guideline (e.g., by changing the paper title without significantly changing the paper content, or by making only small changes to the content) will have their papers desk-rejected by the PC chairs without further consideration.

Submission Process

All submissions must be in PDF format and conform, at time of submission, to the official “ACM Primary Article Template”, which can be obtained from the ACM Proceedings Template page. LaTeX users should use the sigconf option, as well as the review (to produce line numbers for easy reference by the reviewers) and anonymous (omitting author names) options. To that end, the following LaTeX code can be placed at the start of the LaTeX document:

\documentclass[sigconf,review,anonymous]{acmart}
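Put together, a minimal submission skeleton using these options might look as follows (the title, abstract, and bibliography file are placeholders, not part of the official template):

```latex
\documentclass[sigconf,review,anonymous]{acmart}

\begin{document}

\title{Paper Title} % author names omitted under double-anonymous review

\begin{abstract}
Abstract text.
\end{abstract}

\maketitle % in acmart, the abstract precedes \maketitle

% Main text: at most 10 pages, inclusive of figures, tables, and appendices.

\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % references only; up to 2 additional pages

\end{document}
```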

  • Submissions must not exceed 10 pages for the main text, inclusive of all figures, tables, appendices, etc. Two more pages containing only references are permitted. Accepted papers will be allowed one extra page for the main text of the camera-ready version.

  • Submissions must strictly conform to the ACM conference proceedings formatting instructions specified above. Alterations of spacing, font size, and other changes that deviate from the instructions may result in desk rejection without further review.

  • By submitting to the ICSE Technical Track, authors acknowledge that they are aware of and agree to be bound by the ACM Policy and Procedures on Plagiarism and the IEEE Plagiarism FAQ. In particular, papers submitted to ICSE 2026 must not have been published elsewhere and must not be under review or submitted for review elsewhere whilst under consideration for ICSE 2026. Contravention of this concurrent submission policy will be deemed a serious breach of scientific ethics, and appropriate action will be taken in all such cases. To check for double submission and plagiarism issues, the chairs reserve the right to (1) share the list of submissions with the PC Chairs of other conferences with overlapping review periods and (2) use external plagiarism detection software, under contract to the ACM or IEEE, to detect violations of these policies.

  • If the research involves human participants/subjects, the authors must adhere to the ACM Publications Policy on Research Involving Human Participants and Subjects. Upon submitting, authors will declare their compliance with such a policy. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.

  • Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM and IEEE collect ORCID IDs from all published authors. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

  • The ICSE 2026 Research Track will employ a double-anonymous review process. Thus, no submission may reveal its authors’ identities. The authors must make every effort to honor the double-anonymous review process. In particular:

    • Authors’ names must be omitted from the submission.

    • All references to the authors’ own prior work should be in the third person.

    • While authors have the right to upload preprints on arXiv or similar sites, they must avoid specifying that the manuscript was submitted to ICSE 2026.

    • All communication with the program committee must go through the program committee chairs. Do not contact individual program committee members regarding your submission.

  • Further advice, guidance, and explanation about the double-anonymous review process can be found on the Q&A page.

  • By submitting to the ICSE Research Track, authors acknowledge that they conform to the authorship policy of the IEEE, submission policy of the IEEE, and the authorship policy of the ACM (and associated FAQ). This includes following these points related to the use of Generative AI:

    • “Generative AI tools and technologies, such as ChatGPT, may not be listed as authors of an ACM published Work. The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: ChatGPT was utilized to generate sections of this Work, including text, tables, graphs, code, data, citations, etc. If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the acknowledgements section of the Work.” - ACM

    • “The use of artificial intelligence (AI)–generated text in an article shall be disclosed in the acknowledgements section of any paper submitted to an IEEE Conference or Periodical. The sections of the paper that use AI-generated text shall have a citation to the AI system used to generate the text.” - IEEE

    • “If you are using generative AI software tools to edit and improve the quality of your existing text in much the same way you would use a typing assistant like Grammarly to improve spelling, grammar, punctuation, clarity, engagement or to use a basic word processing system to correct spelling or grammar, it is not necessary to disclose such usage of these tools in your Work.” - ACM

Submissions to the Technical Track that meet the above requirements can be made via the Research Track submission site by the submission deadline. Any submission that does not comply with these requirements may be desk rejected without further review.

Submission site: https://icse2026.hotcrp.com/

We encourage authors to enter their paper information early (the PDF can be submitted later) so that conflicts can be properly entered for double-anonymous reviewing. It is the sole responsibility of the authors to ensure that the formatting guidelines, double-anonymous guidelines, and any other submission guidelines are met at the time of paper submission.

IEEE Transactions on Software Engineering, ACM Transactions on Software Engineering and Methodology, and ICSE 2026 have received approval from the ICSE Steering Committee to participate in the Sustainable Community Review Effort (SCRE) program, which aims to reduce community effort in reviewing journal extensions of conference papers and to allow authors to get faster and more consistent feedback. More information is available at: http://tinyurl.com/icse25-scre

Open Science Policy

The research track of ICSE 2026 is governed by the ICSE 2026 Open Science policies. The guiding principle is that all research results should be accessible to the public and, if possible, empirical studies should be reproducible. In particular, we actively support the adoption of open artifacts and open source principles. We encourage all contributing authors to disclose (anonymized and curated) data/artifacts to increase reproducibility and replicability. Note that sharing research artifacts is not mandatory for submission or acceptance. However, sharing is expected to be the default, and non-sharing needs to be justified. We recognize that reproducibility or replicability is not a goal in qualitative research and that, similar to industrial studies, qualitative studies often face challenges in sharing research data. For guidelines on how to report qualitative research to ensure the assessment of the reliability and credibility of research results, see this curated Q&A page.

Upon submission to the research track, authors are asked

  • to make their artifact available to the program committee (via upload of supplemental material or a link to an anonymous repository) – and provide instructions on how to access this data in the paper; or

  • to include in the submission an explanation as to why this is not possible or desirable; and

  • to indicate in the submission why they do not intend to make their data or study materials publicly available upon acceptance, if that is the case. The default understanding is that the data and/or other artifacts will be publicly available upon acceptance of a paper.

Withdrawing a Paper

Authors can withdraw their paper at any moment until the final decision has been made, through the paper submission system. Resubmitting the paper to another venue before the final decision has been made, without first withdrawing from ICSE 2026, is considered a violation of the concurrent submission policy and will lead to automatic rejection from ICSE 2026 as well as from any other venue adhering to this policy. Such violations may also be reported to appropriate organizations, e.g., ACM and IEEE.

Conference Attendance Expectation

If a submission is accepted, at least one author of the paper is required to register for ICSE 2026 and present the paper. We are assuming the conference will be in person; if it is virtual or hybrid, virtual presentations may be possible. These matters will be discussed with the authors closer to the date of the conference.