ReCode 2026
Sun 12 - Sat 18 April 2026 Rio de Janeiro, Brazil
co-located with ICSE 2026

The 1st International Workshop on Code Translation, Transformation, and Modernization (April 13, 2026)

Workshop Overview

The ReCode workshop aims to bring together researchers and practitioners from academia and industry to address the challenges and opportunities in translating/migrating code across languages, refactoring legacy systems, evolving software architectures, and enabling seamless modernization. We encourage contributions that bridge theory and practice, introduce reusable frameworks, and demonstrate successful applications in real-world modernization scenarios. The workshop will consist of invited talks, presentations based on research papers, and a panel discussion, where all participants are invited to share their insights and ideas to identify a research roadmap.

Why attend the ReCode workshop?

  1. We have planned speakers from Big Tech to discuss the challenges and opportunities concerning code translation, transformation, and migration at large scale. Together with academic participation, these talks and discussions can shape the future of the research area and expedite advancements in the field.

  2. Student authors of the accepted papers and workshop registrants can submit their CVs to our database for internship/full-time opportunities, which we plan to share with industry participants (please note that this does not guarantee an interview or position). Once you register for the workshop, or if your submission is accepted to appear in the workshop proceedings, please submit your CV to our database: https://forms.gle/BobDr2qAyjDWrrmt7

  3. The workshop will grant Best Paper Awards to the top accepted research papers. More information about the award will be added later.

Call for Papers

The ReCode workshop will welcome six categories of submissions: (1) research papers (6 pages), (2) position papers (up to 4 pages), (3) industry/experience reports (6 pages), (4) education and training papers (6 pages), (5) benchmarks, and (6) extended abstracts (5 pages).

All accepted papers will appear in the proceedings of the ICSE’26 workshops. We also provide a non-archival option at submission time for authors who prefer not to have their papers in the proceedings. At least one author of each accepted paper must register for the workshop and present the paper there.

We welcome research related to the workshop topics. We are interested in theoretical or empirical papers that explore one or more of the following perspectives (please reach out to the organizers if you would like to submit a paper on a relevant topic that is not listed here):

Code Translation

  1. New techniques for code translation
  2. Assessing state-of-the-art C to Rust translations
  3. Code translation validation
  4. Benchmarking code translation
  5. LLM-based code translation
  6. Code transpilation
  7. Neuro-symbolic code translation
  8. Metrics for evaluating code translation
  9. Cross-language equivalence

Code Refactoring

  1. Automated refactoring
  2. Refactoring for code translation
  3. Refactoring for application modernization
  4. Refactoring large-scale projects

Application Modernization/Migration

  1. Architecture modernization
  2. Code modernization
  3. Monolithic to microservice transformation
  4. Modernization validation

Submission Process

All submissions must be in PDF format and conform, at the time of submission, to the official “ACM Primary Article Template”, which can be obtained from the ACM Proceedings Template page. LaTeX users should use the sigconf option, as well as the review option (to produce line numbers for easy reference by the reviewers) and the anonymous option (to omit author names). To that end, the following LaTeX code can be placed at the start of the LaTeX document: \documentclass[sigconf,review,anonymous]{acmart}.
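For reference, a minimal document skeleton matching the instructions above might look like the following sketch (title and section content are placeholders; consult the ACM template page for the full set of required metadata commands):

```latex
% Minimal sketch of a ReCode submission using the ACM acmart class.
% sigconf   -> ACM conference proceedings format
% review    -> adds line numbers for easy reference by reviewers
% anonymous -> omits author names for the double-anonymous review process
\documentclass[sigconf,review,anonymous]{acmart}

\title{Your Paper Title}          % placeholder
\author{Anonymous Author(s)}      % hidden in the PDF by the anonymous option

\begin{document}
\maketitle

\section{Introduction}
% Paper content goes here.

\end{document}
```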

Submissions must strictly conform to the ACM conference proceedings formatting instructions specified above. Alterations of spacing, font size, and other changes that deviate from the instructions may result in desk rejection without further review.

ReCode employs a double-anonymous review process. Thus, no submission may reveal its authors’ identities. The authors must make every effort to honor the double-anonymous review process. Further advice, guidance, and explanation about the double-anonymous review process can be found on the ICSE conference Q&A page.

We are excited to have a series of outstanding keynotes as part of the workshop.

Keynote Speaker 1: Omer Tripp (AWS)

Title: The Road to True Software Modernization with Autonomous Agents

Abstract: Code modernization is difficult and tedious, yet extremely important. Historically, this challenge has mostly been deferred, taken up only when absolutely necessary, thus contributing to growing technical debt. As evidence for this statement, roughly 800 billion lines of COBOL code are currently running in production systems.

The rise of AI has given us hope and an opportunity to pay off this mounting tower of technical debt, and there are many possible approaches. In my keynote, I will examine the challenge of treating software modernization as a long-range task for autonomous agents. While there are versions of this problem that are more tractable, such as automated patching to run existing code on newer runtimes, there is a substantial gap between this narrow scope and the higher-stakes goal of true modernization. Full modernization requires upgrading dependencies, replacing deprecated APIs, and adapting architectures. These tasks demand deep semantic understanding, consistent reasoning across large codebases, and multi-step refactorings.

I will highlight why automated upgrades pose a uniquely difficult problem when full autonomy and full modernization meet, and why conventional signals of success (e.g., a build that succeeds, or a test suite that passes) are insufficient as observability metrics. Instead, richer dimensions of correctness and stronger notions of semantic compatibility are needed to evaluate progress. The talk will also share patterns and architectural considerations for designing agentic workflows that can carry out these transformations, along with lessons learned from my experience in this space.

Bio: Omer Tripp is a Principal Applied Scientist at AWS. Omer led the science team behind Q Code Transformation and, earlier, the science work powering Amazon CodeGuru. He is currently a technical leader in the Proactive Security organization, defining the vision and guiding the transition toward AI-powered security processes. Omer’s research lies at the intersection of programming languages and AI/ML; he has published more than 70 scientific papers and is the inventor of hundreds of patents.

Keynote Speaker 2: Satish Chandra (Meta)

Title: TBA

Abstract: TBA

Bio: TBA

Keynote Speaker 3: Celal Ziftci (Google)

Title: Beyond Code Completion: Harnessing LLMs for Complex Code Migrations in an Enterprise Setting at Google

Abstract: Large Language Models (LLMs) are rapidly transforming software development, with applications expanding beyond familiar code completion tools. This talk explores the use of LLMs to tackle the challenging and often costly task of code migration within a large enterprise. We will present Google’s experience in developing and deploying LLM-driven solutions to automate complex code changes, including updating APIs, modernizing legacy code, and ensuring consistency across vast codebases.

We will showcase how a combination of LLMs, code analysis, and developer workflows can achieve significant efficiency gains, substantially reducing migration time and enabling the completion of previously intractable projects.

Furthermore, we will discuss practical considerations for adopting LLM-based migration tools, including strategies for prompt engineering, validation and testing, and the crucial role of human oversight in ensuring successful outcomes. By sharing lessons learned and real-world case studies, this talk aims to provide valuable insights for practitioners seeking to leverage the power of LLMs to modernize and maintain large software systems.

Bio: Dr. Celal Ziftci is a seasoned software leader at Google’s New York office with over fourteen years of industry experience. He received his PhD in Computer Science from the University of California, San Diego, and his MSc from the University of Illinois Urbana-Champaign.

His research interests include software development, software testing, software analytics, program analysis, and the application of data mining and machine learning to improve software development processes.

Dr. Ziftci has contributed to advancing software development through research and innovation. He has published in leading international conferences on these topics (ICSE / ASE / FSE) and has served on their committees (FSE’2025, ICSE’2024, ICST’2023, ICST’2022).

At Google, Dr. Ziftci focuses on leveraging Generative AI, particularly Large Language Models and Agentic AI, to enhance developer productivity and software engineering efficiency. He is committed to driving the adoption of AI and automation to improve software engineering practices.