The International Conference on Compiler Construction (CC) is interested in work on processing programs in the most general sense: analyzing, transforming or executing input that describes how a system operates, including traditional compiler construction as a special case.
CC is an ACM SIGPLAN conference, and implements guidelines and procedures recommended by SIGPLAN.
For more information, please consult the Call for Papers.
Sat 1 Mar (displayed time zone: Pacific Time, US & Canada)

09:00 - 10:00 | Keynote | Main Conference at Acacia A | Chair(s): Jens Palsberg (University of California, Los Angeles (UCLA))
  09:00 (60m, Keynote) Compiler Optimization: Challenges and Opportunities. Saday Sadayappan (University of Utah, USA)

10:00 - 10:30 | Break
  10:00 (30m, Break) Morning Break I

10:30 - 12:00 | Compilers and Optimization | Main Conference at Acacia A | Chair(s): Jens Palsberg (University of California, Los Angeles (UCLA))
  10:30 (30m, Talk) pyATF: Constraint-Based Auto-Tuning in Python. Richard Schulze, Sergei Gorlatch, Ari Rasch (University of Muenster). Link to publication | DOI | Pre-print | Media Attached
  11:00 (30m, Talk) Overloading the Dot
  11:30 (30m, Talk) Fusion of Operators of Computational Graphs via Greedy Clustering: The XNNC Experience. Michael Canesche, Vanderson Martins do Rosario (Cadence Design Systems), Edson Borin (State University of Campinas), Fernando Magno Quintão Pereira (Federal University of Minas Gerais)

12:00 - 13:00 | Lunch
  12:00 (60m) Lunch

13:00 - 14:00 | Main Conference
  13:00 (30m, Talk) Enhancing Program Analysis with Deterministic Distinguishable Calling Context. Sungkeun Kim, Jaewoo Lee, Khanh Nguyen, Chia-Che Tsai (Texas A&M University), Eun Jung Kim, Abdullah Muzahid (Texas A&M University)
  13:30 (30m, Talk) Scalable Data-Flow Modeling and Validation of Distributed-Memory Algorithms

14:00 - 15:30 | Main Conference
  14:00 (30m, Talk) DFA-Net: A Compiler-Specific Neural Architecture for Robust Generalization in Data Flow Analyses. Alexander Brauckmann (University of Edinburgh), Anderson Faustino da Silva (State University of Maringá), Jeronimo Castrillon (TU Dresden, Germany), Hugh Leather (Meta AI Research)
  14:30 (30m, Talk) Finding Missed Code Size Optimizations in Compilers using Large Language Models
  15:00 (30m, Talk) LLM Compiler: Foundation Language Models for Compiler Optimization. Chris Cummins (Meta), Volker Seeker (Meta AI Research), Dejan Grubisic (Meta), Baptiste Rozière (Meta), Jonas Gehring (Meta), Gabriel Synnaeve (Meta), Hugh Leather (Meta AI Research)

15:30 - 16:00

16:00 - 18:00 | Main Conference
  16:00 (30m, Talk) A Comparative Study on the Accuracy and the Speed of Static and Dynamic Program Classifiers. Anderson Faustino da Silva (State University of Maringá), Jeronimo Castrillon (TU Dresden, Germany), Fernando Magno Quintão Pereira (Federal University of Minas Gerais)
  16:30 (30m, Talk) Biotite: A High-Performance Static Binary Translator using Source-Level Information. Changbin Chen, Shu Sugita, Yotaro Nada, Hidetsugu Irie, Shuichi Sakai, Ryota Shioya (The University of Tokyo)
  17:00 (30m, Talk) Post-Link Outlining for Code Size Reduction. Shaobai Yuan, Jihong He, Yihui Xie, Feng Wang, Jie Zhao (Hunan University)
  17:30 (30m, Talk) A Deep Technical Review of nZDC Fault Tolerance. Minli Liao (University of Cambridge), Sam Ainsworth (University of Edinburgh), Lev Mukhanov (Queen Mary University London), Timothy M. Jones (University of Cambridge). Pre-print | Media Attached

Sun 2 Mar (displayed time zone: Pacific Time, US & Canada)

09:00 - 10:00 | Machine Learning and PL II | Main Conference at Bristlecone | Chair(s): Fernando Magno Quintão Pereira (Federal University of Minas Gerais)
  09:00 (30m, Talk) Data-efficient Performance Modeling via Pre-training
  09:30 (30m, Talk) MimIrADe: Automatic Differentiation in a Higher-Order Sea-of-Nodes IR. Marcel Ullrich (Saarland University), Sebastian Hack (Saarland University, Saarland Informatics Campus), Roland Leißa (University of Mannheim, School of Business Informatics and Mathematics). Link to publication

10:00 - 10:30

10:30 - 12:00 | Binary Analysis and Hardware II | Main Conference at Bristlecone | Chair(s): Louis-Noël Pouchet (Colorado State University, USA)
  10:30 (30m, Talk) Compiler Support for Speculation in Decoupled Access/Execute Architectures. Robert Szafarczyk, Syed Waqar Nabi, Wim Vanderbauwhede (University of Glasgow). DOI | Pre-print
  11:00 (30m, Talk) Secure Scripting with CHERIoT MicroPython. Duncan Lowther, Dejice Jacob, Jacob Trevor, Jeremy Singer (University of Glasgow). DOI | Pre-print
  11:30 (30m, Talk) Automatic Test Case Generation for Jasper App HDL Compiler: An Industry Experience. Mirlaine Crepalde, Augusto Mafra, Lucas Pereira Cavalini, Lucas Martins, Guilherme Amorim, Pedro Henrique Santos, Fabiano Peixoto (Cadence Design Systems)

12:00 - 13:00 | Lunch
  12:00 (60m) Lunch

Unscheduled Events
  Not scheduled (Coffee break) Morning Break II
  Not scheduled (Coffee break) Afternoon Break
Artifact Submission Guidelines
Components of the Artifact
- The submission version of your paper/poster.
- A README file (PDF or plain-text format) that explains your artifact (details below).
- The artifact itself, packaged as a single archive file. Artifacts smaller than 600MB can be uploaded directly to the HotCRP submission site; for larger files, please provide a URL pointing to the artifact, and make sure the URL preserves the anonymity of the reviewers. Please use a widely available compressed archive format such as ZIP (.zip), tar and gzip (.tgz), or tar and bzip2 (.tbz2), and ensure the file name carries the suffix indicating its format (see the packaging sketch after this list).
- Those seeking the “Available” badge must follow ACM’s instructions for uploading the archive to a publicly available, immutable location in order to receive the badge.
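As an illustration only, producing a correctly suffixed archive can be as simple as the Python sketch below; the directory name artifact/ and the output file name are assumptions, not a required layout.

```python
# Hypothetical packaging helper (not part of the official guidelines):
# bundles an artifact directory into a gzip-compressed tar archive with
# the expected ".tgz" suffix. All names below are placeholders.
import tarfile

def package_artifact(src_dir: str = "artifact",
                     out_name: str = "cc25-artifact.tgz") -> None:
    # "w:gz" writes a gzip-compressed tar file; arcname controls the
    # top-level directory name inside the archive.
    with tarfile.open(out_name, "w:gz") as tar:
        tar.add(src_dir, arcname="artifact")

if __name__ == "__main__":
    package_artifact()
    print("wrote cc25-artifact.tgz")
```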
General Guidelines
First and foremost, please simplify the entire artifact evaluation process as much as possible. Installing the artifact and reproducing the results of the paper are two separate steps. For the installation step, a “push-button approach” is ideal: a virtual machine or Docker image, or scripts that automate the installation of all dependencies together with the compilation and build commands for an operating system (typically Linux); a minimal sketch of such a script follows below. Admin or superuser privileges should not be necessary. Please include all the datasets and benchmarks used. If manual installation steps are unavoidable, note that we will not ask the evaluators to spend excessive time on them; it is therefore in the authors’ best interest to provide easy instructions. Debugging the installation of the artifact is not the evaluators’ job.
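A minimal sketch of such a push-button driver is shown below; it assumes a CMake-based artifact, which is an illustrative assumption about the project, not a requirement of the AE process.

```python
# Minimal sketch of a "push-button" build driver. The CMake commands
# are an assumption for illustration; adapt STEPS to your build system.
import subprocess
import sys

STEPS = [
    ["cmake", "-S", ".", "-B", "build"],  # configure into ./build
    ["cmake", "--build", "build", "-j"],  # compile in parallel
]

def main() -> int:
    for cmd in STEPS:
        print("+", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("step failed:", " ".join(cmd), file=sys.stderr)
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```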
If the authors wish to apply for the “Results Reproduced” (blue) badge and the experimental test bed requires specialized hardware, please consider providing open access to a non-tracked test bed that the evaluators can use. Unfortunately, we cannot promise an attempt to reproduce the results if the necessary hardware is inaccessible or unavailable to our evaluators.
Artifact Documentation Guidelines
Please include the following in your artifact’s documentation:
- Brief description of the artifact: One or two paragraphs describing the artifact. Please do not rephrase the paper’s abstract. The description should convey the type of artifact and the background expected of evaluators; for example, “an artifact implemented as an LLVM pass that performs data-race detection”. List similar tools (this helps with assigning the artifact to evaluators) and describe the overall behavior of the artifact (its inputs and outputs), together with any other information that helps evaluators quickly understand what the artifact does.
- Hardware prerequisites: Laptop, regular desktop, workstation, compute server, GPU(s), FPGAs, or something more specialized. Please give the rough number of compute cores and the amount of memory (DRAM) required (e.g., 32 GB).
- Software prerequisites: Operating system (Linux, Windows, macOS) and version (e.g., Ubuntu 20.04 LTS); the set of compiler and runtime versions (e.g., GCC 8.1, OpenMP 5.1, CUDA 9.0). Please do not assume that the evaluators will have CMake, Python, or any other tool installed; your complete artifact should provide these. A sketch that gathers this information automatically follows this list.
- Description of your expectations: Approximate time to install, run/use the artifact, and (if requested) reproduce the results.
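The hardware and software details above can be collected mechanically; the sketch below is one hypothetical way to gather them for the README (the probed compiler, gcc, and the Linux-only memory readout are examples, not requirements).

```python
# Sketch: print the hardware/software details the checklist asks for,
# so they can be pasted into the README. gcc is only an example probe.
import os
import platform
import subprocess

def compiler_version(cmd: str = "gcc") -> str:
    try:
        out = subprocess.run([cmd, "--version"],
                             capture_output=True, text=True)
        return out.stdout.splitlines()[0] if out.stdout else "unknown"
    except FileNotFoundError:
        return f"{cmd} not found"

if __name__ == "__main__":
    print("OS:        ", platform.platform())
    print("Python:    ", platform.python_version())
    print("CPU cores: ", os.cpu_count())
    print("Compiler:  ", compiler_version())
    try:  # total DRAM; Linux only
        with open("/proc/meminfo") as f:
            print("Memory:    ", f.readline().strip())
    except OSError:
        pass
```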
NOTE: The AE committee will not invest a substantial amount of time or effort in debugging the installation and use process. The information provided will be for internal use only.

Submitting the Full Artifact
Outcomes
As discussed above, only two types of badges will be awarded in this evaluation: red (Artifacts Evaluated) and/or blue (Results Reproduced). In the submission form you will state which badges you are applying for: red, blue, or both. Note that the blue badge also requires installing and using the artifact, so the installation and execution of experiments must be thoroughly streamlined. Please do not apply for the blue badge if you are only interested in demonstrating that the artifact is functional (light-red badge) or reusable (dark-red badge). If you select only the red badge, the evaluators will make no attempt to reproduce the results.
For the red badges, your artifact must be: Documented, Consistent, Complete and Exercisable. For more details, please see the ACM Artifact Description.
If you are applying for the blue badge, include scripts to execute, gather, and plot the main results of your paper. The README must include a statement or paragraph describing the criteria for deeming the reproduced results similar enough to those in the paper; a sketch of such a check follows below. Authors may include, in the Artifact Description or in a separate PDF, an excerpt of the paper with the plot/graph/table to be reproduced.
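For instance, a “similar enough” criterion can be made executable. The sketch below assumes hypothetical results stored as name,value rows in results.csv and a 10% tolerance; both are illustrative choices that your README would spell out.

```python
# Sketch of an executable "similar enough" check for the blue badge.
# REFERENCE values, the CSV layout, and the 10% tolerance are all
# hypothetical; state your actual criteria in the README.
import csv

REFERENCE = {"bench_a": 1.42, "bench_b": 2.10}  # placeholder paper numbers
TOLERANCE = 0.10                                # accept +/- 10%

def check(path: str = "results.csv") -> bool:
    ok = True
    with open(path, newline="") as f:
        for name, value in csv.reader(f):
            measured, expected = float(value), REFERENCE[name]
            within = abs(measured - expected) / expected <= TOLERANCE
            print(f"{name}: measured {measured:.2f}, expected "
                  f"{expected:.2f} -> {'ok' if within else 'OUTSIDE tolerance'}")
            ok = ok and within
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if check() else 1)
```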
Other suggestions:
- Clearly identify the hardware and software prerequisites: number of cores, memory, operating system, compiler used (with version), Python version, etc. This also helps us select evaluators. We may decline to evaluate an artifact whose prerequisites are too complicated to satisfy.
- Include scripts to perform all the required tasks.
- Consider using a Docker or VM image to simplify all steps. We recommend providing the final artifact via Zenodo, figshare, Dryad, or a similar archival site.
- Describe the steps to follow in the README file.
- If applying for the blue badge, briefly and succinctly describe the result being reproduced, together with reasonable expectations on possible variations of the results. Such variations can arise when evaluators use a slightly different test bed than the one recommended by the authors.
- At least one author should be designated as the point of contact (PoC) for possible clarifications. We expect any required clarification to be resolved within 24 hours of the request. All communication will be done through HotCRP.
Call for Papers
The ACM SIGPLAN 2025 International Conference on Compiler Construction (CC 2025) is interested in work on processing programs in the most general sense: analyzing, transforming or executing input programs that describe how a system operates, including traditional compiler construction as a special case.
Original contributions are solicited on the topics of interest which include, but are not limited to:
- Compilation and interpretation techniques, including program representation, analysis, and transformation; code generation, optimization, and synthesis; and the verification thereof
- Run-time techniques, including memory management, virtual machines, and dynamic and just-in-time compilation
- Programming tools, including refactoring editors, checkers, verifiers, compilers, debuggers, and profilers
- Techniques, ranging from programming languages to micro-architectural support, for specific domains such as secure, parallel, distributed, embedded, or mobile environments
- Design and implementation of novel language constructs, programming models, and domain-specific languages
- Implications for compiler construction from emerging or non-conventional applications (e.g., deep learning, quantum computing, DNA computing)
CC is an ACM SIGPLAN conference and implements guidelines and procedures recommended by SIGPLAN. Prospective authors should be aware of ACM’s Copyright policies. Proceedings will be made available online in the ACM digital library from one week before to one week after the conference.
Call for Tool and Practical Experience Papers
This year, CC will offer a second category of papers called “Tools and Practical Experience”. Papers in this category must either give a clear account of a tool’s functionality or summarize a practical experience with realistic case studies. The successful evaluation of an artifact is mandatory for a Tool Paper: authors of work conditionally accepted as a Tool Paper must submit an artifact to the Artifact Evaluation Committee, and its successful evaluation is a requirement for final acceptance.
Authors of practical experience papers are encouraged, but not required, to submit an artifact to the Artifact Evaluation process.
The selection criteria for papers in this category are:
- Originality: Papers should present CC-related technologies applied to real-world problems with scope or characteristics that set them apart from previous solutions.
- Usability: The presented tools or compilers should have broad usage or applicability. They are expected to assist in CC-related research, or should be extensible to investigate or demonstrate new technologies. If significant components are not yet implemented, the paper will not be considered.
- Documentation: The tool or compiler should be presented on a website that provides documentation and further information about the tool.
- Benchmark Repository: A suite of benchmarks for testing should be provided.
- Availability: The tool or compiler should be available for public use.
- Foundations: Papers should incorporate the principles underpinning compiler construction. However, a thorough discussion of theoretical foundations is not required; a summary should suffice.
- Artifact Evaluation: The submitted artifact must be functional and support the claims made in the paper. Submission of an artifact is mandatory for papers presenting a tool.
Tool and Practical Experience papers abide by the same limit of 10 pages in the ACM format, references excluded, and are not distinguished in the final proceedings. We encourage shorter submissions that give an account of how scientific ideas have been incorporated and used in practice.
Submission Guidelines
Submission site: https://cc25.hotcrp.com
All submissions must be made electronically through the conference submission website and include an abstract (100–400 words), author contact information, and the full list of authors and their affiliations. Full paper submissions must be in PDF, formatted for printing on US letter-size paper.
All papers must be prepared in ACM Conference Format using the two-column acmart format: use the options \documentclass[sigplan,10pt,review,anonymous]{acmart} for LaTeX (a minimal skeleton is sketched below), and interim-layout.docx for Word. Important note: the Word template (interim-layout.docx) on the ACM website uses a 9pt font; you need to increase it to 10pt.
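For LaTeX users, an unofficial minimal skeleton with these options might look as follows; the title, the placeholder author/affiliation, and the bibliography file name refs.bib are illustrative, and the anonymous option suppresses author information in the PDF.

```latex
\documentclass[sigplan,10pt,review,anonymous]{acmart}
\begin{document}
\title{Your Title Here}
\author{Anonymous Author}  % suppressed by the 'anonymous' option
\affiliation{\institution{Anonymous Institution}\country{Country}}
\begin{abstract}
An abstract of 100--400 words.
\end{abstract}
\maketitle
% Body: at most 10 pages of text and figures; references are not counted.
\bibliographystyle{ACM-Reference-Format}
\bibliography{refs}  % refs.bib is a placeholder name
\end{document}
```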
Papers should contain a maximum of 10 pages of text and figures (in a typeface no smaller than 10 point), NOT INCLUDING references. There is no page limit for references, and each reference must include the names of all authors (do not use “et al.”).
Appendices are not allowed, but the authors may submit anonymous supplementary material, such as proofs, source code, or data sets; all supplementary material must be in PDF or ZIP format. Looking at supplementary material is at the discretion of the reviewers.
Papers may be resubmitted to the submission site multiple times up until the deadline, but the last version submitted before the deadline will be the version reviewed. Papers that exceed the length requirement, that deviate from the expected format, or that are submitted late will be rejected.
CC follows ACM’s Copyright Policies. Prospective authors should adhere to SIGPLAN’s Republication Policy and to ACM’s Policy and Procedures on Plagiarism.
By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.
Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start, and we have recently made a commitment to collect ORCID IDs from all of our published authors. The collection process has started and will roll out as a requirement throughout 2022. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.
Double-Blind Reviewing Process
CC uses a double-blind reviewing process. Authors will need to identify any potential conflicts of interest with PC members, as defined in the SIGPLAN policy.
To facilitate the double-blind reviewing process, submissions (including supplementary material) should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any references to their own related work are in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”).
The purpose of this process is to help the PC and external reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult. In particular, important background references should not be omitted or anonymized.

Artifact Evaluation
Authors are encouraged to submit their artifacts for the Artifact Evaluation (AE). The Artifact Evaluation process begins after the acceptance notification, and is run by a separate committee whose task is to assess how the artifacts support the work described in the papers.
To ease the organization of the AE committee, we kindly ask authors to indicate, at the time they submit the paper, whether they are interested in submitting an artifact.
Papers that go through the Artifact Evaluation process successfully will receive a seal of approval printed on the papers themselves.
Authors of accepted papers are encouraged, but not required, to make these materials publicly available upon publication of the proceedings, by including them as “source materials” in the ACM Digital Library.
Additional information will be made available later.
Publication Date
AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work.
Artifact Information
Overview
Authors of accepted CC 2025 papers are invited to formally submit their supporting materials to the Artifact Evaluation (AE) process. The Artifact Evaluation Committee attempts to reproduce experiments (in broad strokes) and assess whether the submitted artifacts support the claims made in the paper. Submission is voluntary and does not influence the final decision regarding paper/poster acceptance.
We invite every author of an accepted CC paper to consider submitting an artifact. It is good for the community as a whole. At CC, we follow ACM’s artifact reviewing and badging policy. ACM describes a research artifact as follows:
“By “artifact” we mean a digital object that was either created by the authors to be used as part of the study or generated by the experiment itself. For example, artifacts can be software systems, scripts used to run experiments, input datasets, raw data collected in the experiment, or scripts used to analyze results.”
Badge Types
The artifact evaluation process is single-blind. Hence, we kindly request that authors disable any form of analytics, tracking, or logging on the sites and services used to share the artifact with the reviewers. Each submitted artifact is evaluated by at least one member of the artifact evaluation committee; ideally, we try to have two reviewers for each artifact.

During the process, authors and evaluators may communicate anonymously with each other to overcome technical difficulties. Ideally, we hope to see all submitted artifacts successfully pass the artifact evaluation.

Evaluators are asked to test the functionality and the claims associated with the artifacts of accepted papers. We will follow the ACM badge award criteria.
ACM recommends awarding three different types of badges to communicate how the artifact has been evaluated. A single paper can receive up to three badges — one badge of each type.
At CC the artifact evaluation committee will award two types of badges:
- Red badge (Artifacts Evaluated): Depending on the usability and robustness of the artifact, either the light-red (Functional) badge or the dark-red (Reusable) badge is awarded; the latter is granted when the artifact far exceeds usability expectations.
- Blue badge (Results Reproduced): Awarded if the main results of the paper can be reproduced with the artifact provided by the authors.

In addition, the green badge (Artifacts Available) is awarded if the artifact is publicly available. This badge is granted directly by the publisher, provided the authors supply a link to the deposited artifact.
Note that some variation of empirical and numerical results is tolerated; in fact, it is often unavoidable in computer systems research (see “How to report and compare empirical results?” in the AE FAQ at https://www.ctuning.org).