CGO 2022
Sat 2 - Wed 6 April 2022

Mon 4 Apr

Displayed time zone: Eastern Time (US & Canada)

08:45 - 09:00
09:00 - 10:00
Keynote (PPoPP) - Main Conference

Many Real-World Challenges for Effective Programming of Heterogeneous Systems

Heterogeneous systems offer tremendous opportunities through hardware innovation, but this leaves a great deal unanswered with regard to how we will program them. SYCL is a Khronos standard that extends C++ for heterogeneous programming, and it is instructive to review in terms of the practical problems inherent in extending programming for heterogeneous systems. James will discuss SYCL in order to expose key challenges, and will discuss real unsolved problems that stand in the way of ‘standard parallelism’ solving this in C++ and many other programming languages.

Speaker: James Reinders (Intel)

James Reinders is an engineer at Intel focused on enabling parallel programming in a heterogeneous world. James has helped create ten technical books related to parallel programming; his latest book is about SYCL (free download: https://www.apress.com/book/9781484255735). He has had the great fortune to help make key contributions to two of the world’s fastest computers (#1 on the Top500 list) as well as many other supercomputers and software developer tools.

10:00 - 10:20
10:20 - 11:20
Session #1: GPU - Main Conference
Chair(s): Madan Musuvathi (Microsoft Research)
10:20
15m
Talk
A Compiler Framework for Optimizing Dynamic Parallelism on GPUs [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Mhd Ghaith Olabi American University of Beirut, Juan Gómez Luna ETH Zurich, Onur Mutlu ETH Zurich, Wen-mei Hwu University of Illinois at Urbana-Champaign, Izzat El Hajj American University of Beirut
Link to publication
10:35
15m
Talk
Automatic Horizontal Fusion for GPU Kernels [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Ao Li Carnegie Mellon University, Bojian Zheng University of Toronto, Gennady Pekhimenko University of Toronto / Vector Institute, Fan Long University of Toronto, Canada
Link to publication
10:50
15m
Talk
DARM: Control-Flow Melding for SIMT Thread Divergence Reduction [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Charitha Saumya Purdue University, Kirshanthan Sundararajah Purdue University, Milind Kulkarni Purdue University
Link to publication
11:05
15m
Talk
Efficient Execution of OpenMP on GPUs [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Joseph Huber Oak Ridge National Laboratory, Melanie Cornelius Illinois Institute of Technology, Giorgis Georgakoudis Lawrence Livermore National Laboratory, Shilei Tian Stony Brook University, Jose M. Monsalve Diaz Argonne National Laboratory, Kuter Dinel Düzce University, Barbara Chapman Stony Brook University, Johannes Doerfert Argonne National Laboratory
Link to publication
11:20 - 11:40
11:40 - 12:25
Session #2: Performance - Main Conference
Chair(s): Charith Mendis (MIT CSAIL)
11:40
15m
Talk
CompilerGym: Robust, Performant Compiler Optimization Environments for AI Research [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Chris Cummins Facebook, Bram Wasti Facebook, Jiadong Guo Facebook, Brandon Cui Facebook, Jason Ansel Facebook, Sahir Gomez Facebook, Somya Jain Facebook, Jia Liu Facebook, Olivier Teytaud Facebook, Benoit Steiner Facebook, Yuandong Tian Facebook, Hugh Leather Facebook
Link to publication
11:55
15m
Talk
PALMED: Throughput Characterization for Superscalar Architectures [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Nicolas Derumigny INRIA, Fabian Gruber Université Grenoble Alpes / INRIA Grenoble Rhône-Alpes, Théophile Bastian INRIA, Christophe Guillon STMicroelectronics, Guillaume Iooss Inria, Louis-Noël Pouchet Colorado State University, Fabrice Rastello Inria, France
Link to publication
12:10
15m
Talk
SRTuner: Effective Compiler Optimization Customization By Exposing Synergistic Relations [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Sunghyun Park University of Michigan, Seyyed Salar Latifi Oskouei University of Michigan, Yongjun Park Hanyang University, Armand Behroozi University of Michigan, Byungsoo Jeon Carnegie Mellon University, Scott Mahlke University of Michigan
Link to publication
12:25 - 12:50
12:50 - 13:35
Session #3: Domain-Specific Compilation - Main Conference
Chair(s): Tobias Grosser (University of Edinburgh)
12:50
15m
Talk
GraphIt to CUDA compiler in 2021 LOC: A case for high-performance DSL implementation via staging with BuilDSL [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Ajay Brahmakshatriya Massachusetts Institute of Technology, Saman Amarasinghe Massachusetts Institute of Technology
Link to publication
13:05
15m
Talk
A Compiler for Sound Floating-Point Computations using Affine Arithmetic [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Joao Rivera ETH Zurich, Franz Franchetti Carnegie Mellon University, Markus Püschel ETH Zurich
Link to publication
13:20
15m
Talk
Aggregate Update Problem for Multi-Clocked Dataflow Languages [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Hannes Kallwies University of Lübeck, Martin Leucker University of Lübeck, Daniel Thoma University of Lübeck, Torben Scheffel University of Lübeck, Malte Schmitz University of Lübeck
Link to publication
13:35 - 13:45
13:45 - 14:45
Business Meeting - Main Conference

Tue 5 Apr


09:00 - 10:00
Keynote (CGO) - Main Conference

Compiler 2.0

When I was a graduate student a long time ago, I used to have intense conversations with and learned a lot from my peers in other areas of computer science, as the program structure, systems, and algorithms used in my compiler were very similar to and inspired by much of the work done by my peers. For example, a natural language recognition system developed by my peers, a single sequential program with multiple passes connected through IRs that systematically transformed an audio stream into text, was structurally similar to the SUIF compiler I was developing. In the intervening 30 years, the information revolution brought us unprecedented advances in algorithms (e.g., machine learning and solvers), systems (e.g., multicores and cloud computing), and program structure (e.g., serverless and low-code frameworks). Thus, a modern NLP system such as Apple’s Siri or Amazon’s Alexa, a thin client on an edge device interfacing to a massively-parallel, cloud-based, centrally-trained Deep Neural Network, has little resemblance to its predecessors. However, the SUIF compiler is still eerily similar to a state-of-the-art modern compiler such as LLVM or MLIR. What happened with compiler construction technology? At worst, as a community, we have been Luddites to the information revolution even though our technology has been critical to it. At best, we have been unable to transfer our research innovations (e.g., the polyhedral method or program synthesis) into production compilers. In this talk, I hope to inspire the compiler community to radically rethink how to build next-generation compilers by giving a few possible examples of using 21st-century program structures, algorithms, and systems in constructing a compiler.

Speaker: Saman Amarasinghe (MIT)

Saman Amarasinghe is a Professor in the Department of Electrical Engineering and Computer Science at Massachusetts Institute of Technology and a member of its Computer Science and Artificial Intelligence Laboratory (CSAIL) where he leads the Commit compiler group. Under Saman’s guidance, the Commit group developed the StreamIt, PetaBricks, Halide, Simit, MILK, Cimple, TACO, GraphIt, BioStream, CoLa and Seq programming languages and compilers, DynamoRIO, Helium, Tiramisu, Codon, StreamJIT and BuildIt compiler/runtime frameworks, Superword Level Parallelism (SLP), goSLP, VeGen and SuperVectorizer for vectorization, Ithemal machine learning based performance predictor, Program Shepherding to protect programs against external attacks, the OpenTuner extendable autotuner, and the Kendo deterministic execution system. He was the co-leader of the Raw architecture project. Saman was a co-founder of Determina, Lanka Internet Services, Venti Technologies, and DataCebo Corporations. Saman received his BS in Electrical Engineering and Computer Science from Cornell University in 1988, and his MSEE and Ph.D. from Stanford University in 1990 and 1997, respectively. He is an ACM Fellow.

10:00 - 10:20
11:20 - 11:40
11:40 - 12:10
Session #5: Natural-Language Techniques - Main Conference
Chair(s): Weng-Fai Wong (National University of Singapore)
11:40
15m
Talk
M3V: Multi-Modal Multi-View Context Embedding for Repair Operator Prediction
Main Conference
Xuezheng Xu UNSW Sydney, Xudong Wang UNSW Sydney, Jingling Xue UNSW Sydney
Link to publication
11:55
15m
Talk
Enabling Near Real-Time NLU-Driven Natural Language Programming through Dynamic Grammar Graph-Based Translation
Main Conference
Zifan Nan North Carolina State University, Xipeng Shen North Carolina State University; Facebook, Hui Guan University of Massachusetts, Amherst
Link to publication
12:10 - 12:50
12:50 - 13:35
Session #6: Binary Techniques - Main Conference
Chair(s): Wenwen Wang (University of Georgia)
12:50
15m
Talk
Recovering Container Class Types in C++ Binaries [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Xudong Wang UNSW Sydney, Xuezheng Xu UNSW Sydney, Qingan Li Wuhan University, China, Jingling Xue UNSW Sydney, Mengting Yuan Wuhan University
Link to publication
13:05
15m
Talk
Automatic Generation of Debug Headers through BlackBox Equivalence Checking [Artifact Available v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Vaibhav Kiran Kurhe Indian Institute of Technology Delhi, Pratik Karia Indian Institute of Technology Delhi, Shubhani Gupta Indian Institute of Technology Delhi, Abhishek Rose IIT Delhi, Sorav Bansal IIT Delhi and CompilerAI Labs
Link to publication
13:20
15m
Talk
Gadgets Splicing: Dynamic Binary Transformation for Precise Rewriting [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Linan Tian Chinese Academy of Sciences, Yangyang Shi Chinese Academy of Sciences, Liwei Chen Chinese Academy of Sciences, Yanqi Yang Chinese Academy of Sciences, Gang Shi Chinese Academy of Sciences
Link to publication

Wed 6 Apr


09:00 - 10:00
Keynote (HPCA) - Main Conference

Integration, Specialization and Approximation: the “ISA” of Post-Moore Servers

Datacenters are growing at unprecedented speeds, building a foundation for global IT services, cost-effective containerized apps, and novel paradigms including microservices and serverless computing. At the same time, we are entering a new era in computing where scalability no longer comes from higher density in silicon fabrication processes. Now, more than ever, server designers are in search of new avenues to bridge the gap between higher demands for scalability and the diminishing returns in server density. In this talk, I will go over the basic anatomy of system hardware and software in a modern server blade, which is primarily derived from the CPU-centric desktop PC of the 80s. I will then present opportunities for a clean-slate design of servers based on integration, specialization, and approximation as three pillars to enable server scalability in the post-Moore era.

Speaker: Babak Falsafi (EcoCloud, EPFL)

Babak is a Professor and the founding director of EcoCloud at EPFL. His contributions to computer systems include the first NUMA multiprocessors built by Sun Microsystems (WildFire/WildCat), memory streaming integrated in IBM BlueGene (temporal) and ARM cores (spatial), and performance evaluation methodologies in use by AMD, HP and Google PerfKit. He has shown that memory consistency models are neither necessary nor sufficient to achieve high performance in servers. These results led to fence speculation in modern CPUs. His work on workload-optimized server processors laid the foundation for the first generation of Cavium ARM server CPUs, ThunderX. He is a recipient of an Alfred P. Sloan Research Fellowship, and a fellow of ACM and IEEE.

10:00 - 10:20
10:20 - 11:20
Session #7: Program Analysis and Optimization - Main Conference
Chair(s): Fabrice Rastello (Inria, France)
10:20
15m
Talk
Loop Rolling for Code Size Reduction [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Rodrigo C. O. Rocha University of Edinburgh, UK, Pavlos Petoumenos University of Manchester, Björn Franke University of Edinburgh, UK, Pramod Bhatotia University of Edinburgh, Michael F. P. O'Boyle University of Edinburgh
Link to publication
10:35
15m
Talk
Solving PBQP-based Register Allocation using Deep Reinforcement Learning
Main Conference
Minsu Kim Seoul National University, Jeong-Keun Park Seoul National University, Soo-Mook Moon Seoul National University
Link to publication
10:50
15m
Talk
F3M: Fast Focused Function Merging [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Sean Stirling Codeplay, Rodrigo C. O. Rocha University of Edinburgh, UK, Hugh Leather Facebook, Kim Hazelwood Facebook, Michael F. P. O'Boyle University of Edinburgh, Pavlos Petoumenos University of Manchester, UK
Link to publication
11:05
15m
Talk
Sound, Precise, and Fast Abstract Interpretation with Tristate Numbers [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
11:20 - 11:35
Awards - Main Conference
Chair(s): Fabrice Rastello (Inria, France), Sebastian Hack (Saarland University, Germany), Tatiana Shpeisman (Google)
11:35 - 12:00
12:00 - 13:00
Session #8: IR, Encryption and Compression - Main Conference
Chair(s): Michel Steuwer (University of Edinburgh)
12:00
15m
Talk
Lambda the Ultimate SSA: Optimizing Functional Programs in SSA [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Siddharth Bhat IIT Hyderabad, Tobias Grosser University of Edinburgh, Anurudh Peduri IIIT Hyderabad
Link to publication
12:15
15m
Talk
NOELLE Offers Empowering LLVM Extensions [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Angelo Matni Northwestern University, Enrico Armenio Deiana Northwestern University, Yian Su Northwestern University, Lukas Gross Northwestern University, Souradip Ghosh Northwestern University, Sotiris Apostolakis Northwestern University, Ziyang Xu Princeton University, Zujun Tan Princeton University, Ishita Chaturvedi Princeton University, Brian Homerding Northwestern University, Tommy McMichen Northwestern University, David I. August Princeton University, Simone Campanoni Northwestern University
Link to publication
12:30
15m
Talk
HECATE: Performance-aware Scale Optimization for Homomorphic Encryption Compiler
Main Conference
Yongwoo Lee Yonsei University, Seonyoung Heo ETH Zurich, Seonyoung Cheon Yonsei University, Changsu Kim Seoul National University, Eunkyung Kim Samsung SDS, Dongyoon Lee Stony Brook University, Hanjun Kim Yonsei University
Link to publication
12:45
15m
Talk
Unified Compilation for Lossless Compression and Sparse Computing [Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Daniel Donenfeld Massachusetts Institute of Technology, Stephen Chou Massachusetts Institute of Technology, Saman Amarasinghe Massachusetts Institute of Technology
Link to publication
13:00 - 13:10
Closing Remarks - Main Conference
Chair(s): Jae W. Lee (Seoul National University, Korea)


Accepted Papers

A Compiler for Sound Floating-Point Computations using Affine Arithmetic [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
A Compiler Framework for Optimizing Dynamic Parallelism on GPUs [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
Aggregate Update Problem for Multi-Clocked Dataflow Languages [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
Automatic Generation of Debug Headers through BlackBox Equivalence Checking [Artifact Available v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Link to publication
Automatic Horizontal Fusion for GPU Kernels [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
CompilerGym: Robust, Performant Compiler Optimization Environments for AI Research [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
Comprehensive Accelerator-Dataflow Co-Design Optimization for Convolutional Neural Networks
Main Conference
Link to publication
DARM: Control-Flow Melding for SIMT Thread Divergence Reduction [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
Distill: Domain-Specific Compilation for Cognitive Models [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
Efficient Execution of OpenMP on GPUs [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Link to publication
Enabling Near Real-Time NLU-Driven Natural Language Programming through Dynamic Grammar Graph-Based Translation
Main Conference
Link to publication
F3M: Fast Focused Function Merging [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
Gadgets Splicing: Dynamic Binary Transformation for Precise Rewriting [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Link to publication
GraphIt to CUDA compiler in 2021 LOC: A case for high-performance DSL implementation via staging with BuilDSL [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
HECATE: Performance-aware Scale Optimization for Homomorphic Encryption Compiler
Main Conference
Link to publication
Lambda the Ultimate SSA: Optimizing Functional Programs in SSA [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Link to publication
Loop Rolling for Code Size Reduction [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
M3V: Multi-Modal Multi-View Context Embedding for Repair Operator Prediction
Main Conference
Link to publication
NOELLE Offers Empowering LLVM Extensions [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Link to publication
Optimizing GPU Deep Learning Operators With Polyhedral Scheduling Constraint Injection
Main Conference
Link to publication
PALMED: Throughput Characterization for Superscalar Architectures [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
Recovering Container Class Types in C++ Binaries [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
Solving PBQP-based Register Allocation using Deep Reinforcement Learning
Main Conference
Link to publication
Sound, Precise, and Fast Abstract Interpretation with Tristate Numbers [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication
SPNC: An Open-Source MLIR-based Compiler for Fast Sum-Product Network Inference on CPUs and GPUs
Main Conference
Link to publication
SRTuner: Effective Compiler Optimization Customization By Exposing Synergistic Relations [Artifact Available v1.1, Results Reproduced v1.1, Artifacts Evaluated – Functional v1.1]
Main Conference
Link to publication
Unified Compilation for Lossless Compression and Sparse Computing [Results Reproduced v1.1, Artifacts Evaluated – Reusable v1.1]
Main Conference
Link to publication

Call for Papers

The International Symposium on Code Generation and Optimization (CGO) is a premier venue to bring together researchers and practitioners working at the interface of hardware and software on a wide range of optimization and code generation techniques and related issues. The conference spans the spectrum from purely static to fully dynamic approaches, and from pure software-based methods to specific architectural features and support for code generation and optimization.

Original contributions are solicited on, but not limited to, the following topics:

  • Code Generation, Translation, Transformation, and Optimization for performance, energy, virtualization, portability, security, or reliability concerns, and architectural support
  • Efficient execution of dynamically typed and higher-level languages
  • Optimization and code generation for emerging programming models, platforms, domain-specific languages
  • Dynamic/static, profile-guided, feedback-directed, and machine learning based optimization
  • Static, Dynamic, and Hybrid Analysis for performance, energy, memory locality, throughput or latency, security, reliability, or functional debugging
  • Program characterization methods
  • Efficient profiling and instrumentation techniques; architectural support
  • Novel and efficient tools
  • Compiler design, practice and experience
  • Compiler abstraction and intermediate representations
  • Vertical integration of language features, representations, optimizations, and runtime support for parallelism
  • Solutions that involve cross-layer (HW/OS/VM/SW) design and integration
  • Deployed dynamic/static compiler and runtime systems for general purpose, embedded system and Cloud/HPC platforms
  • Parallelism, heterogeneity, and reconfigurable architectures
  • Optimizations for heterogeneous or specialized targets, GPUs, SoCs, CGRA
  • Compiler support for vectorization, thread extraction, task scheduling, speculation, transaction, memory management, data distribution and synchronization

Call for Tools and Practical Experience Papers

For the last two years, CGO has had a special category of papers called “Tools and Practical Experience,” which was very successful, and CGO will have the same category this year. Such a paper is subject to the same page-length guidelines, except that it must give a clear account of the tool's functionality, summarize practical experience with realistic case studies, and describe all the supporting artifacts available.

For papers submitted in this category that present a tool, it is mandatory to submit an artifact to the Artifact Evaluation process and to have it successfully evaluated. These papers will initially be accepted conditionally; acceptance becomes final once the artifact has been submitted and successfully evaluated. Authors are not required to make their tool publicly available, but we do require that an artifact be submitted and successfully evaluated.

Authors of papers in this category that present practical experience are encouraged, but not required, to submit an artifact to the Artifact Evaluation process.

The selection criteria for papers in this category are:

  • Originality: Papers should present CGO-related technologies applied to real-world problems with scope or characteristics that set them apart from previous solutions.
  • Usability: The presented tools or compilers should have broad usage or applicability. They are expected to assist in CGO-related research, or to be extensible to investigate or demonstrate new technologies. If significant components are not yet implemented, the paper will not be considered.
  • Documentation: The tool or compiler should be presented on a web-site giving documentation and further information about the tool.
  • Benchmark Repository: A suite of benchmarks for testing should be provided.
  • Availability: Preferences will be given to tools or compilers that are freely available (at either the source or binary level). Exceptions may be made for industry and commercial tools that cannot be made publicly available for business reasons.
  • Foundations: Papers should incorporate the principles underpinning Code Generation and Optimization (CGO). However, a thorough discussion of theoretical foundations is not required; a summary of such should suffice.
  • Artifact Evaluation: The submitted artifact must be functional and must support the claims made in the paper. Submission of an artifact is mandatory for papers presenting a tool.

Artifact Evaluation

The Artifact Evaluation process is run by a separate committee whose task is to assess how the artifacts support the work described in the papers. This process helps improve reproducibility in research, which should be a great concern to all of us. There is also some evidence that papers with a supporting artifact receive higher citations than papers without (“Artifact Evaluation: Is It a Real Incentive?” by B. Childers and P. Chrysanthis).

Authors of accepted papers at CGO have the option of submitting their artifacts for evaluation within two weeks of paper acceptance. To ease the organization of the AE committee, we kindly ask authors to indicate at the time they submit the paper whether they are interested in submitting an artifact. Papers that go through the Artifact Evaluation process successfully will receive a seal of approval printed on the papers themselves. Additional information is available on the CGO AE web page. Authors of accepted papers are encouraged, but not required, to make these materials publicly available upon publication of the proceedings, by including them as “source materials” in the ACM Digital Library.


Authors should carefully consider the difference in focus with the co-located conferences when deciding where to submit a paper. CGO will make the proceedings freely available via the ACM DL platform during the period from two weeks before to two weeks after the conference. This option will facilitate easy access to the proceedings by conference attendees, and it will also enable the community at large to experience the excitement of learning about the latest developments being presented in the period surrounding the event itself.

Submission Site

Papers can be submitted at https://cgo22.hotcrp.com.

Submission Guidelines

Please make sure that your paper satisfies ALL of the following requirements before it is submitted:

  • The paper must be original material that has not been previously published in another conference or journal and is not currently under review by another conference or journal. Note that you may submit material presented previously at a workshop without copyrighted proceedings.

  • Your submission is limited to ten (10) letter-size (8.5″x11″), single-spaced, double-column pages, using 10pt or larger font, not including references. There is no page limit for references. We highly recommend the IEEE templates for conference proceedings because this format will be used in the proceedings. Overleaf users should use this template. The ACM SIGPLAN templates may also be used for reviews, and in that case, please use the following options: \documentclass[sigplan,10pt,review,anonymous]{acmart}\settopmatter{printfolios=true,printccs=false,printacmref=false}. Submissions not adhering to these submission guidelines may be outright rejected at the discretion of the program chairs. (Please make sure your paper prints satisfactorily on letter-size (8.5″x11″) paper: this is especially important for submissions from countries where A4 paper is standard.)

  • Papers are to be submitted for double-blind review. Blind reviewing of papers will be done by the program committee, assisted by outside referees. Author names as well as hints of identity are to be removed from the submitted paper. Use care in naming your files; source file names, e.g., Joe.Smith.dvi, are often embedded in the final output as readily accessible comments. In addition, do not omit references to provide anonymity, as this leaves the reviewer unable to grasp the context. Instead, if you are extending your own work, reference and discuss the past work in third person, as if you were extending someone else’s research. We realize that for some papers it will still be obvious who the authors are. In this case, the submission will not be penalized as long as a concerted effort was made to reference and describe the relationship to the prior work as if you were extending someone else’s research. For example, if your name is Joe Smith:

    In previous work [1,2], Smith presented a new branch predictor for …. In this paper, we extend their work by …

    Bibliography

    [1] Joe Smith, “A Simple Branch Predictor for …,” Proceedings of CGO 2019.

    [2] Joe Smith, “A More Complicated Branch Predictor for…,” Proceedings of CGO 2019.

  • Your submission must be formatted for black-and-white printers and not color printers. This is especially true for plots and graphs in the paper.
  • Please make sure that the labels on your graphs are readable without the aid of a magnifying glass. Typically the default font sizes on the graph axes in a program like Microsoft Excel are too small.
  • Please number the pages.
  • The paper must be submitted in PDF. We cannot accept any other format, and we must be able to print the document just as we receive it. We strongly suggest that you use only the four widely-used printer fonts: Times, Helvetica, Courier and Symbol.
  • Please make sure that the output has been formatted for printing on LETTER size paper. If generating the paper using “dvips”, use the option “-P cmz -t letter”, and if that is not supported, use “-t letter”.
  • The Artifact Evaluation process is run by a separate committee whose task is to assess how the artifacts support the work described in the papers. Authors of accepted papers have the option of submitting their artifacts for evaluation within one week of paper acceptance. To ease the organization of the AE committee, we kindly ask authors to indicate at the time they submit the paper, whether they are interested in submitting an artifact. Papers that go through the Artifact Evaluation process successfully will receive a seal of approval printed on the papers themselves. Additional information is available on the CGO AE web page. Authors of accepted papers are encouraged, but not required, to make these materials publicly available upon publication of the proceedings, by including them as “source materials” in the ACM Digital Library.
  • Authors must register all their conflicts on the paper submission site. Conflicts are needed to ensure appropriate assignment of reviewers. If a paper is found to have an undeclared conflict that causes a problem OR if a paper is found to declare false conflicts in order to abuse or “game” the review system, the paper may be rejected.

  • Please declare a conflict of interest with the following people for any author of your paper:

    • Your Ph.D. advisor(s), post-doctoral advisor(s), Ph.D. students, and post-doctoral advisees, forever.
    • Family relations by blood or marriage, or their equivalent, forever (if they might be potential reviewers).
    • People with whom you have collaborated in the last FIVE years, including:
      • Co-authors of accepted/rejected/pending papers.
      • Co-PIs on accepted/rejected/pending grant proposals.
      • Funders (decision-makers) of your research grants, and researchers whom you fund.
    • People (including students) who shared your primary institution(s) in the last FIVE years.
    • Other relationships, such as close personal friendship, that you think might tend to affect your judgment or be seen as doing so by a reasonable person familiar with the relationship.
    • “Service” collaborations such as co-authoring a report for a professional organization, serving on a program committee, or co-presenting tutorials, do not themselves create a conflict of interest. Co-authoring a paper that is a compendium of various projects with no true collaboration among the projects does not constitute a conflict among the authors of the different projects.
    • On the other hand, there may be others not covered by the above with whom you believe a COI exists, for example, an ongoing collaboration that has not yet resulted in the creation of a paper or proposal. Please report such COIs; however, you may be asked to justify them. Please be reasonable. For example, you cannot declare a COI with a reviewer just because that reviewer works on topics similar to or related to those in your paper. The PC Chair may contact co-authors to explain a COI whose origin is unclear.
    • We hope to draw most reviewers from the PC and the ERC, but others from the community may also write reviews. Please declare all your conflicts (not just restricted to the PC and ERC). When in doubt, contact the program co-chairs.
Questions? Use the CGO Main Conference contact form.