PPoPP 2024
Sat 2 - Wed 6 March 2024 Edinburgh, United Kingdom

PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; datacenters; clouds; and large scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.

Proceedings will be available in the ACM Digital Library.

Dates

Sun 3 Mar

Displayed time zone: London

18:00 - 20:00
Reception and Poster Session (Main Conference) at Strathblane Hall
18:00
2h
Poster
POSTER - H3: A Hash-table Based and Holistically Optimized High-Performance Sparse Tensor Contraction
Main Conference
Guofeng Feng Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Weile Jia Institute of Computing Technology, Chinese Academy of Sciences, Ninghui Sun State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Guangming Tan Chinese Academy of Sciences (CAS), Jiajia Li North Carolina State University
18:00
2h
Poster
POSTER - P2Res: Pattern-Aware Sparse Communication for Scalable Recommendation Model Training
Main Conference
Jiaao He Tsinghua University, China, Shengqi Chen Tsinghua University, Jidong Zhai Tsinghua University
18:00
2h
Poster
POSTER - gZCCL: Compression-Accelerated Collective Communication Framework for GPU Clusters
Main Conference
Jiajun Huang University of California, Riverside, Sheng Di Argonne National Laboratory, Xiaodong Yu Stevens Institute of Technology, Yujia Zhai University of California, Riverside, Jinyang Liu University of California, Riverside, Yafan Huang The University of Iowa, Ken Raffenetti Argonne National Laboratory, Hui Zhou Argonne National Laboratory, Kai Zhao Florida State University, Zizhong Chen University of California, Riverside, Franck Cappello Argonne National Laboratory, Yanfei Guo Argonne National Laboratory, Rajeev Thakur Argonne National Laboratory
18:00
2h
Poster
POSTER - RadiK: Scalable Radix Top-K Selection on GPUs
Main Conference
Yifei Li Alibaba Group, Bole Zhou Independent, Jiejing Zhang Alibaba Group, Xuechao Wei Alibaba Group, Yinghan Li Alibaba Group, Yingda Chen Alibaba Group
18:00
2h
Poster
POSTER - Accelerating High-Precision Integer Multiplication used in Cryptosystems with GPUs
Main Conference
Zhuoran Ji Shandong University, Zhaorui Zhang The Hong Kong Polytechnic University, Jiming Xu Ant Group, Lei Ju Shandong University
18:00
2h
Poster
POSTER - Enabling Extreme-Scale Phase Field Simulation with In-situ Feature Extraction
Main Conference
Zhichen Feng Computer Network Information Center, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Jialin Li Computer Network Information Center, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Yaqian Gao Computer Network Information Center, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Shaobo Tian Computer Network Information Center, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Huang Ye Computer Network Information Center, Chinese Academy of Sciences, Jian Zhang Computer Network Information Center, Chinese Academy of Sciences
18:00
2h
Poster
POSTER - ParGNN: Efficient Training for Large-Scale Graph Neural Network on GPU Clusters
Main Conference
Shunde Li Computer Network Information Center, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Junyu Gu Computer Network Information Center, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Jue Wang Computer Network Information Center, Chinese Academy of Sciences, Tiechui Yao Computer Network Information Center, Chinese Academy of Sciences, ZhiQiang Liang Computer Network Information Center, Chinese Academy of Sciences, Yumeng Shi Computer Network Information Center, Chinese Academy of Sciences, Shigang Li Beijing University of Posts and Telecommunications, Weiting Xi North China Electric Power University, Shushen Li North China Electric Power University, Chunbao Zhou Computer Network Information Center, Chinese Academy of Sciences, Yangang Wang Computer Network Information Center, Chinese Academy of Sciences, Xuebin Chi Computer Network Information Center, Chinese Academy of Sciences; University of Chinese Academy of Sciences
18:00
2h
Poster
POSTER - RELAX: Durable Data Structures with Swift Recovery
Main Conference
Almog Zur Technion, Nachshon Cohen Amazon, Michal Friedman ETH Zurich, Switzerland, Erez Petrank Technion
18:00
2h
Poster
POSTER - FineCo: Fine-grained Heterogeneous Resource Management for Concurrent DNN Inferences
Main Conference
Lixian Ma State Key Lab of Processors, Institute of Computing Technology, CAS, Beijing, Haoruo Chen State Key Lab of Processors, Institute of Computing Technology, CAS, Beijing, En Shao State Key Lab of Processors, Institute of Computing Technology, CAS, Beijing, Leping Wang State Key Lab of Processors, Institute of Computing Technology, CAS, Beijing, Quan Chen Shanghai Jiao Tong University, Guangming Tan Chinese Academy of Sciences (CAS)
18:00
2h
Poster
POSTER - LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization
Main Conference
Juntao Zhao The University of Hong Kong, Borui Wan The University of Hong Kong, Chuan Wu The University of Hong Kong, Yanghua Peng ByteDance Inc., Haibin Lin ByteDance Inc.
18:00
2h
Poster
POSTER - StructMG: A Fast and Scalable Structured Multigrid
Main Conference
Yi Zong Tsinghua University, Xinliang Wang Huawei Technologies Co., Ltd, Haopeng Huang Tsinghua University, Chensong Zhang Academy of Mathematics and Systems Science, Xiaowen Xu Institute of Applied Physics and Computational Mathematics, Jian Sun CMA Earth System Modeling and Prediction Center, Bowen Yan Tsinghua University, Qin Wang Huawei Technologies Co., Ltd, Sicong Li Huawei Technologies Co., Ltd, Zhaohui Ding Huawei Technologies Co., Ltd, Wei Xue Tsinghua University
18:00
2h
Poster
POSTER - OCToPus: Semantic-aware Concurrency Control for Blockchain Transactions
Main Conference
dePaul Miller Lehigh University, Henry F. Korth Lehigh University, Roberto Palmieri Lehigh University

Mon 4 Mar

Displayed time zone: London

07:45 - 08:15
Registration and Arrival Coffee (Catering) at Strathblane Hall
07:45
30m
Coffee break
Arrival Coffee
Catering

08:15 - 08:30
Opening (Main Conference) at Pentland
Chair(s): Tobias Grosser University of Edinburgh, Boris Grot University of Edinburgh, UK, Michel Steuwer TU Berlin; University of Edinburgh
08:15
15m
Day opening
Opening
Main Conference

09:30 - 10:00
Coffee break (Catering) at Strathblane Hall
09:30
30m
Coffee break
Coffee Break
Catering

10:00 - 11:00
Synchronization and Concurrency Control 1 (Main Conference) at Moorfoot
Chair(s): Michael Scott University of Rochester
10:00
20m
Talk
Scaling Up Transactions with Slower Clocks
Main Conference
Pedro Ramalhete Cisco Systems, Andreia Correia University of Neuchâtel
10:20
20m
Talk
Locks as a Resource: Fairly Scheduling Lock Occupation with CFL
Main Conference
Jonggyu Park University of Washington, Young Ik Eom Dept. of Electrical and Computer Engineering / College of Computing and Informatics, Sungkyunkwan University
10:40
20m
Talk
Are Your Epochs Too Epic? Batch Free Can Be Harmful
Main Conference
Daewoo Kim University of Waterloo, Trevor Brown University of Waterloo, Ajay Singh University of Waterloo
11:00 - 11:30
Coffee break (Catering) at Strathblane Hall
11:00
30m
Coffee break
Coffee Break
Catering

11:30 - 12:50
Compilers and Runtimes for Parallel Systems (Main Conference) at Moorfoot
Chair(s): Mohamed Riyadh Baghdadi
11:30
20m
Talk
Liger: Interleaving Intra- and Inter-Operator Parallelism for Distributed Large Model Inference
Main Conference
Jiangsu Du Sun Yat-sen University, Jinhui Wei Sun Yat-sen University, Jiazhi Jiang Sun Yat-sen University, Shenggan Cheng National University of Singapore, Zhiguang Chen Sun Yat-sen University, Dan Huang, Yutong Lu Sun Yat-sen University
11:50
20m
Talk
A Holistic Approach to Automatic Mixed-Precision Code Generation and Tuning for Affine Programs
Main Conference
Jinchen Xu Information Engineering University, Guanghui Song Li Auto Inc., Bei Zhou Information Engineering University, Fei Li Information Engineering University, Jiangwei Hao Information Engineering University, Jie Zhao State Key Laboratory of Mathematical Engineering and Advanced Computing
12:10
20m
Talk
Language-Agnostic Static Deadlock Detection for Futures
Main Conference
Stefan K. Muller Illinois Institute of Technology
12:30
20m
Talk
Recurrence Analysis for Automatic Parallelization of Subscripted Subscripts
Main Conference
Akshay Bhosale University of Delaware, USA, Rudolf Eigenmann University of Delaware
12:50 - 14:20
12:50
90m
Lunch
Lunch
Catering

14:20 - 15:40
High Performance Computing (Main Conference) at Moorfoot
Chair(s): Helen Xu Lawrence Berkeley National Laboratory
14:20
20m
Talk
OsirisBFT: Say No to Task Replication for Scalable Byzantine Fault Tolerant Analytics
Main Conference
Kasra Jamshidi Simon Fraser University, Keval Vora Simon Fraser University
14:40
20m
Talk
Towards Scalable Unstructured Mesh Computations on Shared Memory Many-Cores
Main Conference
Haozhong Qiu, Chuanfu Xu National University of Defense Technology, Jianbin Fang National University of Defense Technology, Liang Deng China Aerodynamic Research and Development Center, Jian Zhang China Aerodynamic Research and Development Center, Qingsong Wang National University of Defense Technology, Yue Ding, Zhe Dai China Aerodynamic Research and Development Center, Yonggang Che National University of Defense Technology
15:00
20m
Talk
Extreme-scale Direct Numerical Simulation of Incompressible Turbulence on the Heterogeneous Many-core System
Main Conference
Jiabin Xie Sun Yat-sen University, Guangnan Feng Sun Yat-sen University, Han Huang Sun Yat-sen University, Junxuan Feng Sun Yat-sen University, Yutong Lu Sun Yat-sen University
15:20
20m
Talk
Pure: Evolving Message Passing To Better Leverage Shared Memory Within Nodes
Main Conference
James Psota Massachusetts Institute of Technology, Armando Solar-Lezama Massachusetts Institute of Technology
15:40 - 16:10
Coffee break (Catering) at Strathblane Hall
15:40
30m
Coffee break
Coffee Break
Catering

16:10 - 17:10
Graph Processing (Main Conference) at Moorfoot
Chair(s): Xipeng Shen North Carolina State University
16:10
20m
Talk
INFINEL: An efficient GPU-based processing method for unpredictable large output graph queries
Main Conference
Sungwoo Park Korea Advanced Institute of Science and Technology, Seyeon Oh GraphAI, Min-Soo Kim Korea Advanced Institute of Science and Technology
16:30
20m
Talk
GraphCube: Interconnection Hierarchy-aware Graph Processing
Main Conference
Xinbiao Gan National University of Defense Technology, Guang Wu National University of Defense Technology, Shenghao Qiu, Feng Xiong National University of Defense Technology, Jiaqi Si National University of Defense Technology, Jianbin Fang National University of Defense Technology, Dezun Dong National University of Defense Technology, Chunye Gong National University of Defense Technology & National Supercomputer Center in Tianjin, Tiejun Li National University of Defense Technology, Zheng Wang
16:50
20m
Talk
Exploiting Fine-Grained Redundancy in Set-Centric Graph Pattern Mining
Main Conference
Zhiheng Lin Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Ke Meng Alibaba, Chaoyang Shui Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Kewei Zhang Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Junmin Xiao Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Guangming Tan Chinese Academy of Sciences (CAS)
17:10 - 17:30
PPoPP Awards Session (Main Conference) at Moorfoot
Chair(s): Milind Chabbi Uber Technologies, I-Ting Angelina Lee Washington University in St. Louis, USA
17:10
20m
Awards
PPoPP Awards Session
Main Conference

18:00 - 19:00
Business Meeting (Main Conference) at Moorfoot
Chair(s): Michel Steuwer TU Berlin; University of Edinburgh
18:00
60m
Meeting
Business Meeting
Main Conference

Tue 5 Mar

Displayed time zone: London

09:30 - 10:00
Coffee break (Catering) at Strathblane Hall
09:30
30m
Coffee break
Coffee Break
Catering

10:00 - 11:00
Synchronization and Concurrency Control 2 (Main Conference) at Moorfoot
Chair(s): Erez Petrank Technion
10:00
20m
Talk
Memory Bounds for Bounded Queues
Main Conference
Nikita Koval JetBrains, Anton Paramonov EPFL, Petr Kuznetsov Telecom Paris, Institut Polytechnique Paris, Vitaly Aksenov City, University of London
10:20
20m
Talk
VERLIB: Concurrent Versioned Pointers
Main Conference
Guy E. Blelloch Carnegie Mellon University, USA, Yuanhao Wei Carnegie Mellon University, USA
10:40
20m
Talk
Practical Hardware Transactional vEB Trees
Main Conference
Mohammad Khalaji University of Waterloo, Trevor Brown University of Waterloo, Khuzaima Daudjee University of Waterloo, Vitaly Aksenov City, University of London
11:00 - 11:30
Coffee break (Catering) at Strathblane Hall
11:00
30m
Coffee break
Coffee Break
Catering

11:30 - 12:30
ML Workloads (Main Conference) at Moorfoot
Chair(s): Xipeng Shen North Carolina State University
11:30
20m
Talk
Tetris: Accelerating Sparse Convolution by Exploiting Memory Reuse on GPU
Main Conference
Xiaoyan Liu Beihang University, Xuegui Zheng Beihang University, Hailong Yang Beihang University, China, Zhongzhi Luan Beihang University, Depei Qian Beihang University, China
11:50
20m
Talk
Shared Memory-contention-aware Concurrent DNN Execution for Diversely Heterogeneous System-on-Chips
Main Conference
Ismet Dagli Colorado School of Mines, Mehmet Belviranli Colorado School of Mines
12:10
20m
Talk
Training one DeePMD Model in Minutes: a Step Towards Online Learning
Main Conference
Siyu Hu Institute of Computing Technology, Chinese Academy of Sciences, Tong Zhao Institute of Computing Technology, Chinese Academy of Sciences, Qiuchen Sha Institute of Computing Technology, Chinese Academy of Sciences, Enji Li Institute of Computing Technology, Chinese Academy of Sciences, Xiangyu Meng College of Computer Science and Technology, Qingdao Institute of Software, China University of Petroleum, Liping Liu Institute of Semiconductors, Chinese Academy of Sciences, Lin-Wang Wang Institute of Semiconductors, Chinese Academy of Sciences, Guangming Tan Chinese Academy of Sciences (CAS), Weile Jia Institute of Computing Technology, Chinese Academy of Sciences
12:50 - 14:20
12:50
90m
Lunch
Lunch
Catering

14:20 - 15:40
Parallel Algorithms (Main Conference) at Moorfoot
Chair(s): Prashant Pandey University of Utah
14:20
20m
Talk
ParANN: Scalable and Deterministic Parallel Graph-Based Approximate Nearest Neighbor Search Algorithms
Main Conference
Magdalen Dobson Carnegie Mellon University, Zheqi Shen University of California, Riverside, Guy E. Blelloch Carnegie Mellon University, USA, Laxman Dhulipala University of Maryland, College Park, Yan Gu University of California, Riverside, Harsha Vardhan Simhadri Microsoft Research Lab India, Yihan Sun University of California, Riverside
14:40
20m
Talk
Parallel k-Core Decomposition with Batched Updates and Asynchronous Reads
Main Conference
Quanquan C. Liu Simons Institute at UC Berkeley, Julian Shun MIT, Igor Zablotchi Mysten Labs
15:00
20m
Talk
Parallel Integer Sort: Theory and Practice
Main Conference
Xiaojun Dong University of California, Riverside, Laxman Dhulipala University of Maryland, College Park, Yan Gu University of California, Riverside, Yihan Sun University of California, Riverside
15:20
20m
Talk
Fast American Option Pricing using Nonlinear Stencils
Main Conference
Zafar Ahmad Stony Brook University, NY, USA, Reilly Browne Stony Brook University, Rezaul Chowdhury Stony Brook University, Rathish Das University of Houston, Yushen Huang Stony Brook University, Yimin Zhu Stony Brook University
15:40 - 16:10
Coffee break (Catering) at Strathblane Hall
15:40
30m
Coffee break
Coffee Break
Catering

16:10 - 17:10
Optimizing for Memory (Main Conference) at Moorfoot
Chair(s): Yan Gu University of California, Riverside
16:10
20m
Talk
ConvStencil: Transform Stencil Computation to Matrix Multiplication on Tensor Cores (Best Paper Award)
Main Conference
Yuetao Chen Microsoft Research, Kun Li Microsoft Research, Yuhao Wang Microsoft Research, Donglin Bai Microsoft Research, Lei Wang Microsoft Research, Lingxiao Ma Microsoft Research, Liang Yuan Chinese Academy of Sciences, Yunquan Zhang, Ting Cao Microsoft Research, Mao Yang Microsoft Research
16:30
20m
Talk
CPMA: An Efficient Batch-Parallel Compressed Set Without Pointers
Main Conference
Brian Wheatman Johns Hopkins University, Randal Burns Johns Hopkins, Aydin Buluc University of California at Berkeley & Lawrence Berkeley National Lab, Helen Xu Lawrence Berkeley National Laboratory
16:50
20m
Talk
Gallatin: A General-Purpose GPU Memory Manager
Main Conference
Hunter James McCoy University of Utah, Prashant Pandey University of Utah

Wed 6 Mar

Displayed time zone: London

09:30 - 10:00
Coffee Break (Catering) at Strathblane Hall
09:30
30m
Coffee break
Coffee Break
Catering

10:00 - 11:00
Linear Algebra (Main Conference) at Moorfoot
Chair(s): I-Ting Angelina Lee Washington University in St. Louis, USA
10:00
20m
Talk
A Row Decomposition-based Approach for Sparse Matrix Multiplication on GPUs
Main Conference
Pang Meng Department of Computer Science and Technology, Tsinghua University, Xiang Fei Department of Computer Science and Technology, Tsinghua University, Peng Qu Department of Computer Science and Technology, Tsinghua University, Youhui Zhang Department of Computer Science and Technology, Tsinghua University, Zhaolin Li Department of Computer Science and Technology, Tsinghua University
10:20
20m
Talk
Fast Kronecker Matrix-Matrix Multiplications on GPUs
Main Conference
Abhinav Jangda Microsoft Research, Mohit Yadav University of Massachusetts Amherst
10:40
20m
Talk
Arrow Matrix Decomposition: A Novel Approach for Communication-Efficient Sparse Matrix Multiplication
Main Conference
Lukas Gianinazzi ETH Zurich, Alexandros Nikolaos Ziogas ETH Zurich, Piotr Luczynski ETH Zurich, Langwen Huang ETH Zurich, Saleh Ashkboosh ETH Zurich, Florian Scheidl ETH Zurich, Armon Carigiet ETH Zurich, Chio Ge ETH Zurich, Nabil Abubaker ETH Zurich, Maciej Besta ETH Zurich, Tal Ben-Nun Lawrence Livermore National Laboratory, Torsten Hoefler ETH Zurich
11:00 - 11:30
Coffee Break (Catering) at Strathblane Hall
11:00
30m
Coffee break
Coffee Break
Catering

11:30 - 12:10
Applications (Main Conference) at Moorfoot
Chair(s): Milind Chabbi Uber Technologies Inc.
11:30
20m
Talk
FastFold: Optimizing AlphaFold Training and Inference on GPU Clusters
Main Conference
Shenggan Cheng National University of Singapore, Xuanlei Zhao HPC-AI Tech, Guangyang Lu HPC-AI Tech, Jiarui Fang HPC-AI Tech, Tian Zheng Xi'an Jiaotong University, Ruidong Wu HeliXon, Xiwen Zhang HeliXon, Jian Peng HeliXon, Yang You National University of Singapore
11:50
20m
Talk
AGAThA: Fast and Efficient GPU Acceleration of Guided Sequence Alignment for Long Read Mapping
Main Conference
Seongyeon Park Seoul National University, Junguk Hong Seoul National University, Jaeyong Song Seoul National University, Hajin Kim Yonsei University, Youngsok Kim Yonsei University, Jinho Lee Seoul National University
12:10 - 12:20
Closing (Main Conference) at Moorfoot
Chair(s): Michel Steuwer TU Berlin; University of Edinburgh
12:10
10m
Day closing
Closing
Main Conference

Not scheduled yet

Awards
Main Conference

Accepted Papers

  • AGAThA: Fast and Efficient GPU Acceleration of Guided Sequence Alignment for Long Read Mapping
  • A Holistic Approach to Automatic Mixed-Precision Code Generation and Tuning for Affine Programs
  • Are Your Epochs Too Epic? Batch Free Can Be Harmful
  • A Row Decomposition-based Approach for Sparse Matrix Multiplication on GPUs
  • Arrow Matrix Decomposition: A Novel Approach for Communication-Efficient Sparse Matrix Multiplication
  • ConvStencil: Transform Stencil Computation to Matrix Multiplication on Tensor Cores (Best Paper Award)
  • CPMA: An Efficient Batch-Parallel Compressed Set Without Pointers
  • Exploiting Fine-Grained Redundancy in Set-Centric Graph Pattern Mining
  • Extreme-scale Direct Numerical Simulation of Incompressible Turbulence on the Heterogeneous Many-core System
  • Fast American Option Pricing using Nonlinear Stencils
  • FastFold: Optimizing AlphaFold Training and Inference on GPU Clusters
  • Fast Kronecker Matrix-Matrix Multiplications on GPUs
  • Gallatin: A General-Purpose GPU Memory Manager
  • GraphCube: Interconnection Hierarchy-aware Graph Processing
  • INFINEL: An efficient GPU-based processing method for unpredictable large output graph queries
  • Language-Agnostic Static Deadlock Detection for Futures
  • Liger: Interleaving Intra- and Inter-Operator Parallelism for Distributed Large Model Inference
  • Locks as a Resource: Fairly Scheduling Lock Occupation with CFL
  • Memory Bounds for Bounded Queues
  • OsirisBFT: Say No to Task Replication for Scalable Byzantine Fault Tolerant Analytics
  • Parallel Integer Sort: Theory and Practice
  • Parallel k-Core Decomposition with Batched Updates and Asynchronous Reads
  • ParANN: Scalable and Deterministic Parallel Graph-Based Approximate Nearest Neighbor Search Algorithms
  • Practical Hardware Transactional vEB Trees
  • Pure: Evolving Message Passing To Better Leverage Shared Memory Within Nodes
  • Recurrence Analysis for Automatic Parallelization of Subscripted Subscripts
  • Scaling Up Transactions with Slower Clocks
  • Shared Memory-contention-aware Concurrent DNN Execution for Diversely Heterogeneous System-on-Chips
  • Tetris: Accelerating Sparse Convolution by Exploiting Memory Reuse on GPU
  • Towards Scalable Unstructured Mesh Computations on Shared Memory Many-Cores
  • Training one DeePMD Model in Minutes: a Step Towards Online Learning
  • VERLIB: Concurrent Versioned Pointers

Call for Papers

PPoPP 2024: 29th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming

Location: Edinburgh, United Kingdom (co-located with CC 2024, HPCA 2024, and CGO 2024). Dates: 2-6 March 2024.

Submission URL: https://ppopp24.hotcrp.com

Important dates:

  • Full paper submission: Friday, August 4, 2023
  • Author response period: Wednesday, October 18 – Friday, October 20, 2023
  • Author notification: Friday, November 10, 2023
  • Final paper due: January 17, 2024

Scope:

PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; accelerators such as ASICs, GPUs, FPGAs; data centers; clouds; and large scale machines). PPoPP is interested in all aspects related to improving the productivity of parallel programming on modern architectures. PPoPP is also interested in work that addresses new parallel workloads and issues that arise out of large-scale scientific or enterprise workloads.

Specific topics of interest include (but are not limited to):

  • Languages, compilers, and runtimes for parallel systems
  • Concurrent data structures
  • Development, analysis, or management tools
  • Fault tolerance for parallel systems
  • Formal analysis and verification
  • High-performance libraries
  • Middleware for parallel systems
  • Machine learning for parallel systems
  • Parallel algorithms
  • Parallel applications including scientific computing (e.g., simulation and modeling) and enterprise workloads (e.g., web, search, analytics, cloud, and machine learning)
  • Parallel frameworks
  • Parallel programming for deep memory hierarchies including nonvolatile memory
  • Parallel programming theory and models
  • Performance analysis, debugging and optimization
  • Productivity tools for parallel systems
  • Software engineering for parallel programs
  • Synchronization and concurrency control

Papers should report on original research relevant to parallel programming and should contain enough background materials to make them accessible to the entire parallel programming research community. Papers describing experience should indicate how they illustrate general principles or lead to new insights; papers about parallel programming foundations should indicate how they relate to practice. PPoPP submissions will be evaluated based on their technical merit and accessibility. Submissions should clearly motivate the importance of the problem being addressed, compare to the existing body of work on the topic, and explicitly and precisely state the paper’s key contributions and results towards addressing the problem. Submissions should strive to be accessible both to a broad audience and to experts in the area.

Paper Submission:

Conference submission site: https://ppopp24.hotcrp.com

All submissions must be made electronically through the conference web site and include an abstract (100–400 words), author contact information, and the full list of authors and their affiliations. Full paper submissions must be in PDF format printable on both A4 and US letter size paper.

All papers must be prepared in ACM Conference Format using the 2-column acmart format: use the SIGPLAN proceedings template acmart-sigplanproc-template.tex for LaTeX, and interim-layout.docx for Word. You may also want to consult the official ACM information on the Master Article Template and related tools. Important note: the Word template (interim-layout.docx) on the ACM website uses 9pt font; you need to increase it to 10pt.
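
For orientation, a minimal LaTeX skeleton consistent with these instructions might look as follows; the class options, metadata, and file names shown (e.g., references.bib) are illustrative assumptions rather than an excerpt of the official template, and acmart-sigplanproc-template.tex remains the authoritative starting point:

  % Minimal sketch of a PPoPP-style submission (illustrative only).
  % The 'review' and 'anonymous' options support double-blind submission.
  \documentclass[sigplan,review,anonymous,10pt]{acmart}
  \begin{document}
  \title{Paper Title}
  \author{Author Name}          % hidden in the PDF by the 'anonymous' option
  \affiliation{\institution{Institution}\country{Country}}
  \begin{abstract}
  An abstract of 100--400 words.
  \end{abstract}
  \maketitle
  \section{Introduction}
  Up to 10 pages of text and figures, excluding references.
  \bibliographystyle{ACM-Reference-Format}
  \bibliography{references}     % assumes a references.bib file
  \end{document}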

Papers should contain a maximum of 10 pages of text (in a typeface no smaller than 10 point) or figures, NOT INCLUDING references. There is no page limit for references, and they must include the names of all authors (not "et al."). Appendices are not allowed, but the authors may submit supplementary material, such as proofs or source code; all supplementary material must be in PDF or ZIP format. Looking at supplementary material is at the discretion of the reviewers.

Submission is double-blind, and authors will need to identify any potential conflicts of interest with PC and Extended Review Committee members, as defined here: http://www.sigplan.org/Resources/Policies/Review/ (ACM SIGPLAN policy).

PPoPP 2024 will employ a double-blind reviewing process. To facilitate this process, submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any references to their own related work are in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”). The purpose of this process is to help the PC and external reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult. In particular, important background references should not be omitted or anonymized. In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. Authors with further questions on double-blind reviewing are encouraged to contact the Program Chairs by email.

To facilitate fair and unbiased reviews for all submissions, PPoPP 2024 may utilize the Toronto Paper Matching System (TPMS, http://torontopapermatching.org/) to assign papers to reviewers. From the authors’ perspective, this means that submissions may be uploaded to TPMS.

Submissions should be in PDF and printable on both US Letter and A4 paper. Papers may be resubmitted to the submission site multiple times up until the deadline, but the last version submitted before the deadline will be the version reviewed. Papers that exceed the length requirement, that deviate from the expected format, or that are submitted late will be rejected.

All submissions that are not accepted for regular presentations will be automatically considered for posters. Two-page summaries of accepted posters will be included in the conference proceedings.

To allow reproducibility, we encourage authors of accepted papers to submit their papers for Artifact Evaluation (AE). The AE process begins after the acceptance notification, and is run by a separate committee whose task is to assess how the artifacts support the work described in the papers. Artifact evaluation is voluntary and will not affect paper acceptance, but will be taken into consideration when selecting papers for awards. Papers that go through the AE process successfully will receive one or several of the ACM reproducibility badges, printed on the papers themselves. More information will be posted on the AE website.

Deadlines expire at midnight anywhere on earth.

Publication Date:

The titles of all accepted papers are typically announced shortly after the author notification date (late November 2023). Note, however, that this is not the official publication date. The official publication date is the date the proceedings are made available in the ACM Digital Library. ACM will make the proceedings available via the Digital Library for one month, up to 2 weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

ACM Publications Policies:

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects (https://www.acm.org/publications/policies/research-involving-human-participants-and-subjects). Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties.

Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts. Please follow the https://dl.acm.org/journal/pacmcgit/author-guidelines link to see ACM’s ORCID requirements for authors.