PPoPP 2023 Main Conference
PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; datacenters; clouds; and large scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.
Proceedings will be available in the ACM Digital Library.
Sun 26 Feb (displayed time zone: Eastern Time, US & Canada)

18:00 - 20:00 | Poster Session
18:00 (2h) Poster | POSTER: Stream-K: Work-centric Parallel Decomposition for Dense Matrix-Matrix Multiplication on the GPU | Muhammad Osama (University of California, Davis); Duane Merrill (NVIDIA Corporation); Cris Cecka (NVIDIA Corporation); Michael Garland (NVIDIA); John D. Owens (University of California, Davis) [Pre-print]
18:00 (2h) Poster | POSTER: Unexpected Scaling in Path Copying Trees | Vitaly Aksenov (Inria & ITMO University); Trevor Brown (University of Toronto); Alexander Fedorov (IST Austria); Ilya Kokorin (ITMO University)
18:00 (2h) Poster | POSTER: Transactional Composition of Nonblocking Data Structures | Wentao Cai (University of Rochester); Haosen Wen (University of Rochester); Michael L. Scott (University of Rochester)
18:00 (2h) Poster | POSTER: The ERA Theorem for Safe Memory Reclamation
18:00 (2h) Poster | POSTER: AArch64 Atomics: Might they be harming your performance?
18:00 (2h) Poster | POSTER: Fast Parallel Exact Inference on Bayesian Networks | Jiantong Jiang (The University of Western Australia); Zeyi Wen (The Hong Kong University of Science and Technology (Guangzhou)); Atif Mansoor (The University of Western Australia); Ajmal Mian (The University of Western Australia)
18:00 (2h) Poster | POSTER: High-Throughput GPU Random Walk with Fine-tuned Concurrent Query Processing | Cheng Xu (Shanghai Jiao Tong University); Chao Li (Shanghai Jiao Tong University); Pengyu Wang (Shanghai Jiao Tong University); Xiaofeng Hou (Hong Kong University of Science and Technology); Jing Wang (Shanghai Jiao Tong University); Shixuan Sun (National University of Singapore); Minyi Guo (Shanghai Jiao Tong University); Hanqing Wu (Alibaba Inc.); Dongbai Chen (Alibaba Inc.); Xiangwen Liu (Alibaba Inc.)
18:00 (2h) Poster | POSTER: Efficient All-reduce for Distributed DNN Training in Optical Interconnect Systems | Fei Dai (University of Otago); Yawen Chen (University of Otago); Zhiyi Huang (University of Otago); Haibo Zhang (University of Otago); Fangfang Zhang (Qilu University of Technology)
18:00 (2h) Poster | POSTER: CuPBoP: A framework to make CUDA portable | Ruobing Han (Georgia Institute of Technology); Jun Chen (Georgia Institute of Technology); Bhanu Garg (Georgia Institute of Technology); Jeffrey Young (Georgia Institute of Technology); Jaewoong Sim (Seoul National University); Hyesoon Kim (Georgia Tech)
18:00 (2h) Poster | POSTER: Generating Fast FFT Kernels on CPUs via FFT-Specific Intrinsics | Zhihao Li (SKLP, Institute of Computing Technology, CAS); Haipeng Jia (SKLP, Institute of Computing Technology, CAS); Yunquan Zhang (SKLP, Institute of Computing Technology, CAS); Yuyan Sun (Huawei Technologies Co., Ltd.); Yiwei Zhang (SKLP, Institute of Computing Technology, CAS); Tun Chen (SKLP, Institute of Computing Technology, CAS)
18:00 (2h) Poster | POSTER: Learning to Parallelize in a Shared-Memory Environment with Transformers [Pre-print]
Mon 27 Feb (displayed time zone: Eastern Time, US & Canada)

07:45 - 08:10
07:45 (25m) Coffee break | Coffee/Tea (Catering)

08:10 - 08:30

10:00 - 12:00
10:00 (20m) Talk | Boosting Performance and QoS for Concurrent GPU B+trees by Combining-based Synchronization | Weihua Zhang (Fudan University); Chuanlei Zhao (Fudan University); Lu Peng (Tulane University); Yuzhe Lin (Fudan University); Fengzhe Zhang (Fudan University); Yunping Lu (Fudan University)
10:20 (20m) Talk | The State-of-the-Art LCRQ Concurrent Queue Algorithm Does NOT Require CAS2
10:40 (20m) Talk | Provably Good Randomized Strategies for Data Placement in Distributed Key-Value Stores | Zhe Wang (Washington University in St. Louis); Jinhao Zhao (Washington University in St. Louis); Kunal Agrawal (Washington University in St. Louis, USA); Jing Li (New Jersey Institute of Technology); He Liu (Apple Inc.); Meng Xu (Apple Inc.)
11:00 (20m) Talk | 2PLSF: Two-Phase Locking with Starvation-Freedom | Pedro Ramalhete (Cisco Systems); Andreia Correia (University of Neuchâtel); Pascal Felber (University of Neuchâtel)
11:20 (20m) Talk | Provably Fast and Space-Efficient Parallel Biconnectivity | Xiaojun Dong (University of California, Riverside); Letong Wang (University of California, Riverside); Yan Gu (UC Riverside); Yihan Sun (University of California, Riverside)
11:40 (20m) Talk | Practically and Theoretically Efficient Garbage Collection for Multiversioning | Yuanhao Wei (Carnegie Mellon University, USA); Guy E. Blelloch (Carnegie Mellon University, USA); Panagiota Fatourou (FORTH ICS and University of Crete, Greece); Eric Ruppert (York University)

12:00 - 13:50
12:00 (1h50m) Lunch | Lunch (Catering)

13:50 - 15:10
13:50 (20m) Talk | A Programming Model for GPU Load Balancing | Muhammad Osama (University of California, Davis); Serban D. Porumbescu (University of California, Davis); John D. Owens (University of California, Davis)
14:10 (20m) Talk | Exploring the Use of WebAssembly in HPC | Mohak Chadha (Chair of Computer Architecture and Parallel Systems, Technical University of Munich); Nils Krueger (Chair of Computer Architecture and Parallel Systems, Technical University of Munich); Jophin John (Chair of Computer Architecture and Parallel Systems, Technical University of Munich); Anshul Jindal (Chair of Computer Architecture and Parallel Systems, Technical University of Munich); Michael Gerndt (TUM); Shajulin Benedict (Indian Institute of Information Technology Kottayam, Kerala, India)
14:30 (20m) Talk | Fast and Scalable Channels in Kotlin Coroutines
14:50 (20m) Talk | High-Performance GPU-to-CPU Transpilation and Optimization via High-Level Parallel Constructs | William S. Moses (Massachusetts Institute of Technology); Ivan Radanov Ivanov (Tokyo Institute of Technology); Jens Domke (RIKEN Center for Computational Science); Toshio Endo (Tokyo Institute of Technology); Johannes Doerfert (Lawrence Livermore National Laboratory); Oleksandr Zinenko (Google)

15:40 - 17:00 | Session 3: Practice, at Montreal 4. Chair: I-Ting Angelina Lee (Washington University in St. Louis, USA)
15:40 (20m) Talk | A Scalable Hybrid Total FETI Method for Massively Parallel FEM Simulations | Kehao Lin (Hangzhou Dianzi University); Chunbao Zhou (Computer Network Information Center, Chinese Academy of Sciences); Yan Zeng (Hangzhou Dianzi University); Ningming Nie (Computer Network Information Center, Chinese Academy of Sciences); Jue Wang (Computer Network Information Center, Chinese Academy of Sciences); Shigang Li (Beijing University of Posts and Telecommunications); Yangde Feng (Computer Network Information Center, Chinese Academy of Sciences); Yangang Wang (Computer Network Information Center, Chinese Academy of Sciences); Kehan Yao (Hangzhou Dianzi University); Tiechui Yao (Computer Network Information Center, Chinese Academy of Sciences); Jilin Zhang (Hangzhou Dianzi University); Jian Wan (Hangzhou Dianzi University)
16:00 (20m) Talk | Lifetime-based Optimization for Simulating Quantum Circuits on a New Sunway Supercomputer | Yaojian Chen (Tsinghua University); Yong Liu (National Supercomputing Center in Wuxi); Xinmin Shi (Information Engineering University); Jiawei Song (National Supercomputing Center in Wuxi); Xin Liu (National Supercomputing Center in Wuxi); Lin Gan (Tsinghua University); Chu Guo (Information Engineering University); Haohuan Fu (Tsinghua University); Jie Gao (National Research Centre of Parallel Engineering and Technology); Dexun Chen (National Supercomputing Center in Wuxi); Guangwen Yang (Tsinghua University)
16:20 (20m) Talk | High-Performance Filters for GPUs | Hunter James McCoy (University of Utah); Steven Hofmeyr (Lawrence Berkeley National Laboratory); Katherine Yelick (University of California at Berkeley & Lawrence Berkeley National Lab); Prashant Pandey (University of Utah)
16:40 (20m) Talk | High-Performance and Scalable Agent-Based Simulation with BioDynaMo | Lukas Breitwieser (European Organization for Nuclear Research (CERN), ETH Zurich); Ahmad Hesam (Delft University of Technology); Fons Rademakers (European Organization for Nuclear Research (CERN)); Juan Gómez Luna (ETH Zurich); Onur Mutlu (ETH Zurich) [Pre-print, Media Attached]

17:00 - 18:00
Tue 28 Feb (displayed time zone: Eastern Time, US & Canada)

12:00 - 13:50
12:00 (1h50m) Lunch | Lunch (Catering)

13:50 - 15:10 | Session 5: Decompositions, at Montreal 4. Chair: Milind Chabbi (Uber Technologies Inc.)
13:50 (20m) Talk | TDC: Towards Extremely Efficient CNNs on GPUs via Hardware-Aware Tucker Decomposition | Lizhi Xiang (University of Utah); Miao Yin (Rutgers University); Chengming Zhang (Indiana University); Aravind Sukumaran-Rajam (Meta); Saday Sadayappan (University of Utah, USA); Bo Yuan (Rutgers University); Dingwen Tao (Indiana University)
14:10 (20m) Talk | Improving Energy Saving of One-sided Matrix Decompositions on CPU-GPU Heterogeneous Systems | Jieyang Chen (University of Alabama at Birmingham); Xin Liang (University of Kentucky); Kai Zhao (University of Alabama at Birmingham); Hadi Zamani Sabzi (University of California, Riverside); Laxmi Bhuyan (University of California, Riverside); Zizhong Chen (University of California, Riverside)
14:30 (20m) Talk | End-to-End LU Factorization of Large Matrices on GPUs | Yang Xia; Peng Jiang (The University of Iowa); Rajiv Ramnath (The Ohio State University); Gagan Agrawal (Augusta University)
14:50 (20m) Talk | Fast Eigenvalue Decomposition via WY Representation on Tensor Core | Shaoshuai Zhang (University of Houston); Ruchi Shah (University of Houston); Hiroyuki Ootomo (Tokyo Institute of Technology); Rio Yokota (Tokyo Institute of Technology); Panruo Wu (University of Houston)

15:40 - 16:40
15:40 (20m) Talk | iQAN: Fast and Accurate Vector Search with Efficient Intra-Query Parallelism on Multi-Core Architectures | Zhen Peng (William & Mary); Minjia Zhang (Microsoft Research); Kai Li (Kent State University); Ruoming Jin (Kent State University); Bin Ren (College of William & Mary)
16:00 (20m) Talk | WISE: Predicting the Performance of Sparse Matrix Vector Multiplication with Machine Learning | Serif Yesil (University of Illinois Urbana-Champaign); Azin Heidarshenas (University of Illinois Urbana-Champaign); Adam Morrison (Tel Aviv University); Josep Torrellas (University of Illinois at Urbana-Champaign)
16:20 (20m) Talk | Efficient Direct Convolution Using Long SIMD Instructions | Alexandre Santana (Barcelona Supercomputing Center); Adrià Armejach Sanosa (Barcelona Supercomputing Center & Universitat Politècnica de Catalunya); Marc Casas (Barcelona Supercomputing Center)

16:40 - 18:00 | Break

18:00 - 22:00
Wed 1 Mar (displayed time zone: Eastern Time, US & Canada)

10:00 - 11:40 | Session 7: Machine Learning, at Montreal 4. Chair: Milind Kulkarni (Purdue University)
10:00 (20m) Talk | TGOpt: Redundancy-Aware Optimizations for Temporal Graph Attention Networks | Yufeng Wang (University of Illinois at Urbana-Champaign); Charith Mendis (University of Illinois at Urbana-Champaign)
10:20 (20m) Talk | Dynamic N:M Fine-grained Structured Sparse Attention Mechanism | Zhaodong Chen (University of California, Santa Barbara); Zheng Qu (University of California, Santa Barbara); Yuying Quan (University of California, Santa Barbara); Liu Liu; Yufei Ding (UC Santa Barbara); Yuan Xie (UCSB)
10:40 (20m) Talk | Elastic Averaging for Efficient Pipelined DNN Training | Zihao Chen (East China Normal University); Chen Xu (East China Normal University); Weining Qian (East China Normal University); Aoying Zhou (East China Normal University)
11:00 (20m) Talk | DSP: Efficient GNN Training with Multiple GPUs | Zhenkun Cai (The Chinese University of Hong Kong); Qihui Zhou (The Chinese University of Hong Kong); Xiao Yan (Southern University of Science and Technology); Da Zheng (Amazon Web Services); Xiang Song (Amazon Web Services); Chenguang Zheng (The Chinese University of Hong Kong); James Cheng (The Chinese University of Hong Kong); George Karypis (Amazon Web Services)
11:20 (20m) Talk | PiPAD: Pipelined and Parallel Dynamic GNN Training on GPUs

12:00 - 12:20
Call for Papers
PPoPP 2023: 28th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming
Montreal, Canada (co-located with CC 2023, HPCA 2023, and CGO 2023). Dates: 25 February - 1 March, 2023.
Submission URL: https://ppopp23.hotcrp.com
Important dates:
- Full paper submission: August 17, 2022
- Author response period: October 26–October 28, 2022
- Author notification: November 7, 2022
- Artifact submission to AE committee: November 16, 2022
- Artifact notification by AE committee: December 30, 2022
- Final paper due: January 6, 2023
All deadlines are at midnight anywhere on earth (AoE), and are firm.
Scope:
PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; data centers; clouds; and large scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.
Specific topics of interest include (but are not limited to):
- Compilers and runtime systems for parallel and heterogeneous systems
- Concurrent data structures
- Development, analysis, or management tools
- Fault tolerance for parallel systems
- Formal analysis and verification
- High-performance / scientific computing
- Libraries
- Middleware for parallel systems
- Parallel algorithms
- Parallel applications and frameworks
- Parallel programming for deep memory hierarchies including nonvolatile memory
- Parallel programming languages
- Parallel programming theory and models
- Parallelism in non-scientific workloads: web, search, analytics, cloud, machine learning
- Performance analysis, debugging and optimization
- Programming tools for parallel and heterogeneous systems
- Software engineering for parallel programs
- Software for heterogeneous architectures
- Software productivity for parallel programming
- Synchronization and concurrency control
Papers should report on original research relevant to parallel programming and should contain enough background materials to make them accessible to the entire parallel programming research community. Papers describing experience should indicate how they illustrate general principles or lead to new insights; papers about parallel programming foundations should indicate how they relate to practice. PPoPP submissions will be evaluated based on their technical merit and accessibility. Submissions should clearly motivate the importance of the problem being addressed, compare to the existing body of work on the topic, and explicitly and precisely state the paper’s key contributions and results towards addressing the problem. Submissions should strive to be accessible both to a broad audience and to experts in the area.
Paper Submission:
Conference submission site: https://ppopp23.hotcrp.com.
All submissions must be made electronically through the conference web site and must include an abstract (100–400 words), author contact information, and the full list of authors and their affiliations. Full paper submissions must be in PDF format, printable on both A4 and US letter size paper.
All papers must be prepared in ACM Conference Format using the 2-column acmart format: use the SIGPLAN proceedings template acmart-sigplanproc-template.tex for LaTeX, and interim-layout.docx for Word. You may also want to consult the official ACM information on the Master Article Template and related tools. Important note: the Word template (interim-layout.docx) on the ACM website uses 9pt font; you need to increase it to 10pt.
Papers should contain a maximum of 10 pages of text or figures (in a typeface no smaller than 10 point), NOT INCLUDING references. There is no page limit for references, and they must include the names of all authors (not "et al."). Appendices are not allowed, but authors may submit supplementary material, such as proofs or source code; all supplementary material must be in PDF or ZIP format. Reviewers may consult supplementary material at their discretion.
Submission is double-blind, and authors will need to identify any potential conflicts of interest with PC and Extended Review Committee members, as defined by the ACM SIGPLAN policy: http://www.sigplan.org/Resources/Policies/Review/
PPoPP 2023 will employ a double-blind reviewing process. To facilitate this process, submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any references to their own related work are in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”). The purpose of this process is to help the PC and external reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult. In particular, important background references should not be omitted or anonymized. In addition, authors should feel free to disseminate their ideas or draft versions of their paper as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. Authors with further questions on double-blind reviewing are encouraged to contact the Program Chairs by email.
Submissions should be in PDF and printable on both US Letter and A4 paper. Papers may be resubmitted to the submission site multiple times up until the deadline, but the last version submitted before the deadline will be the version reviewed. Papers that exceed the length requirement, that deviate from the expected format, or that are submitted late will be rejected.
All submissions that are not accepted for regular presentations will be automatically considered for posters. Two-page summaries of accepted posters will be included in the conference proceedings.
To allow reproducibility, we encourage authors of accepted papers to submit their papers for Artifact Evaluation (AE). The AE process begins after the acceptance notification, and is run by a separate committee whose task is to assess how the artifacts support the work described in the papers. Artifact evaluation is voluntary and will not affect paper acceptance, but will be taken into consideration when selecting papers for awards. Papers that go through the AE process successfully will receive one or several of the ACM reproducibility badges, printed on the papers themselves. More information will be posted on the AE website.
Publication Date:
The titles of all accepted papers are typically announced shortly after the author notification date (late November 2022). Note, however, that this is not the official publication date. The official publication date is the date the proceedings are made available in the ACM Digital Library. ACM will make the proceedings available via the Digital Library for one month, up to 2 weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.
ACM Publications Policies:
By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects (https://www.acm.org/publications/policies/research-involving-human-participants-and-subjects). Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.
Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start, and we have recently made a commitment to collect ORCID IDs from all of our published authors. The collection process has started and will roll out as a requirement throughout 2022. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.