CGO 2024
Sat 2 - Wed 6 March 2024, Edinburgh, United Kingdom
Wed 6 Mar 2024 12:10 - 12:30 at Tinto - Acceleration Techniques Chair(s): Amir Shaikhha

Achieving high performance for Sparse Matrix-Matrix Multiplication (SpMM) has received increasing research attention, especially on multi-core CPUs, due to the large input data size in applications such as graph neural networks (GNNs). Most existing solutions for SpMM computation follow the ahead-of-time (AOT) compilation approach, which compiles a program entirely before it is executed. AOT compilation for SpMM faces three key limitations: unnecessary memory access, additional branch overhead, and redundant instructions. These limitations stem from the fact that crucial information pertaining to SpMM is not known until runtime. In this paper, we propose JITSPMM, a just-in-time (JIT) assembly code generation framework to accelerate SpMM computation on multi-core CPUs with SIMD extensions. First, JITSPMM integrates the JIT assembly code generation technique into three widely-used workload division methods for SpMM to achieve balanced workload distribution among CPU threads. Next, with the availability of runtime information, JITSPMM employs a novel technique, coarse-grain column merging, to maximize instruction-level parallelism by unrolling the performance-critical loop. Furthermore, JITSPMM intelligently allocates registers to cache frequently accessed data to minimize memory accesses, and employs selected SIMD instructions to enhance arithmetic throughput. We conduct a performance evaluation of JITSPMM and compare it with two AOT baselines: existing SpMM implementations compiled using the Intel icc compiler with auto-vectorization, and the highly optimized SpMM routine provided by Intel MKL. Our results show that JITSPMM provides an average speedup of 3.8× and 1.4× over these baselines, respectively.
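For context, the computation the abstract targets can be sketched as a plain CSR-based SpMM kernel. This is a minimal illustrative sketch, not code from the paper: the function and variable names are assumptions, and the innermost loop over the dense columns is the performance-critical loop that the paper's JIT code generation would specialize (its trip count and the row extents are only known at runtime, which is what limits an AOT compiler here).

```python
def spmm_csr(indptr, indices, data, B, n_cols):
    """Compute C = A @ B, where A is sparse in CSR form
    (indptr, indices, data) and B is a dense row-major matrix.
    Illustrative reference kernel only -- not the paper's implementation."""
    n_rows = len(indptr) - 1
    C = [[0.0] * n_cols for _ in range(n_rows)]
    for i in range(n_rows):
        # Row extent indptr[i+1] - indptr[i] is runtime-only information.
        for p in range(indptr[i], indptr[i + 1]):
            j, v = indices[p], data[p]
            # Performance-critical loop: a JIT can unroll it for the actual
            # n_cols and keep the C[i] accumulators in (SIMD) registers.
            for k in range(n_cols):
                C[i][k] += v * B[j][k]
    return C

# Tiny usage example: A = [[1, 0], [0, 2]] in CSR, B = [[1, 2], [3, 4]].
result = spmm_csr([0, 1, 2], [0, 1], [1.0, 2.0],
                  [[1.0, 2.0], [3.0, 4.0]], 2)
# result == [[1.0, 2.0], [6.0, 8.0]]
```

In the JIT setting described by the abstract, the two inner loops would be emitted as straight-line assembly specialized to the known row lengths and column count, eliminating the branch overhead and redundant instructions the AOT version incurs.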

Wed 6 Mar

Displayed time zone: London

11:30 - 12:50
Acceleration Techniques (Main Conference) at Tinto
Chair(s): Amir Shaikhha University of Edinburgh
11:30
20m
Talk
A System-Level Dynamic Binary Translator using Automatically-Learned Translation Rules
Main Conference
Jinhu Jiang Fudan University, Chaoyi Liang Fudan University, Rongchao Dong Fudan University, Zhaohui Yang Fudan University, Zhongjun Zhou Fudan University, Wenwen Wang University of Georgia, Pen-Chung Yew University of Minnesota at Twin Cities, Weihua Zhang Fudan University
Pre-print
11:50
20m
Talk
Instruction Scheduling for the GPU on the GPU
Main Conference
Ghassan Shobaki California State University, Pınar Muyan-Özçelik California State University, Josh Hutton California State University, Bruce Linck California State University, Vladislav Malyshenko California State University, Austin Kerbow Advanced Micro Devices, Ronaldo Ramirez-Ortega California State University, Vahl Scott Gordon California State University
12:10
20m
Talk
JITSPMM: Just-in-Time Instruction Generation for Accelerated Sparse Matrix-Matrix Multiplication
Main Conference
Qiang Fu Advanced Micro Devices, Thomas B. Rolinger NVIDIA, H. Howie Huang George Washington University
Pre-print
12:30
20m
Talk
oneDNN Graph Compiler: A Hybrid Approach for High-Performance Deep Learning Compilation
Main Conference
Jianhui Li Intel, Zhennan Qin Intel, Yijie Mei Intel, Jingze Cui Intel, Yunfei Song Intel, Ciyong Chen Intel, Yifei Zhang Intel, Longsheng Du Intel, Xianhang Cheng Intel, Baihui Jin Intel, Yan Zhang Intel, Jason Ye Intel, Eric Lin Intel, Dan Lavery Intel
Pre-print