Sparse tensors are prevalent in many data-intensive applications. However, existing automatic differentiation (AD) frameworks are tailored towards dense tensors, making it challenging to compute gradients through sparse tensor operations efficiently: irregular sparsity patterns can incur substantial memory and computational overheads. We propose a novel framework that enables efficient AD of sparse tensors. The key aspects of our work are a compilation pipeline that leverages two intermediate DSLs with AD-agnostic domain-specific optimizations, followed by efficient C++ code generation. We demonstrate the effectiveness of our framework in terms of performance and scalability through extensive experimentation, outperforming state-of-the-art alternatives across a variety of synthetic and real-world datasets.
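For intuition only, here is a minimal, self-contained sketch of the kind of code such a pipeline could emit: forward-mode AD through a sparse matrix-vector product in CSR format, where the tangent computation reuses the primal's sparsity pattern instead of densifying. This is not the paper's actual pipeline or API; the `CSR` struct and `spmv_jvp` function are illustrative names assumed for this example.

```cpp
// Sketch (assumed names, not the paper's code): forward-mode AD through
// a CSR sparse matrix-vector product. The tangent pass follows the same
// nonzero pattern as the primal, so no dense intermediates are created.
#include <cstdio>
#include <vector>

struct CSR {
    std::vector<int>    row_ptr;  // size nrows+1
    std::vector<int>    col_idx;  // column index of each nonzero
    std::vector<double> val;      // nonzero values
    std::vector<double> dval;     // tangents of the nonzero values
};

// Computes y = A*x and dy = dA*x + A*dx in one pass over the nonzeros.
void spmv_jvp(const CSR& A,
              const std::vector<double>& x, const std::vector<double>& dx,
              std::vector<double>& y,       std::vector<double>& dy) {
    int nrows = (int)A.row_ptr.size() - 1;
    y.assign(nrows, 0.0);
    dy.assign(nrows, 0.0);
    for (int i = 0; i < nrows; ++i) {
        for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k) {
            int j = A.col_idx[k];
            y[i]  += A.val[k]  * x[j];
            // Product rule, restricted to the nonzero pattern of A.
            dy[i] += A.dval[k] * x[j] + A.val[k] * dx[j];
        }
    }
}

int main() {
    // 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form.
    CSR A{{0, 2, 3}, {0, 2, 1}, {1.0, 2.0, 3.0}, {0.0, 0.0, 0.0}};
    A.dval[0] = 1.0;  // perturb the nonzero at A[0][0]
    std::vector<double> x{4.0, 5.0, 6.0}, dx{0.0, 0.0, 0.0};
    std::vector<double> y, dy;
    spmv_jvp(A, x, dx, y, dy);
    // y = [16, 15]; dy = [4, 0], i.e. d y[0] / d A[0][0] = x[0].
    std::printf("y = [%g, %g], dy = [%g, %g]\n", y[0], y[1], dy[0], dy[1]);
}
```

The point of the sketch is that the derivative code inherits the primal's loop structure and memory traffic; a dense-tensor AD framework would instead materialize full Jacobian-sized intermediates here.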
Mon 4 Mar (times shown in the London time zone)
10:00 - 11:00 | Session: Compilers for machine learning (Main Conference, room Tinto)
Chair(s): Fabrice Rastello (University Grenoble Alpes - Inria - CNRS - Grenoble INP - LIG)

10:00 (20m) Talk | A Tensor Algebra Compiler for Sparse Differentiation
  Amir Shaikhha (University of Edinburgh), Mathieu Huot (University of Oxford), Shideh Hashemian (University of Edinburgh)

10:20 (20m) Talk | Energy-Aware Tile Size Selection for Affine Programs on GPUs
  Malith Jayaweera (Northeastern University), Martin Kong (Ohio State University), Yanzhi Wang (Northeastern University), David Kaeli (Northeastern University) | Pre-print

10:40 (20m) Talk | PolyTOPS: Reconfigurable and Flexible Polyhedral Scheduler
  Gianpietro Consolaro (Huawei Technologies; Mines Paris-PSL), Zhen Zhang (Huawei Technologies), Harenome Razanajato (Huawei Technologies), Nelson Lossing (Huawei Technologies), Nassim Tchoulak (Huawei Technologies), Adilla Susungi (Huawei Technologies), Artur Cesar Araujo Alves (Huawei Technologies), Renwei Zhang (Huawei Technologies), Denis Barthou (Huawei Technologies), Corinne Ancourt (Mines Paris-PSL), Cédric Bastoul (Huawei Technologies) | Pre-print