CC 2025
Sat 1 - Sun 2 March 2025
Sun 2 Mar 2025 09:00 - 09:30 at Bristlecone - Machine Learning and PL II. Chair(s): Fernando Magno Quintão Pereira

Performance models are essential for automatic code optimization, enabling compilers to predict the effects of code transformations on performance and to guide the search for optimal transformations. Building state-of-the-art performance models with deep learning, however, requires vast labeled datasets of random programs, an expensive and time-consuming process that stretches over months. This paper introduces a self-supervised pre-training scheme based on autoencoders to reduce the need for labeled data. By pre-training on a large dataset of random programs, the autoencoder learns representations of code and transformations, which are then used to embed programs for the performance model. Implemented in the Tiramisu autoscheduler, our approach improves model accuracy with less data. For example, to reach a MAPE of 20.72%, the original model requires 18 million data points, whereas our method reaches a comparable MAPE of 22.44% with only 3.6 million data points, a 5x reduction in data requirements.
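
The abstract describes a two-stage recipe: pre-train an autoencoder on a large corpus of unlabeled random programs, then reuse its encoder to embed programs for a supervised performance model trained on a much smaller labeled set. The PyTorch sketch below illustrates that recipe under simplifying assumptions; the flat feature vectors, layer sizes, and names (ProgramAutoencoder, PerformanceModel, pretrain) are hypothetical stand-ins, not the Tiramisu autoscheduler's actual representations or code.

import torch
import torch.nn as nn

# Hypothetical dimensions: the paper's program and transformation
# representations are richer; flat vectors keep the sketch minimal.
FEATURE_DIM = 256   # size of the raw program+transformation encoding
LATENT_DIM = 64     # size of the learned embedding

class ProgramAutoencoder(nn.Module):
    """Autoencoder pre-trained on unlabeled random programs."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(FEATURE_DIM, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 128), nn.ReLU(),
            nn.Linear(128, FEATURE_DIM),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def pretrain(autoencoder, programs, epochs=10, lr=1e-3):
    """Self-supervised stage: reconstruct unlabeled program encodings."""
    opt = torch.optim.Adam(autoencoder.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(autoencoder(programs), programs)
        loss.backward()
        opt.step()

class PerformanceModel(nn.Module):
    """Supervised stage: predict speedup from the pre-trained embedding."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder  # reused, then fine-tuned on labeled data
        self.head = nn.Sequential(
            nn.Linear(LATENT_DIM, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.head(self.encoder(x))

if __name__ == "__main__":
    # Stand-in data: random tensors in place of real program encodings.
    unlabeled = torch.randn(1024, FEATURE_DIM)   # large unlabeled corpus
    labeled_x = torch.randn(64, FEATURE_DIM)     # small labeled subset
    labeled_y = torch.randn(64, 1)               # measured speedups

    ae = ProgramAutoencoder()
    pretrain(ae, unlabeled)

    model = PerformanceModel(ae.encoder)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(10):  # fine-tune on the small labeled set
        opt.zero_grad()
        loss = loss_fn(model(labeled_x), labeled_y)
        loss.backward()
        opt.step()

The point of the sketch is the data split: the autoencoder consumes the large unlabeled corpus, while only the small labeled subset carries measured speedups, which is what lets the approach cut labeled-data requirements.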

Sun 2 Mar

Displayed time zone: Pacific Time (US & Canada)

09:00 - 10:00
Machine Learning and PL II (Main Conference) at Bristlecone
Chair(s): Fernando Magno Quintão Pereira (Federal University of Minas Gerais)
09:00
30m
Talk
Data-efficient Performance Modeling via Pre-training
Main Conference
Chunting Liu (New York University Abu Dhabi), Riyadh Baghdadi (New York University Abu Dhabi)
09:30
30m
Talk
MimIrADe: Automatic Differentiation in a Higher-Order Sea-of-Nodes IR
Main Conference
Marcel Ullrich (Saarland University), Sebastian Hack (Saarland University, Saarland Informatics Campus), Roland Leißa (University of Mannheim, School of Business Informatics and Mathematics)