Sparse linear algebra is central to many scientific codes, yet compilers fail to optimize it well. High-performance acceleration libraries are available, but adoption costs are significant. Furthermore, libraries tie programs into vendor-specific software and hardware ecosystems, creating non-portable code.
In this paper, we develop a new approach based on our specification language LiLAC (Language for implementers of Linear Algebra Computations). Rather than requiring the application developer to (re)write every program for a given library, the burden is shifted to a one-off description by the library implementer. Using this description, the LiLAC-enabled compiler then inserts appropriate library routines automatically, without source code changes.
LiLAC provides automatic data marshaling, maintaining state between calls as needed and minimizing data transfers. Appropriate places for library replacement are detected at the compiler's intermediate-representation level, independently of the source language.
We evaluate on legacy large-scale scientific applications written in FORTRAN; standard benchmarks written in C/C++ and FORTRAN; and C++ graph analytics kernels. Across heterogeneous platforms, applications, and data sets, we show performance improvements of 1.1x to over 10x without any user intervention.
Sat 22 Feb (times in Pacific Time, US & Canada)
10:30 - 12:00
- Bitwidth Customization in Image Processing Pipelines using Interval Analysis and SMT Solvers
- Is Stateful Packrat Parsing Really Linear in Practice? -- A Counter-Example, An Improved Grammar and Its Parsing Algorithms --
- Automatically Harnessing Sparse Acceleration
- Compiling First-order Functions to Session-Typed Parallel Code