Optimized Two-Level Parallelization for GPU Accelerators using the Polyhedral Model
Jun Shirako, Akihiro Hayashi, and Vivek Sarkar (Rice University, USA)
While GPUs play an increasingly important role in today's
high-performance computers, optimizing GPU performance continues to
place a substantial burden on programmers. A major challenge in
optimizing code for GPUs stems from the two levels of hardware
parallelism, blocks and threads; each level has significantly
different characteristics and requires different optimization
strategies.
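
To make the distinction concrete, the following minimal CUDA sketch (ours, not taken from the paper) launches a single kernel across both levels: blocks provide coarse-grained parallelism and are scheduled independently across multiprocessors, while the threads within each block provide fine-grained parallelism with access to shared memory and barrier synchronization.

    // A minimal sketch (ours, not from the paper) of CUDA's two levels
    // of hardware parallelism.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(const float *in, float *out, int n, float a) {
        // Thread level: fine-grained; threads within a block can share
        // on-chip memory and synchronize with __syncthreads().
        // Block level: coarse-grained; blocks are scheduled independently
        // across multiprocessors and cannot synchronize with one another.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = a * in[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *h_in = new float[n], *h_out = new float[n];
        for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

        float *d_in, *d_out;
        cudaMalloc(&d_in, bytes);
        cudaMalloc(&d_out, bytes);
        cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

        // One launch, two levels: a grid of blocks (coarse-grained),
        // each a block of threads (fine-grained).
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale<<<blocks, threads>>>(d_in, d_out, n, 2.0f);
        cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

        printf("out[0] = %f\n", h_out[0]);  // expected: 2.000000
        cudaFree(d_in); cudaFree(d_out);
        delete[] h_in; delete[] h_out;
        return 0;
    }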
In this paper, we propose a novel compiler optimization algorithm for
GPU parallelism. Our approach is based on the polyhedral model, which
has enabled significant advances in program analysis and
transformation compared to traditional AST-based frameworks. We
extend polyhedral schedules to enable two-level parallelization
through the idea of superposition, which integrates separate
schedules for block-level and thread-level parallelism. Our
experimental results show that the proposed optimization framework
delivers geometric-mean speedups of 1.8$\times$ and 2.1$\times$ on
NVIDIA Tesla M2050 and K80 GPUs, respectively, over PPCG, a
state-of-the-art polyhedral parallel code generator for GPGPUs.
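
To give a flavor of the kind of two-level mapping such a framework targets, the following hand-written CUDA sketch tiles a matrix transpose so that tile loops map to blocks and intra-tile loops map to threads. The kernel, its name, and the tile size TS are illustrative assumptions on our part; this is not the paper's superposition algorithm or PPCG's generated code.

    // Hypothetical sketch of a two-level mapping: tile loops map to
    // blocks, intra-tile loops map to threads. The tile size TS and the
    // shared-memory staging are illustrative assumptions, not the
    // paper's superposition algorithm.
    #define TS 16

    __global__ void transpose_tiled(const float *a, float *b, int n) {
        __shared__ float tile[TS][TS];  // thread-level locality in shared memory

        // Block level: each block owns one TS x TS tile (coarse-grained).
        int bx = blockIdx.x * TS, by = blockIdx.y * TS;

        // Thread level: threads cooperatively stage the tile (fine-grained).
        int tx = threadIdx.x, ty = threadIdx.y;
        if (by + ty < n && bx + tx < n)
            tile[ty][tx] = a[(by + ty) * n + (bx + tx)];
        __syncthreads();  // legal only within a block, not across blocks

        // Write the transposed tile back with coalesced accesses.
        if (bx + ty < n && by + tx < n)
            b[(bx + ty) * n + (by + tx)] = tile[tx][ty];
    }

A launch such as dim3 grid((n + TS - 1) / TS, (n + TS - 1) / TS), block(TS, TS); covers the full matrix. The shared-memory staging reflects a thread-level optimization (locality and coalescing within a block), while the tile decomposition reflects a block-level one, illustrating why the two levels call for different strategies.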