Sun 18 Jun 2017 16:30 - 17:00 at Vertex WS218 - Afternoon talks 2 Chair(s): P. Sadayappan

Nowadays, GPU accelerators are widely used in areas with large data-parallel computations such as scientific computing or neural networks. Programmers can either write low-level CUDA/OpenCL code or use a GPU extension for a high-level programming language for better productivity. Most such extensions target statically-typed languages, but many programmers prefer dynamically-typed languages for their simplicity and flexibility.

This paper shows how programmers can write high-level modular code in Ikra, a Ruby extension for array-based GPU computing. Programmers can compose GPU programs of multiple reusable parallel sections, which are subsequently fused into a small number of GPU kernels. We propose a seamless syntax for separating code regions that extensively use dynamic language features from those that are compiled for efficient execution. Moreover, we propose symbolic execution and a program analysis for kernel fusion to achieve performance that is close to hand-written CUDA code.
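To illustrate the idea of composing parallel sections that are later fused, here is a minimal pure-Ruby sketch. It is not the actual Ikra API: the class `FusedArray` and its `pmap` method are hypothetical names, and the "fusion" here is simulated on the CPU by deferring chained operations and applying them in a single pass, analogous to executing one fused GPU kernel instead of one kernel per operation.

```ruby
# Illustrative sketch (hypothetical API, not Ikra's): each pmap call only
# records its block; materialization applies all recorded operations in
# one loop, mimicking kernel fusion of chained array operations.
class FusedArray
  def initialize(base, ops = [])
    @base = base
    @ops  = ops          # deferred element-wise operations
  end

  # Lazily record a parallel section; nothing is computed yet.
  def pmap(&blk)
    FusedArray.new(@base, @ops + [blk])
  end

  # Single traversal over the data, applying the whole fused pipeline.
  def to_a
    @base.map { |x| @ops.reduce(x) { |v, op| op.call(v) } }
  end
end

result = FusedArray.new([1, 2, 3])
                   .pmap { |x| x * x }
                   .pmap { |x| x + 1 }
puts result.to_a.inspect   # => [2, 5, 10]
```

The design point being sketched: because the two `pmap` sections are reusable and only composed at materialization time, a compiler (or, in Ikra's case, symbolic execution plus program analysis) can see the whole pipeline at once and emit a single fused kernel.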

Sun 18 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

16:00 - 17:30
Afternoon talks 2 (ARRAY) at Vertex WS218
Chair(s): P. Sadayappan Ohio State University
16:00
30m
Talk
Efficient Array Slicing on the Intel Xeon Phi Coprocessor
ARRAY
Benjamin Andreassen, Jan Christian, Lasse Natvig (Norwegian University of Science and Technology)
16:30
30m
Talk
Modular Array-based GPU Computing in a Dynamically-typed Language
ARRAY
Matthias Springer, Peter Wauligmann, Hidehiko Masuhara (Tokyo Institute of Technology)
17:00
30m
Talk
HPTT: A High-Performance Tensor Transposition C++ Library
ARRAY