DLS 2020
Sun 15 - Fri 20 November 2020 Online Conference
co-located with SPLASH 2020
Wed 18 Nov 2020 18:00 - 18:20 at SPLASH-III - 5 Chair(s): Patrick Cousot, Sukyoung Ryu
Thu 19 Nov 2020 06:00 - 06:20 at SPLASH-III - 5 Chair(s): Xavier Rival, Sukyoung Ryu

Execution times can be reduced by offloading parallel loop nests to a GPU. Auto-parallelizing compilers are common for static languages, and often use a cost model to determine when GPU execution speed will outweigh the offload overheads. Scientific software, however, is increasingly written in dynamic languages, and it too would benefit from compute accelerators. The ALPyNA framework analyses moderately complex Python loop nests and automatically JIT-compiles code for heterogeneous CPU and GPU architectures.
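For illustration, the code below is a hypothetical example (ours, not taken from the paper) of the kind of independently parallel Python loop nest such a framework might analyse and offload; note that its iteration space size and element types only become known at runtime.

import numpy as np

def stencil(a, b):
    # Each (i, j) iteration is independent, so the nest can run in
    # parallel on a GPU once its bounds and element types are known.
    for i in range(1, a.shape[0] - 1):
        for j in range(1, a.shape[1] - 1):
            b[i, j] = 0.25 * (a[i - 1, j] + a[i + 1, j] +
                              a[i, j - 1] + a[i, j + 1])

a = np.random.rand(2048, 2048)
b = np.zeros_like(a)
stencil(a, b)  # a framework like ALPyNA decides at runtime where this runs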

We present the first analytical cost model for auto-parallelizing loop nests in a dynamic language on heterogeneous architectures. Predicting execution time in a language like Python is extremely challenging, since aspects like element types, the size of the iteration space, and amenability to parallelization can only be determined at runtime. Hence the cost model must be both staged, to combine compile-time and runtime information, and lightweight, to minimize runtime overhead. GPU execution time prediction must also account for factors like data transfer, block-structured execution, and starvation.
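As a rough sketch of what a staged offload decision could look like, consider the minimal illustration below; the constants and names are our own assumptions, not ALPyNA's actual model.

import math

PCIE_BANDWIDTH = 12e9   # bytes/s: assumed host-to-device transfer bandwidth
KERNEL_LAUNCH  = 10e-6  # s: assumed fixed kernel launch overhead
GPU_THREADS    = 2048   # assumed number of concurrently resident GPU threads

def should_offload(n_iters, bytes_moved, cpu_time_per_iter, gpu_time_per_iter):
    # Per-iteration costs come from compile-time analysis; the iteration
    # count and data volume are only known at runtime, so the model is
    # evaluated just before the loop nest executes.
    cpu_time = n_iters * cpu_time_per_iter
    transfer = bytes_moved / PCIE_BANDWIDTH
    # Block-structured execution: iterations run in waves of GPU_THREADS
    # threads; a partly filled final wave models starvation (idle cores).
    waves = math.ceil(n_iters / GPU_THREADS)
    gpu_time = KERNEL_LAUNCH + transfer + waves * gpu_time_per_iter
    return gpu_time < cpu_time

Offloading pays off only when the transfer and launch overheads are amortized over enough parallel iterations, which is the trade-off such a model prices.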

We show that a comparatively simple, staged analytical model can accurately determine during execution when it is profitable to offload a loop nest. We evaluate our model on three heterogeneous platforms across 360 experiments with 12 loop-intensive Python benchmark programs. The results show small misprediction intervals and a mean slowdown of just 13.6%, relative to the optimal (oracular) offload strategy.

Wed 18 Nov

Displayed time zone: Central Time (US & Canada)

17:00 - 18:20
Session 5: DLS 2020 / SAS at SPLASH-III (+12h mirror)
Chair(s): Patrick Cousot New York University, Sukyoung Ryu KAIST
17:00
20m
Research paper
Abstract Neural Networks
SAS
Matthew Sotoudeh University of California, Davis, Aditya V. Thakur University of California, Davis
17:20
20m
Talk
Amalgamating Different JIT Compilations in a Meta-tracing JIT Compiler Framework
DLS 2020
Yusuke Izawa Tokyo Institute of Technology, Hidehiko Masuhara Tokyo Institute of Technology
17:40
20m
Research paper
Probabilistic Lipschitz Analysis of Neural Networks [Artifact]
SAS
Ravi Mangal Georgia Institute of Technology, Kartik Sarangmath Georgia Institute of Technology, Aditya Nori, Alessandro Orso Georgia Institute of Technology
18:00
20m
Talk
Pricing Python Parallelism: A Dynamic Language Cost Model for Heterogeneous Platforms
DLS 2020
Dejice Jacob University of Glasgow, UK, Phil Trinder University of Glasgow, Jeremy Singer University of Glasgow

Thu 19 Nov

Displayed time zone: Central Time (US & Canada)

05:00 - 06:20
Session 5: SAS / DLS 2020 at SPLASH-III
Chair(s): Xavier Rival INRIA/CNRS/ENS Paris, Sukyoung Ryu KAIST
05:00
20m
Research paper
Abstract Neural Networks
SAS
Matthew Sotoudeh University of California, Davis, Aditya V. Thakur University of California, Davis
05:20
20m
Talk
Amalgamating Different JIT Compilations in a Meta-tracing JIT Compiler Framework
DLS 2020
Yusuke Izawa Tokyo Institute of Technology, Hidehiko Masuhara Tokyo Institute of Technology
05:40
20m
Research paper
Probabilistic Lipschitz Analysis of Neural Networks [Artifact]
SAS
Ravi Mangal Georgia Institute of Technology, Kartik Sarangmath Georgia Institute of Technology, Aditya Nori, Alessandro Orso Georgia Institute of Technology
06:00
20m
Talk
Pricing Python Parallelism: A Dynamic Language Cost Model for Heterogeneous Platforms
DLS 2020
Dejice Jacob University of Glasgow, UK, Phil Trinder University of Glasgow, Jeremy Singer University of Glasgow