Analyzing Memory Management Methods on Integrated CPU-GPU Systems
Heterogeneous systems that integrate a multicore CPU and a GPU on the same die are ubiquitous. On these systems, the CPU and GPU share the same physical memory rather than using separate memory dies. Although integration eliminates the need to copy data between the CPU and the GPU, arranging transparent memory sharing between the two devices can carry large overheads. Memory on integrated CPU-GPU systems is typically managed by a software framework such as OpenCL or CUDA, a runtime library, and a GPU driver. These frameworks offer a range of memory management methods that vary in ease of use, consistency guarantees, and performance. In this study, we analyze common memory management methods of the most widely used software frameworks for heterogeneous systems: CUDA, OpenCL 1.2, OpenCL 2.0, and HSA, on NVIDIA and AMD hardware. We focus on performance/functionality trade-offs, with the goal of exposing their performance impact and simplifying the choice of memory management methods for programmers.
Sun 18 Jun (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)

14:00 - 15:30 Session
14:00 (30m, Talk) Analyzing Memory Management Methods on Integrated CPU-GPU Systems, ISMM 2017
14:30 (30m, Talk) Continuous Checkpointing of HTM Transactions in NVM, ISMM 2017
15:00 (30m, Talk) RTHMS: A Tool for Data Placement on Hybrid Memory System, ISMM 2017. Ivy Bo Peng (KTH Royal Institute of Technology), Roberto Gioiosa (Pacific Northwest National Laboratory), Gokcen Kestor (Pacific Northwest National Laboratory), Stefano Markidis (KTH Royal Institute of Technology), Pietro Cicotti (San Diego Supercomputer Center), Erwin Laure (KTH Royal Institute of Technology)