Compendia: Reducing Virtual-Memory Costs via Selective Densification
Virtual-to-physical memory translation is becoming an increasingly dominant cost in workload execution: as data sizes scale, up to four memory accesses are required per translation, and up to 24 in virtualised systems. However, the radix trees used today to hold these translations have many favorable properties, including cacheability, the ability to fit in conventional 4kiB page frames, and a sparse representation, so they are unlikely to be replaced in the near future.
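The access counts above follow from simple walk arithmetic; a minimal sketch, assuming a conventional 4-level radix table and standard nested (2D) paging, with function names that are my own illustration rather than code from the paper:

```python
# Walk-cost arithmetic behind the figures above (illustrative only).
# A native walk touches one memory location per radix level; a nested
# walk must also translate every guest page-table pointer (and the
# final guest physical address) through the host's tables.

def native_walk_accesses(levels: int) -> int:
    """One memory access per level of the guest page table."""
    return levels

def nested_walk_accesses(levels: int) -> int:
    """2D walk: (levels + 1) host walks of `levels` accesses each,
    plus the guest's own `levels` entry reads:
    (levels + 1) * (levels + 1) - 1."""
    return (levels + 1) * (levels + 1) - 1

print(native_walk_accesses(4))   # 4 accesses for a 4-level table
print(nested_walk_accesses(4))   # 24 accesses under nested paging
```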
In this paper we argue that these structures are in fact too sparse for modern workloads, making many of these overheads unnecessary. Instead, where appropriate, we expand pairs of 4kiB levels, each able to translate 9 bits of address space, into a single 2MiB level able to translate 18 bits in a single memory access. These fit in the standard huge-page allocations used by most conventional operating systems and architectures. With minor extensions to the page-table-walker structures to support these levels and aid their cacheability, we can reduce memory accesses per walk by 27%, or 56% for virtualised systems, without significant memory overhead.
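The indexing change that densification implies can be sketched as follows. This is a hypothetical illustration under the standard x86-64 layout (9 VA bits per 4kiB level, bits 47..12); the names and the choice of which level pairs to merge are my assumptions, not the paper's implementation:

```python
# A sparse 4kiB level holds 2**9 eight-byte entries and consumes 9 VA
# bits; a dense 2MiB level holds 2**18 entries and consumes 18 bits in
# one memory access. Merging adjacent level pairs halves the walk depth.

def sparse_indices(va: int) -> list[int]:
    """Four 9-bit indices for a conventional 4-level walk (bits 47..12)."""
    return [(va >> shift) & 0x1FF for shift in (39, 30, 21, 12)]

def densified_indices(va: int) -> list[int]:
    """Two 18-bit indices when both adjacent level pairs are merged:
    bits 47..30 in one access, bits 29..12 in the next."""
    return [(va >> shift) & 0x3FFFF for shift in (30, 12)]

va = 0x0000_7F12_3456_7000
print(len(sparse_indices(va)))     # 4 memory accesses
print(len(densified_indices(va)))  # 2 memory accesses
```

Each 18-bit index is just the concatenation of the two 9-bit indices it replaces, so a dense level resolves exactly the address-space slice its two sparse predecessors did.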
Tue 22 Jun (displayed time zone: Eastern Time, US & Canada)
13:30 - 16:15
Radiant: Efficient Page Table Management for Tiered Memory Systems
Sandeep Kumar (Intel Labs), Aravinda Prasad (Intel Labs), Smruti Ranjan Sarangi (IIT Delhi), Sreenivas Subramoney (Intel Labs). DOI, Pre-print
Compendia: Reducing Virtual-Memory Costs via Selective Densification
ISMM 2021. Pre-print, Media Attached
ISMM Business Meeting
Tobias Wrigstad (Uppsala University, Sweden)
Adaptive Huge-Page Subrelease for Non-Moving Memory Allocators in Warehouse-Scale Computers
automemcpy: A framework for automatic generation of fundamental memory operations
Guillaume Chatelet (Google Research), Chris Kennelly (Google), Sam Xi (Google), Ondrej Sykora (Google Research), Clement Courbet (Google Research), David Li (Google), Bruno De Backer (Google Research). DOI, Pre-print