ISMM 2021
Tue 22 Jun 2021
Co-located with PLDI 2021

Modern C++ server workloads rely on 2 MB huge pages to improve memory system performance via higher TLB hit rates with larger address space coverage. Huge pages have traditionally been supported at the kernel level, but recent work shows user-level, huge page-aware memory allocators can achieve higher huge page coverage and thus performance. These memory allocators deal with a trade-off: 1) allocate memory from the operating system (OS) at the granularity of a huge page, achieve high performance, but potentially waste memory due to fragmentation, or 2) limit fragmentation by breaking up huge pages into smaller 4 KB pages and returning them to the OS, but reduce performance due to lower huge page coverage.

For example, TCMalloc’s huge page-aware memory allocation handles this trade-off by releasing memory to the operating system at a configurable release rate, breaking up huge pages as necessary. This approach balances performance against fragmentation well for machines running a single workload. For multiple applications on the same machine, however, the reduction in memory usage benefits overall performance only if another application uses the freed memory. In warehouse-scale computers, when an application releases memory and then quickly requires the same amount or more, but no other application uses the memory in the meantime, the release causes poorer huge page coverage without any system-wide benefit.

We introduce an adaptive release policy that dynamically determines whether to break up huge pages and return them to the OS, optimizing system-wide performance. We built this policy into TCMalloc, a state-of-the-art memory allocator that uses huge pages. We deployed this strategy fleet-wide in warehouse-scale datacenters, delivering significant performance improvements at negligible real memory overhead, including a 1% fleet-wide throughput improvement.
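The core idea described above can be illustrated with a small sketch: track the allocator's recent demand, and only release free pages in excess of the recent demand peak, since pages below that peak were needed recently and releasing them would sacrifice huge page coverage for no system-wide benefit. This is a minimal illustrative sketch assuming a simple epoch-based demand history; the class, method names, and windowing scheme are hypothetical and are not TCMalloc's actual implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>

// Hypothetical demand-tracking subrelease policy sketch. At each epoch the
// allocator records its current demand (pages in use); when deciding how
// much free memory to return to the OS, it skips any pages that fall below
// the peak demand observed in the recent window.
class SubreleasePolicy {
 public:
  explicit SubreleasePolicy(std::size_t window_epochs)
      : window_epochs_(window_epochs) {}

  // Record current demand (pages in use) once per epoch.
  void RecordDemand(std::size_t pages_in_use) {
    history_.push_back(pages_in_use);
    if (history_.size() > window_epochs_) history_.pop_front();
  }

  // Pages that may be returned to the OS: only those above the recent
  // demand peak. Everything below the peak was needed recently, so
  // breaking up those huge pages would likely hurt TLB coverage soon.
  std::size_t PagesToRelease(std::size_t free_pages,
                             std::size_t pages_in_use) const {
    std::size_t peak = pages_in_use;
    for (std::size_t d : history_) peak = std::max(peak, d);
    const std::size_t total = pages_in_use + free_pages;
    return total > peak ? total - peak : 0;
  }

 private:
  std::size_t window_epochs_;        // how many epochs of history to keep
  std::deque<std::size_t> history_;  // recent demand samples
};
```

For example, if demand recently peaked at 100 pages and the allocator currently holds 60 pages in use plus 50 free, only 10 pages (the excess over the peak) would be released; after a higher peak of 120, nothing would be released. A fixed-release-rate policy, by contrast, would keep breaking up huge pages regardless of whether the memory is about to be reused.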


Displayed time zone: Eastern Time (US & Canada)

13:30 - 16:15
Session 2: Paging/Structuring & Session 3: Allocating/Copying
Chair(s): Doug Lea (State University of New York (SUNY) Oswego), Benjamin Zorn (Microsoft Research)
13:30 (30m) Talk: Radiant: Efficient Page Table Management for Tiered Memory Systems
Sandeep Kumar (Intel Labs), Aravinda Prasad (Intel Labs), Smruti Ranjan Sarangi (IIT Delhi), Sreenivas Subramoney (Intel Labs)
14:00 (30m) Talk: Compendia: Reducing Virtual-Memory Costs via Selective Densification
Sam Ainsworth (University of Edinburgh, UK), Timothy M. Jones (University of Cambridge, UK)
14:30 (45m) Meeting: ISMM Business Meeting
Tobias Wrigstad (Uppsala University, Sweden)
15:15 (30m) Talk: Adaptive Huge-Page Subrelease for Non-Moving Memory Allocators in Warehouse-Scale Computers
Martin Maas (Google Research), Chris Kennelly (Google), Khanh Nguyen (Texas A&M University), Darryl Gove (Google), Kathryn S. McKinley (Google), Paul Turner (Google)
15:45 (30m) Talk: automemcpy: A Framework for Automatic Generation of Fundamental Memory Operations
Guillaume Chatelet (Google Research), Chris Kennelly (Google), Sam Xi (Google), Ondrej Sykora (Google Research), Clement Courbet (Google Research), David Li (Google), Bruno De Backer (Google Research)