The garbage collector (GC) is a crucial component of language runtimes, offering correctness guarantees and high productivity in exchange for a run-time overhead. Concurrent collectors run alongside application threads (mutators) and share CPU resources. A likely point of contention between mutators and GC threads and, consequently, a potential overhead source is the shared last-level cache (LLC).

This work builds on the hypothesis that the cache pollution caused by concurrent GCs hurts application performance. We validate this hypothesis with a cache-sensitive Java micro-benchmark. We find that concurrent GC activity may slow down the application by up to $3\times$ and increase the LLC misses by 3 orders of magnitude. However, when we extend our analysis to a suite of benchmarks representative of today's server workloads (Renaissance), we find that only 5 out of 23 benchmarks show a statistically significant correlation between GC-induced cache pollution and performance. Even for these, the performance overhead of GC does not exceed $10\%$. Based on further analysis, we conclude that the lower impact of the GC on the performance of Renaissance benchmarks is due to their lack of sensitivity to LLC capacity.
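The kind of cache-sensitive micro-benchmark the abstract describes can be sketched as follows. This is an illustrative sketch only, not the authors' actual benchmark: the class name, working-set size, and allocation rate are our own assumptions. A mutator thread repeatedly scans a working set sized near a typical LLC, while a second thread allocates short-lived garbage to keep a concurrent collector busy and pollute the shared cache.

```java
// Illustrative sketch (not the paper's benchmark): a mutator whose working
// set is LLC-sized, timed with and without a concurrent garbage-producing
// thread. All sizes and names here are assumptions for demonstration.
import java.util.concurrent.atomic.AtomicBoolean;

public class CacheSensitiveBench {
    // ~16 MiB working set, chosen to be close to a typical LLC capacity.
    static final long[] data = new long[(16 << 20) / 8];

    // Mutator: strided pass over the working set, one cache line per access.
    static long traverse(int passes) {
        long sum = 0;
        for (int p = 0; p < passes; p++)
            for (int i = 0; i < data.length; i += 8)
                sum += data[i];
        return sum;
    }

    public static void main(String[] args) throws Exception {
        AtomicBoolean stop = new AtomicBoolean(false);
        // Garbage producer: short-lived allocations that drive GC cycles.
        Thread churn = new Thread(() -> {
            while (!stop.get()) {
                byte[] junk = new byte[64 * 1024];
                junk[0] = 1; // touch the allocation so it is not optimized away
            }
        });

        long t0 = System.nanoTime();
        long base = traverse(20);          // baseline: no GC churn
        long t1 = System.nanoTime();
        churn.start();
        long noisy = traverse(20);         // same work under GC pressure
        long t2 = System.nanoTime();
        stop.set(true);
        churn.join();

        System.out.printf("baseline %.1f ms, with churn %.1f ms%n",
                (t1 - t0) / 1e6, (t2 - t1) / 1e6);
        if (base != noisy) throw new AssertionError("checksums differ");
    }
}
```

Running such a loop under a concurrent collector (e.g. with `-XX:+UseG1GC`) and comparing hardware LLC-miss counters between the two phases is the kind of measurement the paper's methodology relies on; the slowdown observed in the churn phase comes from both stolen CPU cycles and evicted cache lines.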

Sun 18 Jun

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:20
ISMM: Session 4 - Allocations and Garbage Collection (ISMM 2023) at Magnolia 22
Chair(s): Tony Hosking Australian National University


14:00
20m
Talk
Concurrent GCs and Modern Java Workloads: A Cache Perspective (Best Paper Award)
ISMM 2023
Maria Carpen-Amarie Huawei Zurich Research Center, Switzerland, Georgios Vavouliotis Huawei Zurich Research Center, Switzerland, Konstantinos Tovletoglou Huawei Zurich Research Center, Switzerland, Boris Grot University of Edinburgh, UK, Rene Mueller Huawei Zurich Research Center, Switzerland
DOI
14:20
20m
Talk
Wait-Free Weak Reference Counting
ISMM 2023
Matthew J. Parkinson Azure Research, Microsoft, UK, Sylvan Clebsch Azure Research, Ben Simner
DOI
14:40
20m
Talk
NUMAlloc: A Faster NUMA Memory Allocator
ISMM 2023
Hanmei Yang University of Massachusetts Amherst, Xin Zhao University of Massachusetts Amherst, Jin Zhou University of Massachusetts Amherst, Wei Wang University of Texas at San Antonio, USA, Sandip Kundu University of Massachusetts Amherst, Bo Wu Colorado School of Mines, Hui Guan University of Massachusetts Amherst, Tongping Liu University of Massachusetts Amherst
DOI
15:00
20m
Talk
Picking a CHERI Allocator: Security and Performance Considerations
ISMM 2023
Jacob Bramley Arm, Dejice Jacob University of Glasgow, UK, Andrei Lascu King's College London, Jeremy Singer University of Glasgow, Laurence Tratt King's College London
DOI Pre-print