Hardware-Software Co-optimization of Memory Management in Dynamic Languages
Dynamic programming languages are becoming increasingly popular, yet they often run significantly slower than static languages. In this paper, we study the performance overhead of automatic memory management in dynamic languages. We propose to improve the performance and memory bandwidth usage of dynamic languages by co-optimizing garbage collection overhead and cache performance for newly initialized and dead objects. Our study shows that less frequent garbage collection results in a large number of cache misses for the initial stores to new objects. We solve this problem by placing uninitialized objects directly into on-chip caches, avoiding off-chip memory accesses. We further optimize garbage collection by reducing unnecessary cache pollution and write-backs through partial tracing, which invalidates dead objects between full garbage collections. Experimental results on PyPy and V8 show that less frequent garbage collection, combined with our optimizations, can significantly improve the performance of dynamic languages.
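The cache effect described above can be illustrated with a toy simulation (a sketch, not the paper's methodology; all parameters and names are hypothetical): a bump-pointer nursery whose contents are reclaimed wholesale at each minor collection. When the nursery fits in the cache, frequent collection means the same cache lines are reused and the initial store to each new object hits; when the nursery is much larger than the cache (i.e., collection is infrequent), nearly every initial store misses and goes off-chip.

```python
from collections import OrderedDict

LINE = 64  # bytes per cache line (assumed)

def simulate(nursery_bytes, cache_lines, n_allocs, obj_bytes=64):
    """Count cache misses for the initial store to each new object.

    Toy model: bump-pointer allocation over a nursery; when the nursery
    fills, a minor GC reclaims everything (all young objects assumed
    dead) and the bump pointer resets, so the same memory is reused.
    """
    cache = OrderedDict()  # LRU set of cached line addresses
    bump, misses = 0, 0
    for _ in range(n_allocs):
        if bump + obj_bytes > nursery_bytes:  # nursery full -> minor GC
            bump = 0
        line = bump // LINE
        if line in cache:
            cache.move_to_end(line)           # hit: refresh LRU position
        else:
            misses += 1                       # initial store fetches from off-chip
            cache[line] = None
            if len(cache) > cache_lines:
                cache.popitem(last=False)     # evict least-recently-used line
        bump += obj_bytes
    return misses

# 32 KiB cache (512 lines). A 16 KiB nursery fits in the cache, so lines
# are reused across frequent GCs; a 1 MiB nursery (infrequent GC) does
# not fit, so almost every initial store misses.
small = simulate(nursery_bytes=16 * 1024, cache_lines=512, n_allocs=100_000)
large = simulate(nursery_bytes=1024 * 1024, cache_lines=512, n_allocs=100_000)
print(small, large)  # 256 misses vs. 100000 misses
```

This is the trade-off the paper targets: shrinking the heap to regain cache locality raises GC frequency, while the proposed hardware support (installing uninitialized lines directly in cache) aims to get the locality benefit without collecting more often.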
Mon 18 Jun (GMT-04:00) Eastern Time (US & Canada)