Hardware-Software Co-optimization of Memory Management in Dynamic Languages
Dynamic programming languages are becoming increasingly popular, yet they often show a significant performance slowdown compared to static languages. In this paper, we study the performance overhead of automatic memory management in dynamic languages. We propose to improve the performance and memory bandwidth usage of dynamic languages by co-optimizing garbage collection overhead and cache performance for newly initialized and dead objects. Our study shows that less frequent garbage collection results in a large number of cache misses for initial stores to new objects. We solve this problem by directly placing uninitialized objects into on-chip caches without off-chip memory accesses. We further optimize garbage collection by reducing unnecessary cache pollution and write-backs through partial tracing, which invalidates dead objects between full garbage collections. Experimental results on PyPy and V8 show that less frequent garbage collection, combined with our optimizations, significantly improves the performance of dynamic languages.
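The first optimization can be pictured with a small allocator sketch. The C fragment below is a minimal illustration, not the paper's hardware mechanism: it assumes a hypothetical cache_install_line() primitive (in the spirit of PowerPC's dcbz instruction, which zeroes a cache block without fetching it from memory) to establish a new object's cache lines on chip, so that the initializing stores that follow hit in the cache instead of missing to DRAM. The nursery size and the nursery_alloc() helper are illustrative assumptions.

```c
/*
 * Minimal sketch (not the paper's implementation): a bump-pointer
 * nursery allocator that "installs" the cache lines of a newly
 * allocated object directly in the on-chip cache instead of letting
 * the first initializing stores fetch stale data from DRAM.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CACHE_LINE    64
#define NURSERY_BYTES (8u << 20)          /* illustrative 8 MiB young generation */

static uint8_t nursery[NURSERY_BYTES] __attribute__((aligned(CACHE_LINE)));
static size_t  bump;                      /* next free byte in the nursery */

/* Hypothetical primitive: claim one cache line as zero-filled in the
 * cache hierarchy without an off-chip read.  On PowerPC, dcbz provides
 * this behavior; the portable fallback simply zeroes the line with
 * ordinary stores. */
static inline void cache_install_line(void *p)
{
#if defined(__powerpc__)
    __asm__ volatile("dcbz 0,%0" :: "r"(p) : "memory");
#else
    memset(p, 0, CACHE_LINE);             /* software fallback */
#endif
}

/* Allocate `size` bytes; every line the object covers is installed in
 * the cache, so the initializing stores that follow do not miss. */
void *nursery_alloc(size_t size)
{
    size_t aligned = (size + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
    if (bump + aligned > NURSERY_BYTES)
        return NULL;                      /* caller would trigger a GC here */

    uint8_t *obj = nursery + bump;
    bump += aligned;
    for (size_t off = 0; off < aligned; off += CACHE_LINE)
        cache_install_line(obj + off);
    return obj;
}
```

The same reasoning motivates the second optimization: if dead objects' lines are invalidated between full collections, they are neither written back nor allowed to pollute the cache before being overwritten by new allocations.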
Mon 18 Jun (times shown in Eastern Time, US & Canada)

14:00 - 15:30 | ISMM 2018 session

14:00 (30m) | Talk | Hardware-Software Co-optimization of Memory Management in Dynamic Languages (ISMM 2018)
14:30 (30m) | Talk | Dynamic Vertical Memory Scalability for OpenJDK Cloud Applications (ISMM 2018) | Rodrigo Bruno (INESC-ID / Instituto Superior Técnico, University of Lisbon), Paulo Ferreira (INESC-ID / Instituto Superior Técnico, University of Lisbon), Ruslan Synytsky (Jelastic), Tetiana Fydorenchyk (Jelastic), Jia Rao (University of Texas at Arlington, USA), Hang Huang (Huazhong University of Science and Technology, China), Song Wu (Huazhong University of Science and Technology, China)
15:00 (30m) | Talk | OMR: Out-of-Core MapReduce for Large Data Sets (ISMM 2018) | Gurneet Kaur, Keval Vora (University of California, Riverside), Sai Charan Koduru (University of California, Riverside), Rajiv Gupta (UC Riverside)