The disparity between processing and storage speeds can be bridged in part by reducing the traffic into and out of the slower memory components. Some recent studies reduce such traffic by determining dead data in cache, showing that a significant fraction of writes can be squashed before they make the trip toward slower memory. In this paper, we examine a technique for eliminating traffic in the other direction, specifically the traffic induced by dynamic storage allocation. We consider recycling dead storage in cache to satisfy a program’s storage-allocation requests.
We first evaluate the potential for recycling under favorable circumstances, where the associated logic can run at full speed with no impact on the cache’s normal behavior. We then consider a more practical implementation, in which the associated logic executes independently from the cache’s critical path. Here, the cache’s performance is unfettered by recycling, but the operations necessary to determine dead storage and recycle such storage execute as time is available. Finally, we present the design and analysis of a hardware implementation that scales well with cache size without sacrificing too much performance.
Session: Sun 14 Jun, 16:00 - 17:15 (times in the Tijuana, Baja California time zone)

16:00, 25-minute talk: Recycling Trash in Cache (Research Papers). Jonathan Shidal, Ari J. Spilo, Paul T. Scheid, and Ron K. Cytron, Washington University; Krishna M. Kavi, University of North Texas.
16:25, 25-minute talk: Reducing Pause Times With Clustered Collection (Research Papers).
16:50, 25-minute talk: The Judgment of Forseti: Economic Utility for Dynamic Heap Sizing of Multiple Runtimes (Research Papers).