Towards Memory-Efficient Processing-in-Memory Architecture for Convolutional Neural Networks
Convolutional neural networks (CNNs) are widely adopted in artificial intelligence systems. In contrast to conventional computing-centric applications, the computational and memory resources of CNN applications are mixed together in the network weights. This incurs a significant amount of data movement, especially for high-dimensional convolutions. Although the recent embedded 3D-stacked Processing-in-Memory (PIM) architecture alleviates this memory bottleneck and provides fast near-data processing, memory remains a limiting factor of the entire system. An unsolved key challenge is how to efficiently allocate convolutions to 3D-stacked PIM to combine the advantages of both neural and computational processing.
This paper presents Memolution, a compiler-based, memory-efficient data allocation strategy for convolutional neural networks on PIM architectures. Memolution offers thread-level parallelism that fully exploits the computational power of the PIM architecture. The objective is to capture the characteristics of neural network applications and to present a hardware-independent design that transparently allocates CNN applications onto the underlying hardware resources provided by PIM. We demonstrate the viability of the proposed technique using a variety of realistic convolutional neural network applications. Our extensive evaluations show that Memolution significantly improves performance and cache utilization compared to the baseline scheme.
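The abstract does not specify Memolution's allocation algorithm; purely as a hypothetical illustration of the general idea of thread-level parallelism for convolutions on PIM, the sketch below partitions a convolution's output filters across a number of PIM vaults so that each vault's thread computes a disjoint slice of the output feature maps. All function names, the naive convolution, and the filter-wise partitioning policy are assumptions for illustration, not the paper's actual scheme.

```python
import numpy as np

def conv2d(x, w):
    """Naive valid 2D convolution (illustrative, not the paper's kernel).
    x: input of shape (C, H, W); w: filters of shape (K, C, R, S).
    Returns output of shape (K, H-R+1, W-S+1)."""
    C, H, W = x.shape
    K, _, R, S = w.shape
    out = np.zeros((K, H - R + 1, W - S + 1))
    for k in range(K):
        for i in range(H - R + 1):
            for j in range(W - S + 1):
                out[k, i, j] = np.sum(x[:, i:i + R, j:j + S] * w[k])
    return out

def allocate_to_vaults(num_filters, num_vaults):
    # Hypothetical policy: split the K output filters into contiguous,
    # near-equal groups, one group per PIM vault (thread).
    return np.array_split(np.arange(num_filters), num_vaults)

def pim_conv(x, w, num_vaults):
    # Each "vault" computes only its own filter slice near its local
    # memory bank; results are concatenated along the filter axis.
    parts = allocate_to_vaults(w.shape[0], num_vaults)
    slices = [conv2d(x, w[idx]) for idx in parts if len(idx) > 0]
    return np.concatenate(slices, axis=0)
```

Because the filter groups are disjoint, the per-vault results concatenate to exactly the single-threaded output, which is why such an output-stationary partitioning avoids inter-vault communication during the convolution itself.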
Thu 22 Jun

|10:30 - 10:55|
|10:55 - 11:20|
|11:20 - 11:45| A Lightweight Progress Maximization Scheduler for Non-Volatile Processor Under Unstable Energy Harvesting
|11:45 - 12:10|