Energy-Efficient Accelerator Design With Tile-Based Row-Independent Compressed Memory for Sparse Compressed Convolutional Neural Networks

Po-Tsang Huang, I-Chen Wu, Chin-Yang Lo, Wei Hwang

Research output: Article, peer-reviewed

3 Citations (Scopus)

Abstract

Deep convolutional neural networks (CNNs) are difficult to fully deploy on edge devices because of their memory-intensive and computation-intensive workloads. The energy efficiency of CNNs is dominated by convolution computation and off-chip memory (DRAM) accesses, especially the DRAM accesses. In this article, an energy-efficient accelerator is proposed for sparse compressed CNNs by reducing DRAM accesses and eliminating zero-operand computation. Weight compression is utilized for sparse compressed CNNs to reduce the required memory capacity/bandwidth and to remove a large portion of connections. Accordingly, a tile-based row-independent compression (TRC) method with relative indexing memory is adopted for storing non-zero terms. Additionally, workloads are distributed across channels to increase the degree of task parallelism, and all-row-to-all-row non-zero element multiplication is adopted to skip redundant computation. Compared with a dense accelerator, simulation results show that the proposed accelerator achieves a 1.79× speedup and reduces on-chip memory size, energy, and DRAM accesses by 23.51%, 69.53%, and 88.67%, respectively, for VGG-16.
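To make the relative-indexing idea concrete, the following is a minimal Python sketch of row-independent weight compression with relative (gap) indices, paired with a multiply-accumulate that skips zero operands. The function names, the 4-bit index width, and the zero-padding rule for long gaps are illustrative assumptions for this sketch, not the paper's exact TRC tile format.

```python
import numpy as np

def compress_row_relative(row, index_bits=4):
    """Compress one weight row independently: store each non-zero value with
    the gap (in elements) from the previous non-zero. Assumed padding rule:
    gaps longer than the index width can encode are split with zero entries."""
    max_gap = (1 << index_bits) - 1
    values, rel_idx = [], []
    last = -1
    for i, w in enumerate(row):
        if w == 0:
            continue
        gap = i - last - 1
        while gap > max_gap:            # insert zero-valued padding entries
            values.append(0)
            rel_idx.append(max_gap)
            gap -= max_gap + 1
        values.append(w)
        rel_idx.append(gap)
        last = i
    return values, rel_idx

def sparse_dot(values, rel_idx, activations):
    """Accumulate only stored non-zero weights, skipping zero-operand work;
    absolute positions are rebuilt from the relative indices."""
    acc, pos = 0, -1
    for w, gap in zip(values, rel_idx):
        pos += gap + 1
        if w != 0:                      # padding entries contribute nothing
            acc += w * activations[pos]
    return acc

# Usage: a sparse weight row against a dense activation vector.
row = np.array([0, 3, 0, 0, -2, 0, 0, 0, 1])
act = np.arange(len(row))
vals, idx = compress_row_relative(row)
assert sparse_dot(vals, idx, act) == int(row @ act)
```

Because each row is compressed independently, rows (or tiles of rows) can be decoded and consumed in parallel without a global index structure, which is the property the TRC scheme exploits.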
Original language: American English
Pages (from-to): 131-143
Journal: IEEE Open Journal of Circuits and Systems
Volume: 2
DOIs
Publication status: Published - Jan 2021
