A 1.6-mW Sparse Deep Learning Accelerator for Speech Separation

Chih Chyau Yang, Tian Sheuan Chang

Research output: Article › peer-review

1 citation (Scopus)


Low-power deep learning accelerators (DLAs) for speech processing enable real-time applications on edge devices. However, most existing accelerators suffer from high power consumption and target image applications only. This article presents a low-power accelerator for speech separation through algorithm and hardware optimizations. At the algorithm level, the model is compressed with sensitivity-aware structured pruning as well as unstructured pruning, and further quantized to a shifted 8-bit floating-point format instead of the 32-bit floating-point format. Computations with zero kernel and zero activation values are skipped by decomposing the dilated and transposed convolutions. At the hardware level, the compressed model is supported by an architecture with eight independent multipliers and accumulators (MACs) and simple zero-skipping hardware that exploits activation sparsity for low-power processing. The proposed approach reduces the model size by 95.44% and the computation complexity by 93.88%. The final implementation in the TSMC 40-nm process achieves real-time speech separation and consumes 1.6 mW when operated at 150 MHz. The normalized energy efficiency and area efficiency are 2.344 TOPS/W and 14.42 GOPS/mm², respectively.
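The zero-skipping idea described above can be sketched in a few lines: when either operand of a multiply-accumulate is zero, the MAC is simply not issued, saving cycles and power. This is an illustrative sketch only (not the authors' hardware or code); the function name and the list-based data are assumptions for the example.

```python
# Illustrative sketch of zero-skipping multiply-accumulate (hypothetical
# helper, not the paper's implementation): products whose activation or
# weight operand is zero are skipped, as a sparse DLA would skip them.
def sparse_mac(activations, weights):
    """Accumulate activation*weight products, skipping zero operands.

    Returns the accumulated sum and the number of skipped MACs,
    which indicates the potential cycle/power savings.
    """
    acc = 0
    skipped = 0
    for a, w in zip(activations, weights):
        if a == 0 or w == 0:   # zero-skipping: no MAC issued
            skipped += 1
            continue
        acc += a * w
    return acc, skipped

# Example: only two of six operand pairs require a real MAC.
acts = [0, 3, 0, 2, 5, 0]
wts  = [1, 0, 4, 2, 1, 7]
result, skipped = sparse_mac(acts, wts)
# result = 2*2 + 5*1 = 9, with 4 MACs skipped
```

In hardware, the same check is a cheap comparison in front of each of the eight MAC units, which is why the sparsity produced by pruning translates directly into energy savings.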

Pages (from - to): 1-10
Journal: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Publication status: Accepted/In press - 2023

