A 1.6-mW Sparse Deep Learning Accelerator for Speech Separation

Chih-Chyau Yang, Tian-Sheuan Chang

Research output: Contribution to journal › Article › peer-review


Low-power deep learning accelerators (DLAs) for speech processing enable real-time applications on edge devices. However, most existing accelerators suffer from high power consumption and target image applications only. This article presents a low-power accelerator for speech separation through algorithm and hardware optimizations. At the algorithm level, the model is compressed with sensitivity-based structured pruning as well as unstructured pruning, and further quantized from the 32-bit floating-point format to a shifted 8-bit floating-point format. Computations on zero kernel and zero activation values are skipped by decomposing the dilated and transposed convolutions. At the hardware level, the compressed model is supported by an architecture with eight independent multipliers and accumulators (MACs) and simple zero-skipping hardware that exploits activation sparsity for low-power processing. The proposed approach reduces the model size by 95.44% and the computation complexity by 93.88%. The final implementation in a TSMC 40-nm process achieves real-time speech separation and consumes 1.6 mW of power when operated at 150 MHz. The normalized energy efficiency and area efficiency are 2.344 TOPS/W and 14.42 GOPS/mm², respectively.
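The zero-skipping idea described above can be illustrated with a minimal software sketch. This is not the paper's hardware or model; the function name, shapes, and the skipped-MAC counter are illustrative assumptions, showing only how zero kernel weights and zero activations let a dilated 1-D convolution avoid issuing multiply-accumulate operations.

```python
def sparse_dilated_conv1d(x, w, dilation=1):
    """Dilated 1-D convolution (valid padding) that skips zero operands.

    Illustrative sketch only: a real accelerator would gate the MAC units
    in hardware; here we just count the MACs avoided thanks to sparsity.
    """
    k = len(w)
    span = (k - 1) * dilation + 1  # receptive field of the dilated kernel
    out = []
    skipped = 0  # MACs avoided because one operand was zero
    for i in range(len(x) - span + 1):
        acc = 0.0
        for j in range(k):
            a = x[i + j * dilation]
            if a == 0.0 or w[j] == 0.0:  # zero-skipping: no MAC issued
                skipped += 1
                continue
            acc += a * w[j]
        out.append(acc)
    return out, skipped
```

With a pruned kernel and a sparse activation stream, the `skipped` count grows with sparsity, which is the source of the power savings the abstract attributes to the zero-skipping hardware.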

Original language: English
Pages (from-to): 1-10
Number of pages: 10
Journal: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
State: Accepted/In press - 2023


  • Computational modeling
  • Convolution
  • Decoding
  • Deep learning
  • Deep learning accelerator (DLA)
  • Hardware acceleration
  • low power
  • model compression
  • model decomposition
  • Particle separators
  • Quantization (signal)
  • time-domain speech separation


