A Low-Power Streaming Speech Enhancement Accelerator for Edge Devices

Ci-Hao Wu, Tian-Sheuan Chang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Transformer-based speech enhancement models yield impressive results. However, their heterogeneous and complex structure restricts model compression potential, resulting in greater complexity and reduced hardware efficiency. Additionally, these models are not tailored for streaming and low-power applications. To address these challenges, this paper proposes a low-power streaming speech enhancement accelerator through joint model and hardware optimization. The proposed high-performance model is optimized for hardware execution by co-designing model compression with the target application, reducing model size by 93.9% through the proposed domain-aware and streaming-aware pruning techniques. Latency is further reduced with batch-normalization-based transformers. In addition, we employ softmax-free attention, complemented by an extra batch normalization, which enables a simpler hardware design. The tailored hardware accommodates these diverse computing patterns by breaking them down into element-wise multiply-accumulate (MAC) operations, executed on a 1-D processing array with configurable SRAM addressing, thereby minimizing hardware complexity and simplifying zero skipping. Implemented in the TSMC 40 nm CMOS process, the final design requires merely 207.8K gates and 53.75 KB of SRAM, and consumes only 8.08 mW for real-time inference at 62.5 MHz.
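As a rough illustration of the softmax-free attention described in the abstract, the PyTorch sketch below uses raw scaled dot-product scores directly (no softmax) and applies an extra batch normalization to the attention output. The module structure, layer names, and the exact placement of the normalization are assumptions made for illustration, not the paper's verified architecture.

```python
import torch
import torch.nn as nn


class SoftmaxFreeAttention(nn.Module):
    """Illustrative single-head attention without softmax.

    The softmax over attention scores is dropped, and an extra
    BatchNorm on the output stands in for its normalizing role.
    This sketch assumes a (batch, time, dim) streaming layout;
    all names here are hypothetical.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        # Extra batch normalization over the feature dimension.
        self.bn = nn.BatchNorm1d(dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) -- frames along the time axis.
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Raw scaled dot-product scores, used as-is (no softmax).
        scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        out = torch.matmul(scores, v)
        # BatchNorm1d expects (batch, channels, time).
        return self.bn(out.transpose(1, 2)).transpose(1, 2)


if __name__ == "__main__":
    attn = SoftmaxFreeAttention(dim=64)
    frames = torch.randn(8, 100, 64)   # 8 utterances, 100 frames
    print(attn(frames).shape)          # torch.Size([8, 100, 64])
```

Removing the softmax eliminates the exponential and running-sum normalization from the datapath, which is consistent with the abstract's point that the remaining operations reduce to the same element-wise MAC pattern the 1-D processing array is built around.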

Original language: English
Pages (from-to): 128-140
Number of pages: 13
Journal: IEEE Open Journal of Circuits and Systems
Volume: 5
State: Published - 2024

Keywords

  • Speech enhancement
  • hardware implementation
  • low power
  • transformer
