A speech command control-based recognition system for dysarthric patients based on deep learning technology

Yu Yi Lin, Wei Zhong Zheng, Wei Chung Chu, Ji Yan Han, Ying Hsiu Hung, Guan Min Ho, Chia Yuan Chang, Ying Hui Lai*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

13 Scopus citations


Voice control is an important way of operating mobile devices; however, using it remains a challenge for dysarthric patients. Many approaches, such as automatic speech recognition (ASR) systems, are currently used to help dysarthric patients control mobile devices. However, the large computational power required by ASR systems increases implementation costs. To alleviate this problem, this study proposed a convolutional neural network (CNN) with phonetic posteriorgram (PPG) speech features to recognize speech commands, called CNN–PPG; a CNN model with Mel-frequency cepstral coefficients (CNN–MFCC) and an ASR-based system were used for comparison. The experimental results show that the CNN–PPG system achieved 93.49% accuracy, outperforming the CNN–MFCC (65.67%) and ASR-based (89.59%) systems. Additionally, the CNN–PPG used a smaller model, with only 54% of the parameters of the ASR-based system; hence, the proposed system could reduce implementation costs for users. These findings suggest that the CNN–PPG system could augment a communication device to help dysarthric patients control mobile devices via speech commands in the future.
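To illustrate the approach described in the abstract, the sketch below runs a tiny 1-D CNN over a PPG-like input (a frames × phoneme-posteriors matrix) and maps it to command probabilities. This is a minimal, hypothetical illustration in plain NumPy: the layer sizes, kernel width, number of commands, and random weights are all assumptions, not the authors' actual architecture or trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not taken from the paper).
N_FRAMES, N_PHONES, N_COMMANDS = 100, 40, 19

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def conv1d(x, w, b):
    """Valid 1-D convolution along the time axis.
    x: (T, F) frames x features, w: (K, F, C) kernels, b: (C,) biases."""
    K, F, C = w.shape
    T = x.shape[0] - K + 1
    out = np.empty((T, C))
    for t in range(T):
        # Correlate a K-frame window of the posteriorgram with each kernel.
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return out

# Random weights stand in for trained parameters.
w1 = rng.normal(0.0, 0.1, (5, N_PHONES, 32)); b1 = np.zeros(32)
w2 = rng.normal(0.0, 0.1, (32, N_COMMANDS)); b2 = np.zeros(N_COMMANDS)

ppg = rng.random((N_FRAMES, N_PHONES))  # stand-in phonetic posteriorgram
h = relu(conv1d(ppg, w1, b1))           # temporal convolution over frames
pooled = h.mean(axis=0)                 # global average pooling over time
probs = softmax(pooled @ w2 + b2)       # posterior over speech commands
pred = int(np.argmax(probs))            # predicted command index
```

In this framing, the classifier's input is the phoneme posterior sequence rather than raw acoustics, which is one plausible reason a compact command classifier can remain small relative to a full ASR decoder.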

Original language: English
Article number: 2477
Journal: Applied Sciences (Switzerland)
Issue number: 6
State: Published - 2 Mar 2021


  • Deep learning
  • Dysarthric speech
  • Health care
  • Internet of Things (IoT)
  • Mobile device
  • Signal processing
  • Speech command recognition (SCR)


