TY - JOUR
T1 - Toward enriched decoding of Mandarin spontaneous speech
AU - Deng, Yu Chih
AU - Liao, Yuan Fu
AU - Wang, Yih Ru
AU - Chen, Sin Horng
N1 - Publisher Copyright:
© 2023
PY - 2023/10
Y1 - 2023/10
N2 - A deep neural network (DNN)-based automatic speech recognition (ASR) method for enriched decoding of Mandarin spontaneous speech is proposed. The method first builds a baseline system composed of a factored time delay neural network (TDNN-f) acoustic model (AM), a trigram language model (LM), and a recurrent neural network language model (RNNLM) to generate a word lattice. It then sequentially incorporates a multi-task Part-of-Speech RNNLM (POS-RNNLM), a hierarchical prosodic model (HPM), and a reduplication-word LM (RLM) into the decoding process by expanding the word lattice and performing rescoring. This improves recognition performance and enriches the decoding output with syntactic parameters of POS and punctuation (PM), prosodic tags of word-juncture break types and syllable prosodic states, and an edited recognition text with reduplication words eliminated. Experimental results on the Mandarin conversational dialogue corpus (MCDC) showed that SER, CER, and WER of 13.2 %, 13.9 %, and 19.1 % were achieved when the POS-RNNLM and HPM were incorporated into the baseline system, representing relative SER, CER, and WER reductions of 7.7 %, 7.9 %, and 5.0 % compared with the baseline system. Furthermore, the use of the RLM yielded additional relative SER, CER, and WER reductions of 3 %, 4.6 %, and 4.5 % by eliminating reduplication words.
AB - A deep neural network (DNN)-based automatic speech recognition (ASR) method for enriched decoding of Mandarin spontaneous speech is proposed. The method first builds a baseline system composed of a factored time delay neural network (TDNN-f) acoustic model (AM), a trigram language model (LM), and a recurrent neural network language model (RNNLM) to generate a word lattice. It then sequentially incorporates a multi-task Part-of-Speech RNNLM (POS-RNNLM), a hierarchical prosodic model (HPM), and a reduplication-word LM (RLM) into the decoding process by expanding the word lattice and performing rescoring. This improves recognition performance and enriches the decoding output with syntactic parameters of POS and punctuation (PM), prosodic tags of word-juncture break types and syllable prosodic states, and an edited recognition text with reduplication words eliminated. Experimental results on the Mandarin conversational dialogue corpus (MCDC) showed that SER, CER, and WER of 13.2 %, 13.9 %, and 19.1 % were achieved when the POS-RNNLM and HPM were incorporated into the baseline system, representing relative SER, CER, and WER reductions of 7.7 %, 7.9 %, and 5.0 % compared with the baseline system. Furthermore, the use of the RLM yielded additional relative SER, CER, and WER reductions of 3 %, 4.6 %, and 4.5 % by eliminating reduplication words.
KW - Disfluencies and paralinguistic phenomena
KW - Hierarchical prosodic model
KW - Mandarin conversational dialogue corpus
KW - Part-of-speech language model
KW - Reduplication-word language model
KW - Spontaneous speech recognition
UR - http://www.scopus.com/inward/record.url?scp=85171748205&partnerID=8YFLogxK
U2 - 10.1016/j.specom.2023.102983
DO - 10.1016/j.specom.2023.102983
M3 - Article
AN - SCOPUS:85171748205
SN - 0167-6393
VL - 154
JO - Speech Communication
JF - Speech Communication
M1 - 102983
ER -