The voice-based Internet of Multimedia Things (IoMT) combines IoT interfaces and protocols with voice-related information, enabling advanced applications based on human-to-device interaction. An example is Automatic Speech Recognition (ASR) for live captioning and voice translation. ASR for IoMT faces three major issues: IoT development cost, speech recognition accuracy, and execution time complexity. For the first issue, most non-voice IoT applications are upgraded with the ASR feature through hard coding, which is error prone. For the second issue, recognition accuracy must be improved for ASR. For the third issue, many multimedia IoT services are real-time applications, so the ASR delay must be short. This article elaborates on these issues based on an IoT platform called VoiceTalk. We built the largest Taiwanese spoken corpus to train VoiceTalk ASR (VT-ASR) and show how the VT-ASR mechanism can be transparently integrated with existing IoT applications. We consider two performance measures for VoiceTalk: speech recognition accuracy and VT-ASR delay. In the acoustic tests of PAL-Labs, VT-ASR achieves 96.47% accuracy, compared with 94.28% for Google. We are the first to develop an analytic model for the probability that VT-ASR recognition of the first speaker completes before the second speaker starts talking. Measurements and analytic modeling show that the VT-ASR delay is short enough to provide a very good user experience. Our solution has won several important government and commercial TV contracts in Taiwan. VT-ASR demonstrated better Taiwanese Mandarin speech recognition accuracy than well-known commercial products (including Google and Iflytek) in the Formosa Speech Recognition Challenge 2018 (FSR-2018) and achieved the best Taiwanese recognition accuracy among all participating ASR systems in FSR-2020.
- Automatic speech recognition
- computational linguistics