Auditory spectrum based features (ASBF) for robust speech recognition

Chi H. Yim, Oscar C. Au, Wanggen Wan, Cyan L. Keung, Carrson C. Fung

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Mel-frequency cepstral coefficients (MFCC) are the features most commonly used in speech recognition systems today. The recognition accuracy of systems using MFCC is known to be high in clean speech environments, but it drops greatly in noisy environments. In this paper, we propose new features called auditory spectrum based features (ASBF), which are based on a cochlear model of the human auditory system. These new features can track the formants, and their selection scheme is based on the second-order difference cochlear model and the primary auditory nerve processing model. In our experiments, the performance of MFCC and ASBF is compared in clean and noisy environments. The results suggest that ASBF are much more robust to noise than MFCC.
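For context, the MFCC baseline that the abstract compares against follows a standard pipeline: frame the signal, window it, take the power spectrum, pool it through a triangular mel filterbank, take logs, and decorrelate with a DCT. The sketch below is a minimal, self-contained illustration of that standard pipeline, not the specific configuration used in the paper; all parameter values (frame size, hop, filter and coefficient counts) are illustrative assumptions.

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale warping of frequency in Hz
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=8000, n_fft=256, hop=128, n_mels=20, n_ceps=13):
    # Frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Triangular mel filterbank spanning 0 .. sr/2
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log mel energies, then DCT-II to decorrelate -> cepstral coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_mels)))
    return log_mel @ dct.T  # shape: (n_frames, n_ceps)
```

The log-spectral step is where additive noise hurts MFCC: noise energy spreads across all mel filters and shifts every coefficient, which is the weakness the cochlear-model-based ASBF are proposed to address.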

Original language: English
Title of host publication: 6th International Conference on Spoken Language Processing, ICSLP 2000
Publisher: International Speech Communication Association
Number of pages: 4
ISBN (Electronic): 7801501144, 9787801501141
State: Published - Oct 2000
Event: 6th International Conference on Spoken Language Processing, ICSLP 2000 - Beijing, China
Duration: 16 Oct 2000 – 20 Oct 2000

Publication series

Name: 6th International Conference on Spoken Language Processing, ICSLP 2000

Conference

Conference: 6th International Conference on Spoken Language Processing, ICSLP 2000
Country/Territory: China
City: Beijing
Period: 16/10/00 – 20/10/00
