Multiresolution spectrotemporal analysis of complex sounds

Tai-Shih Chi*, Powen Ru, Shihab A. Shamma

*Corresponding author for this work

Research output: Article › peer-review

511 Citations (Scopus)

Abstract

A computational model of auditory analysis is described that is inspired by psychoacoustical and neurophysiological findings in early and central stages of the auditory system. The model provides a unified multiresolution representation of the spectral and temporal features likely critical in the perception of sound. Simplified, more specifically tailored versions of this model have already been validated by successful application in the assessment of speech intelligibility [Elhilali et al., Speech Commun. 41(2-3), 331-348 (2003); Chi et al., J. Acoust. Soc. Am. 106, 2719-2732 (1999)] and in explaining the perception of monaural phase sensitivity [R. Carlyon and S. Shamma, J. Acoust. Soc. Am. 114, 333-348 (2003)]. Here we provide a more complete mathematical formulation of the model, illustrating how complex signals are transformed through various stages of the model, and relating it to comparable existing models of auditory processing. Furthermore, we outline several reconstruction algorithms to resynthesize the sound from the model output so as to evaluate the fidelity of the representation and contribution of different features and cues to the sound percept.
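To make the abstract's notion of a "unified multiresolution representation" more concrete, the sketch below filters a (time x frequency) auditory spectrogram with 2D Gabor-like modulation filters tuned to a few temporal rates (Hz) and spectral scales (cycles/octave). This is a minimal toy illustration in Python/NumPy under stated assumptions, not the authors' cortical model: the function name spectrotemporal_analysis, the particular rate and scale values, and the Gaussian filter shapes are all illustrative choices.

    import numpy as np

    def spectrotemporal_analysis(spectrogram, frame_rate,
                                 rates=(2, 4, 8, 16),        # temporal modulation rates in Hz (assumed values)
                                 scales=(0.5, 1, 2, 4),      # spectral scales in cycles/octave (assumed values)
                                 channels_per_octave=24):
        """Decompose a (time x frequency) spectrogram into rate-scale channels
        using separable Gaussian bandpass filters in the 2D modulation domain."""
        T, F = spectrogram.shape
        spec_fft = np.fft.fft2(spectrogram)
        # Modulation-frequency axes: temporal rate in Hz, spectral scale in cycles/octave
        omega_t = np.fft.fftfreq(T, d=1.0 / frame_rate)
        omega_f = np.fft.fftfreq(F, d=1.0 / channels_per_octave)
        out = {}
        for r in rates:
            for s in scales:
                # Gaussian bandpass responses centered on the target rate and scale
                Ht = np.exp(-0.5 * ((np.abs(omega_t) - r) / (0.3 * r)) ** 2)
                Hf = np.exp(-0.5 * ((np.abs(omega_f) - s) / (0.3 * s)) ** 2)
                H = np.outer(Ht, Hf)
                # Filter in the modulation domain and return to the time-frequency domain
                out[(r, s)] = np.real(np.fft.ifft2(spec_fft * H))
        return out

    # Toy usage: a random 2-s "spectrogram" at 100 frames/s with 128 channels
    spec = np.random.rand(200, 128)
    features = spectrotemporal_analysis(spec, frame_rate=100)

Each entry of the returned dictionary highlights spectrogram structure at one combination of temporal rate and spectral scale; the full model described in the paper additionally distinguishes upward from downward moving patterns and retains phase information for resynthesis.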

Original language: English
Pages (from-to): 887-906
Number of pages: 20
Journal: Journal of the Acoustical Society of America
Volume: 118
Issue number: 2
DOIs
Publication status: Published - August 2005
