Spectrum restoration from multiscale auditory phase singularities by generalized projections

Tai-Shih Chi*, Shihab A. Shamma

*Corresponding author for this work

Research output: Article › peer-review

3 Citations (Scopus)

Abstract

We examine the encoding of acoustic spectra by parameters derived from singularities found in their multiscale auditory representations. The multiscale representation is a wavelet transform of an auditory version of the spectrum, formulated based on findings of perceptual experiments and physiological research in the auditory cortex. The multiscale representation of a spectral pattern usually contains well-defined singularities in its phase function that reflect prominent features of the underlying spectrum, such as its relative peak locations and amplitudes. Properties (locations and strengths) of these singularities are examined and employed to reconstruct the original spectrum using an iterative projection algorithm. Although the singularities form a nonconvex set, simulations demonstrate that a well-chosen initial pattern usually converges to a good approximation of the input spectrum. Perceptually intelligible speech can be resynthesized from the reconstructed auditory spectrograms, and hence these singularities can potentially serve as efficient features for speech compression. Moreover, the singularities are highly noise-robust, which makes them useful features in applications such as vowel recognition and speaker identification.
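The reconstruction step described above is an instance of the method of generalized (alternating) projections. The sketch below is only a rough illustration of that idea, not the paper's actual auditory model: it alternates between a data-consistency set (representations agreeing with a sparse set of measured values, used here as a stand-in for the phase singularities) and the set of multiscale representations generated by a nonnegative spectrum. The filter bank, the constraint-selection rule, the initialization, and all parameter values are hypothetical simplifications.

```python
import numpy as np

def filter_bank(n, scales):
    """Analytic Gaussian band-pass filters along the (log-)frequency axis,
    one per scale, returned in the Fourier domain (shape: len(scales) x n)."""
    freqs = np.fft.fftfreq(n)
    return np.stack([np.exp(-0.5 * ((freqs - s) / (0.3 * s)) ** 2) * (freqs > 0)
                     for s in scales])

def analyze(spectrum, H):
    """Complex multiscale representation of a real, nonnegative spectrum."""
    return np.fft.ifft(np.fft.fft(spectrum) * H, axis=1)

def synthesize(rep, H):
    """Approximate inverse of `analyze` (weighted frame-style reconstruction)."""
    num = (np.fft.fft(rep, axis=1) * H.conj()).sum(axis=0)
    den = (np.abs(H) ** 2).sum(axis=0) + 1e-12
    return 2.0 * np.fft.ifft(num / den).real  # factor 2: one-sided (analytic) filters

def restore(rep_meas, mask, H, n_iter=300):
    """Generalized projections: alternately (1) re-impose the measured values
    at the constraint points and (2) project onto representations that arise
    from a nonnegative spectrum."""
    spectrum = np.clip(synthesize(np.where(mask, rep_meas, 0.0), H), 0.0, None)
    for _ in range(n_iter):
        rep = analyze(spectrum, H)
        rep[mask] = rep_meas[mask]                         # data-consistency set
        spectrum = np.clip(synthesize(rep, H), 0.0, None)  # valid-spectrum set
    return spectrum

# Hypothetical usage: 128-channel spectrum, four scales (normalized units).
n, scales = 128, [0.02, 0.05, 0.1, 0.2]
H = filter_bank(n, scales)
rep = analyze(np.random.rand(n), H)
mask = np.abs(rep) >= np.quantile(np.abs(rep), 0.95)  # sparse stand-in constraints
recovered = restore(rep, mask, H)
```

In the paper, the constraint points are the phase singularities of the auditory wavelet representation, and a carefully chosen initial pattern drives convergence despite the nonconvexity of that set; in this sketch, the mask and the initialization are placeholders for those choices.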

Original language: English
Pages (from-to): 1179-1192
Number of pages: 14
Journal: IEEE Transactions on Audio, Speech and Language Processing
Volume: 14
Issue number: 4
DOIs
Publication status: Published - 1 Jul 2006
