Autoencoding HRTFS for DNN Based HRTF Personalization Using Anthropometric Features

Tzu Yu Chen, Tzu Hsuan Kuo, Tai-Shih Chi

Research output: Conference contribution › Peer-reviewed

29 Citations (Scopus)

Abstract

We proposed a deep neural network (DNN) based approach for synthesizing the magnitudes of personalized head-related transfer functions (HRTFs) from the user's anthropometric features. To mitigate over-fitting when the training dataset is small, we built an autoencoder for dimensionality reduction and for establishing a compact feature set that represents the raw HRTFs. We then combined the decoder part of the autoencoder with a smaller DNN to synthesize the magnitude HRTFs. In this way, the complexity of the neural networks was greatly reduced, preventing unstable, high-variance results caused by over-fitting. The proposed approach was compared with a baseline DNN model without an autoencoder, and the log-spectral distortion (LSD) metric was used to evaluate performance. Experimental results show that the proposed approach reduces the LSD of the estimated HRTFs with greater stability.
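The evaluation metric mentioned above, log-spectral distortion, can be sketched as follows. This is a minimal illustration assuming the common definition (the RMS, over frequency bins, of the dB-scale ratio between reference and estimated magnitudes); the function name and example magnitudes are illustrative, not taken from the paper.

```python
import numpy as np

def log_spectral_distortion(h_ref, h_est):
    """Log-spectral distortion (dB) between two magnitude responses.

    Assumes the common definition: the RMS over frequency bins k of
    20 * log10(|H(k)| / |H_hat(k)|).
    """
    h_ref = np.asarray(h_ref, dtype=float)
    h_est = np.asarray(h_est, dtype=float)
    log_ratio_db = 20.0 * np.log10(h_ref / h_est)
    return float(np.sqrt(np.mean(log_ratio_db ** 2)))

# Identical responses give zero distortion; a uniform 2x magnitude
# error corresponds to 20*log10(2) ≈ 6.02 dB in every bin.
ref = np.array([1.0, 0.5, 0.25, 0.125])
print(log_spectral_distortion(ref, ref))        # 0.0
print(log_spectral_distortion(2.0 * ref, ref))  # ≈ 6.02
```

Under this definition, a lower LSD means the synthesized magnitude HRTF is closer, bin by bin, to the measured one on a dB scale.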

Original language: English
Title of host publication: 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 271-275
Number of pages: 5
ISBN (Electronic): 9781479981311
DOIs
Publication status: Published - 1 May 2019
Event: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Brighton, United Kingdom
Duration: 12 May 2019 → 17 May 2019

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
2019-May
ISSN (Print): 1520-6149

Conference

Conference: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Country/Territory: United Kingdom
City: Brighton
Period: 12/05/19 → 17/05/19
