Variational recurrent neural networks for speech separation

Jen-Tzung Chien, Kuan Ting Kuo

    Research output: Conference article, peer-reviewed

    18 citations (Scopus)

    Abstract

    We present a new stochastic learning machine for speech separation based on the variational recurrent neural network (VRNN). This VRNN is constructed from the perspectives of the generative stochastic network and the variational auto-encoder. The idea is to faithfully characterize the randomness of the hidden state of a recurrent neural network through variational learning. The neural parameters under this latent variable model are estimated by maximizing the variational lower bound of the log marginal likelihood. An inference network driven by the variational distribution is trained from a set of mixed signals and the associated source targets. A novel supervised VRNN is developed for speech separation. The proposed VRNN provides a stochastic point of view which accommodates the uncertainty in hidden states and facilitates the analysis of model construction. A masking function is further applied to the network outputs for speech separation. The benefit of using VRNN is demonstrated by experiments on monaural speech separation.
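    Two ingredients of the abstract can be made concrete in a short sketch: the Gaussian KL term that appears in the variational lower bound (with the reparameterization trick used to sample the stochastic hidden state), and the soft ratio masks applied to the network outputs for separation. This is a minimal illustration under common VAE-style assumptions, not the paper's implementation; all function names and shapes are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_kl(mu, log_var):
        # Closed-form KL( N(mu, sigma^2) || N(0, I) ): the regularizer in the
        # variational lower bound (illustrative assumption: standard normal prior).
        return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

    def reparameterize(mu, log_var):
        # z = mu + sigma * eps keeps the stochastic hidden state differentiable.
        return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

    def ratio_masks(est_a, est_b, eps=1e-8):
        # Soft time-frequency masks from two non-negative source estimates
        # (hypothetical shape: [frames, frequency bins]).
        total = est_a + est_b + eps
        return est_a / total, est_b / total

    # Toy step: two source magnitude estimates and their observed mixture.
    s1 = rng.uniform(0.1, 1.0, size=(4, 3))
    s2 = rng.uniform(0.1, 1.0, size=(4, 3))
    mix = s1 + s2
    m1, m2 = ratio_masks(s1, s2)
    sep1, sep2 = m1 * mix, m2 * mix
    # The masked outputs partition the mixture by construction.
    assert np.allclose(sep1 + sep2, mix)
    # A posterior equal to the prior contributes zero KL.
    assert abs(gaussian_kl(np.zeros(2), np.zeros(2))) < 1e-12
    ```

    In a full VRNN, `mu` and `log_var` would be produced per time step by the recurrent inference network, and the reconstruction term of the lower bound would be measured between the masked outputs and the clean source targets.
    
    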

    Original language: English
    Pages (from - to): 1193-1197
    Number of pages: 5
    Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
    Volume: 2017-August
    DOIs
    Publication status: Published - 1 January 2017
    Event: 18th Annual Conference of the International Speech Communication Association, INTERSPEECH 2017 - Stockholm, Sweden
    Duration: 20 August 2017 - 24 August 2017
