Deep neural factorization for speech recognition

Jen-Tzung Chien, Chen Shen

    Research output: Conference article, peer-reviewed

    2 citations (Scopus)


    Conventional speech recognition systems are constructed by unfolding the spectral-temporal input matrices into one-way vectors and using these vectors to estimate the affine parameters of a neural network with the vector-based error backpropagation algorithm. System performance is constrained because the contextual correlations along the frequency and time axes are disregarded and the spectral and temporal factors are excluded. This paper proposes a spectral-temporal factorized neural network (STFNN) to tackle this weakness. The spectral-temporal structure is preserved and factorized in the hidden layers through two-way factor matrices, which are trained by factorized error backpropagation. The affine transformation of a standard neural network is generalized to a spectro-temporal factorization in the STFNN. The structural features or patterns are extracted and forwarded to the softmax outputs. A deep neural factorization is built by cascading a number of factorization layers with fully-connected layers for speech recognition. An orthogonal constraint is imposed on the factor matrices for redundancy reduction. Experimental results show the merit of integrating the factorized features in deep feedforward and recurrent neural networks for speech recognition.
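    The core idea, replacing a vector-based affine layer with a two-way factorization of the spectral-temporal input, can be sketched in NumPy. This is only an illustration of the abstract's description, not the authors' implementation: the matrix sizes, the tanh nonlinearity, and the Frobenius-norm orthogonality penalty are all assumptions.

```python
import numpy as np

# Hypothetical sketch of one spectral-temporal factorization layer.
# A spectral-temporal input patch X (frequency x time) is transformed by
# a spectral factor matrix U and a temporal factor matrix V, generalizing
# the affine map of a standard layer: H = sigma(U^T X V + B).
rng = np.random.default_rng(0)

F, T = 40, 11          # frequency bins, context frames (assumed sizes)
dF, dT = 16, 8         # factorized hidden dimensions (assumed sizes)

X = rng.standard_normal((F, T))    # spectral-temporal input patch
U = rng.standard_normal((F, dF))   # spectral factor matrix
V = rng.standard_normal((T, dT))   # temporal factor matrix
B = np.zeros((dF, dT))             # bias matrix

# Hidden feature map preserving two-way structure, shape (dF, dT).
H = np.tanh(U.T @ X @ V + B)

# Orthogonality penalty for redundancy reduction (sketch): encourage
# U^T U ~ I and V^T V ~ I via squared Frobenius norms.
penalty = (np.linalg.norm(U.T @ U - np.eye(dF)) ** 2
           + np.linalg.norm(V.T @ V - np.eye(dT)) ** 2)

print(H.shape)
```

    In a deep neural factorization, several such layers would be cascaded and the resulting feature map flattened and passed through fully-connected layers to the softmax outputs.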

    Pages (from - to): 3682-3686
    Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
    Publication status: Published - 1 January 2017
    Event: 18th Annual Conference of the International Speech Communication Association, INTERSPEECH 2017 - Stockholm, Sweden
    Duration: 20 August 2017 - 24 August 2017
