Learning English–Chinese bilingual word representations from sentence-aligned parallel corpus

An Zi Yen, Hen Hsen Huang*, Hsin Hsi Chen

*Corresponding author for this work

Research output: Article, peer-reviewed

6 Citations (Scopus)

Abstract

Representation of words in different languages is fundamental for various cross-lingual applications. In past research, there has been debate over whether word alignment should be used when learning bilingual word representations. This paper presents a comprehensive empirical study on the use of a parallel corpus to learn word representations in the embedding space. Various non-alignment and alignment approaches are explored to formulate the contexts for Skip-gram modeling. In the approaches without word alignment, concatenating A and B, concatenating B and A, interleaving A with B, shuffling A and B, and using A and B separately are considered, where A and B denote parallel sentences in two languages. In the approaches with word alignment, three word alignment tools, GIZA++, TsinghuaAligner, and fast_align, are employed to align words in sentences A and B. The effects of the alignment direction, from A to B or from B to A, are also discussed. To deal with unaligned words in the word alignment approaches, two alternatives are explored: using the words aligned with their immediate neighbors, and using the words as in the interleaving approach. We evaluate the performance of the adopted approaches on four tasks: bilingual dictionary induction, cross-lingual information retrieval, cross-lingual analogy reasoning, and cross-lingual word semantic relatedness. These tasks cover the issues of translation, reasoning, and information access. Experimental results show that the word alignment approach with conditional interleaving achieves the best performance in most of the tasks.
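To make the non-alignment context strategies concrete, the sketch below builds pseudo-bilingual sentences from a toy parallel pair (A, B) by concatenation, interleaving, and shuffling, and feeds them to a standard Skip-gram trainer. This is a minimal illustration, not the authors' implementation: the toy tokens, the choice of gensim's Word2Vec (with sg=1 selecting the Skip-gram objective), and all hyperparameters are assumptions made for demonstration only.

```python
# Minimal sketch (not the paper's code): constructing pseudo-bilingual
# sentences from a sentence-aligned pair (A, B) for Skip-gram training.
import random
from gensim.models import Word2Vec  # any Skip-gram implementation would do

def concatenate(a, b):
    """A followed by B; the other variant is simply B followed by A."""
    return a + b

def interleave(a, b):
    """Alternate words of A and B; leftover words of the longer sentence go last."""
    mixed = [w for pair in zip(a, b) for w in pair]
    longer = a if len(a) > len(b) else b
    return mixed + longer[min(len(a), len(b)):]

def shuffle(a, b):
    """Randomly permute the words of A and B together."""
    mixed = a + b
    random.shuffle(mixed)
    return mixed

# Toy English-Chinese parallel pair (tokens are illustrative only).
a = ["the", "cat", "sleeps"]
b = ["貓", "睡覺"]
corpus = [concatenate(a, b), concatenate(b, a), interleave(a, b), shuffle(a, b)]

# sg=1 selects the Skip-gram objective; the hyperparameters are placeholders.
model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)
print(model.wv.most_similar("cat", topn=3))
```

Whichever strategy is chosen, the output is just a list of mixed-language sentences, so words from both languages share one vocabulary and one embedding space and can be trained with an ordinary monolingual Skip-gram pipeline.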

Original language: English
Pages (from-to): 52-72
Number of pages: 21
Journal: Computer Speech and Language
Volume: 56
DOIs
Publication status: Published - Jul. 2019
