Personal knowledge base construction from multimodal data

An-Zi Yen, Chia-Chung Chang, Hen-Hsen Huang, Hsin-Hsi Chen

Research output: Conference contribution › peer-review

4 Citations (Scopus)

Abstract

With the passage of time, people often have hazy memories of their past experiences. Supporting information recall by collecting personal lifelogs is an emerging research direction. Recently, people have tended to record their daily lives by filming video weblogs (vlogs), which contain both visual and audio data. Such large-scale multimodal data can support an information recall service that enables users to query their past experiences. The challenging issue is the semantic gap between visual concepts and textual queries. In this paper, we aim to extract personal life events from vlogs shared on YouTube and construct a personal knowledge base (PKB) for individuals. A multitask learning model is proposed to extract the components of personal life events, such as subjects, predicates, and objects. The evaluation is performed on a video collection from three YouTubers who are native English speakers. Experimental results show that our model achieves promising performance.
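The abstract describes a multitask model that extracts event components (subject, predicate, object) from vlog data. As a rough illustration only, the sketch below shows one common way such a multitask extractor can be structured: a shared encoder over a transcribed utterance with one tagging head per component. It is not the authors' actual architecture; the class name, layer sizes, and the BIO tagging scheme are assumptions for the example.

```python
# Illustrative sketch only (not the paper's model): a multitask tagger with a shared
# encoder and separate heads that label subject, predicate, and object spans.
import torch
import torch.nn as nn

class MultitaskEventTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_tags=3):
        super().__init__()
        # Shared layers: token embeddings + BiLSTM encoder over the utterance.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Task-specific heads: one per event component, emitting B/I/O tag logits.
        self.heads = nn.ModuleDict({
            role: nn.Linear(hidden_dim, num_tags)
            for role in ("subject", "predicate", "object")
        })

    def forward(self, token_ids):
        hidden, _ = self.encoder(self.embed(token_ids))
        # Per-token tag logits for each task; per-task losses would be summed in training.
        return {role: head(hidden) for role, head in self.heads.items()}

# Toy usage: a batch of 2 utterances, 10 tokens each, vocabulary of 5,000 ids.
model = MultitaskEventTagger(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 10)))
print({k: v.shape for k, v in logits.items()})  # each head: (2, 10, 3)
```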

Original language: English
Title of host publication: ICMR 2021 - Proceedings of the 2021 International Conference on Multimedia Retrieval
Publisher: Association for Computing Machinery, Inc
Pages: 496-500
Number of pages: 5
ISBN (Electronic): 9781450384636
DOIs
Publication status: Published - 24 Aug 2021
Event: 11th ACM International Conference on Multimedia Retrieval, ICMR 2021 - Taipei, Taiwan
Duration: 16 Nov 2021 → 19 Nov 2021

Publication series

Name: ICMR 2021 - Proceedings of the 2021 International Conference on Multimedia Retrieval

Conference

Conference: 11th ACM International Conference on Multimedia Retrieval, ICMR 2021
Country/Territory: Taiwan
City: Taipei
Period: 16/11/21 → 19/11/21
