VidLife: A Dataset for Life Event Extraction from Videos

Tai Te Chu, An Zi Yen, Wei Hong Ang, Hen Hsen Huang, Hsin Hsi Chen

Research output: Conference contribution, peer-reviewed

2 Citations (Scopus)

Abstract

Filming video blogs (vlogs) has become a popular way for people to record their life experiences in recent years. In this work, we present a novel task aimed at extracting life events from videos and constructing personal knowledge bases of individuals. In contrast to most existing research in computer vision, which focuses on identifying low-level, script-like activities such as moving boxes, our goal is to extract life events in which high-level activities, such as moving into a new house, are recorded. The challenges to be tackled include: (1) identifying which objects in a given scene are related to the life events of the protagonist of concern, and (2) determining the association between an extracted visual concept and a higher-level description of a video clip. To address these research issues, we construct VidLife, a video life event extraction dataset, by exploiting videos from the TV series The Big Bang Theory, whose plot revolves around the daily lives of several characters. A pilot multitask learning model is proposed to extract life events from video clips and subtitles for storage in the personal knowledge base.
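The abstract mentions a pilot multitask learning model that consumes both video clips and subtitles. As a rough illustration only, and not the authors' actual architecture, the sketch below shows one way such a two-task setup could be wired in PyTorch: a shared fusion layer over pooled video and subtitle features feeds (1) a head that judges whether a detected visual concept is relevant to the protagonist's life event and (2) a head that classifies the clip's high-level event type. All class names, feature dimensions, and the joint loss are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class LifeEventMultitaskModel(nn.Module):
    """Illustrative two-head multitask model (hypothetical, not the paper's model).

    Task 1: is a detected visual concept relevant to the protagonist's life event?
    Task 2: which high-level life event does the clip describe?
    """
    def __init__(self, video_dim=2048, text_dim=768, hidden=512, n_event_types=10):
        super().__init__()
        # Fuse pooled video features with subtitle (text) features.
        self.fuse = nn.Sequential(
            nn.Linear(video_dim + text_dim, hidden),
            nn.ReLU(),
        )
        self.relevance_head = nn.Linear(hidden, 2)             # concept relevance (binary)
        self.event_head = nn.Linear(hidden, n_event_types)     # high-level event type

    def forward(self, video_feat, subtitle_feat):
        h = self.fuse(torch.cat([video_feat, subtitle_feat], dim=-1))
        return self.relevance_head(h), self.event_head(h)

# Joint training sketch with random stand-in features and labels:
model = LifeEventMultitaskModel()
video_feat = torch.randn(4, 2048)    # e.g. pooled visual features per clip
subtitle_feat = torch.randn(4, 768)  # e.g. sentence-encoder features of subtitles
rel_logits, event_logits = model(video_feat, subtitle_feat)
loss = (nn.functional.cross_entropy(rel_logits, torch.randint(0, 2, (4,)))
        + nn.functional.cross_entropy(event_logits, torch.randint(0, 10, (4,))))
loss.backward()
```

The two losses are simply summed here; how the actual model shares parameters or weights its tasks is not described in this record.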

Original language: English
Title of host publication: CIKM 2021 - Proceedings of the 30th ACM International Conference on Information and Knowledge Management
Publisher: Association for Computing Machinery
Pages: 4436-4444
Number of pages: 9
ISBN (electronic): 9781450384469
DOIs
Publication status: Published - 26 Oct 2021
Event: 30th ACM International Conference on Information and Knowledge Management, CIKM 2021 - Virtual, Online, Australia
Duration: 1 Nov 2021 - 5 Nov 2021

Publication series

Name: International Conference on Information and Knowledge Management, Proceedings

Conference

Conference: 30th ACM International Conference on Information and Knowledge Management, CIKM 2021
Country/Territory: Australia
City: Virtual, Online
Period: 1/11/21 - 5/11/21
