Team Yao at Factify 2022: Utilizing Pre-trained Models and Co-attention Networks for Multi-Modal Fact Verification

Wei Yao Wang*, Wen Chih Peng

*Corresponding author for this work

Research output: Conference article, peer-reviewed

2 Citations (Scopus)

Abstract

In recent years, social media has exposed users to a myriad of misinformation and disinformation; as a result, misinformation has attracted considerable attention both as a research topic and as a social issue. To address the problem, we propose a framework, Pre-CoFact, composed of two pre-trained models for extracting features from text and images, and multiple co-attention networks for fusing features across different sources within the same modality and across different modalities. In addition, we adopt an ensemble method that combines different pre-trained models in Pre-CoFact to achieve better performance. We further demonstrate the effectiveness of the framework through an ablation study and compare different pre-trained models. Our team, Yao, won the fifth prize (F1-score: 74.585%) in the Factify challenge hosted by De-Factify @ AAAI 2022, which demonstrates that our model achieved competitive performance without using auxiliary tasks or extra information. The source code of our work is publicly available.
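To make the fusion idea concrete, below is a minimal sketch of a bidirectional co-attention module of the kind the abstract describes, written in PyTorch. The class name, layer sizes, pooling, and the illustrative tensor shapes are assumptions for exposition, not the actual Pre-CoFact implementation.

```python
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    """Bidirectional co-attention between two feature sequences, e.g. claim-text
    vs. document-text features (same modality, different sources) or text vs.
    image-patch features (different modalities). Minimal sketch only; the layer
    sizes, stacking depth, and pooling used in Pre-CoFact are not shown here."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.a_queries_b = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.b_queries_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor):
        # feats_a: (batch, len_a, dim); feats_b: (batch, len_b, dim)
        a_fused, _ = self.a_queries_b(feats_a, feats_b, feats_b)  # A attends over B
        b_fused, _ = self.b_queries_a(feats_b, feats_a, feats_a)  # B attends over A
        return a_fused, b_fused

# Illustrative usage with features from pre-trained encoders (hypothetical shapes):
text_feats = torch.randn(4, 128, 768)   # e.g. token embeddings from a text transformer
image_feats = torch.randn(4, 49, 768)   # e.g. patch embeddings from a vision model
fusion = CoAttention(dim=768)
text_fused, image_fused = fusion(text_feats, image_feats)
pooled = torch.cat([text_fused.mean(dim=1), image_fused.mean(dim=1)], dim=-1)
# `pooled` would then feed a classifier over the Factify verification categories.
```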

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3168
Publication status: Published - 2022
Event: 1st Workshop on Multimodal Fact-Checking and Hate Speech Detection, DE-FACTIFY 2022 - Virtual, Vancouver, Canada
Duration: 27 Feb 2022 → …
