Team NYCU-NLP at PAN 2024: Integrating Transformers with Similarity Adjustments for Multi-Author Writing Style Analysis

Tzu Mi Lin, Yu Hsin Wu, Lung Hao Lee*

*Corresponding author of this work

Research output: Conference article › peer-reviewed

1 Citation (Scopus)

Abstract

This paper describes our NYCU-NLP system design for the multi-author writing style analysis task of the PAN Lab at CLEF 2024. We propose a unified architecture integrating transformer-based models with similarity adjustments to identify author switches within a given multi-author document. We first fine-tune the RoBERTa, DeBERTa and ERNIE transformers to detect differences in writing style between two given paragraphs. The output prediction is then determined by an ensemble mechanism, and similarity adjustments are applied to further enhance multi-author analysis performance. The experimental data contains three difficulty levels that reflect simultaneous changes of authorship and topic. Our submission achieved macro F1-scores of 0.964, 0.857 and 0.863 for the easy, medium and hard levels, respectively, ranking first for the hard level (out of 16 participating teams) and second for the medium level (out of 17 participating teams).
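As a rough illustration of the pipeline summarized above, the sketch below classifies a paragraph pair for an author switch with three transformer models and combines their outputs by majority vote. The checkpoint names, the two-label classification head and the majority-vote rule are assumptions for illustration; they stand in for the team's fine-tuned RoBERTa, DeBERTa and ERNIE models and their specific ensemble procedure, and the similarity-adjustment step described in the abstract is not shown.

```python
# Minimal sketch (not the authors' released code) of pairwise style-change
# detection with a transformer ensemble. The checkpoints below are generic
# placeholders for the fine-tuned RoBERTa / DeBERTa / ERNIE models.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAMES = [
    "roberta-base",
    "microsoft/deberta-v3-base",
    "nghuyong/ernie-2.0-base-en",
]


def predict_author_switch(paragraph_a: str, paragraph_b: str) -> int:
    """Return 1 if the ensemble predicts an author switch between the two paragraphs."""
    votes = []
    for name in MODEL_NAMES:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
        model.eval()
        # Encode the two paragraphs as a single sequence-pair input.
        inputs = tokenizer(paragraph_a, paragraph_b,
                           truncation=True, max_length=512, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        votes.append(int(logits.argmax(dim=-1).item()))
    # Majority vote over the three per-model predictions.
    return int(sum(votes) > len(votes) / 2)


if __name__ == "__main__":
    p1 = "The committee reviewed the proposal and raised several objections."
    p2 = "honestly idk why they even bothered, it was never gonna pass lol"
    print(predict_author_switch(p1, p2))
```

A similarity adjustment could, for example, override a predicted switch when the two paragraphs are highly similar under an embedding-based score, but the abstract does not specify the exact adjustment used by the team.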

Original language: English
Pages (from-to): 2716-2721
Number of pages: 6
Journal: CEUR Workshop Proceedings
Volume: 3740
Publication status: Published - 2024
Event: 25th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF 2024 - Grenoble, France
Duration: 9 Sep 2024 - 12 Sep 2024

