ZARA: Improving Few-Shot Self-Rationalization for Small Language Models

Wei-Lin Chen, An-Zi Yen, Cheng-Kuang Wu, Hen-Hsen Huang, Hsin-Hsi Chen

Research output: Conference contribution, peer-reviewed

1 citation (Scopus)

Abstract

Language models (LMs) that jointly generate end-task answers as well as free-text rationales are known as self-rationalization models. Recent works demonstrate substantial performance gains for self-rationalization by few-shot prompting LMs with rationale-augmented exemplars. However, the ability to benefit from explanations only emerges with large-scale LMs, which have poor accessibility. In this work, we explore the less-studied setting of leveraging explanations for small LMs to improve few-shot self-rationalization. We first revisit the relationship between rationales and answers. Inspired by the implicit mental process of how human beings assess explanations, we present a novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), to automatically construct pseudo-parallel data for self-training by reducing the problem of plausibility judgement to natural language inference. Experimental results show that ZARA achieves SOTA performance on the FEB benchmark for both task accuracy and the explanation metric. In addition, we conduct human and quantitative evaluations validating ZARA's ability to automatically identify plausible and accurate rationale-answer pairs.
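As a rough illustration of the idea of casting plausibility judgement as natural language inference, the sketch below filters model-generated rationale-answer pairs with an off-the-shelf NLI model before reusing them as pseudo-parallel self-training data. This is not the authors' implementation; the NLI checkpoint, hypothesis template, and threshold are assumptions for illustration only.

```python
# Illustrative sketch (assumptions, not the paper's code): keep only
# rationale-answer pairs whose rationale entails a declarative restatement
# of the answer, then use the kept triples as extra self-training data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large-mnli"  # any MNLI-finetuned checkpoint could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

# Look up the ENTAILMENT label index from the config (label order varies by checkpoint).
entail_id = {v.upper(): k for k, v in nli_model.config.id2label.items()}["ENTAILMENT"]

def entailment_score(rationale: str, answer_statement: str) -> float:
    """Probability that the rationale (premise) entails the answer statement (hypothesis)."""
    inputs = tokenizer(rationale, answer_statement, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, entail_id].item()

def select_pseudo_labels(candidates, threshold=0.9):
    """Filter (question, rationale, answer) triples by NLI-based plausibility."""
    kept = []
    for question, rationale, answer in candidates:
        # Hypothetical template turning the question-answer pair into a declarative hypothesis.
        hypothesis = f"The answer to the question '{question}' is {answer}."
        if entailment_score(rationale, hypothesis) >= threshold:
            kept.append((question, rationale, answer))
    return kept  # candidate pseudo-parallel data for self-training a small LM
```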

Original language: English
Host publication title: Findings of the Association for Computational Linguistics
Host publication subtitle: EMNLP 2023
Publisher: Association for Computational Linguistics (ACL)
Pages: 4682-4693
Number of pages: 12
ISBN (electronic): 9798891760615
DOIs
Publication status: Published - 2023
Event: 2023 Findings of the Association for Computational Linguistics: EMNLP 2023 - Singapore, Singapore
Duration: 6 Dec 2023 - 10 Dec 2023

Publication series

Name: Findings of the Association for Computational Linguistics: EMNLP 2023

Conference

Conference: 2023 Findings of the Association for Computational Linguistics: EMNLP 2023
Country/Territory: Singapore
City: Singapore
Period: 6/12/23 - 10/12/23
