SPEC: Summary Preference Decomposition for Low-Resource Abstractive Summarization

Yi Syuan Chen, Yun Zhu Song, Hong Han Shuai

Research output: Contribution to journal › Article › peer-review

Abstract

Neural abstractive summarization has been widely studied and has achieved great success with large-scale corpora. However, the considerable cost of annotating data motivates learning strategies for low-resource settings. In this paper, we investigate the problem of learning summarizers from only a few examples and propose corresponding methods for improvement. First, typical transfer learning methods are prone to being affected by the data properties and learning objectives of the pretext tasks. Therefore, building on pretrained language models, we present a meta-learning framework that transfers few-shot learning processes from source corpora to the target corpus. Second, previous methods learn from training examples without decomposing content and preference. The generated summaries can therefore be constrained by the preference bias in the training set, especially under low-resource settings. As such, we propose decomposing contents and preferences during learning through parameter modulation, which enables control over preferences during inference. Third, given a target application, specifying the required preferences can be non-trivial because they may be difficult to derive from observations. Therefore, we propose a novel decoding method that automatically estimates suitable preferences and generates corresponding summary candidates from the few training examples. Extensive experiments demonstrate that our methods achieve state-of-the-art performance on six diverse corpora, with 30.11%/33.95%/27.51% and 26.74%/31.14%/24.48% average improvements on ROUGE-1/2/L under the 10- and 100-example settings, respectively.
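
The idea of decomposing content from preference via parameter modulation can be pictured with a FiLM-style conditioning layer that scales and shifts a summarizer's hidden states according to a small preference vector. The sketch below is only an illustrative assumption: the class name PreferenceModulator and the example preference features (length, abstractiveness, specificity) are hypothetical and do not reflect the paper's actual implementation.

```python
# Minimal, hypothetical sketch of preference-conditioned parameter modulation
# (FiLM-style scale/shift of hidden states). All names and preference features
# here are illustrative assumptions, not the paper's actual interface.

import torch
import torch.nn as nn


class PreferenceModulator(nn.Module):
    """Maps a small preference vector to per-dimension scale/shift terms
    that modulate a summarizer layer's hidden states."""

    def __init__(self, pref_dim: int, hidden_dim: int):
        super().__init__()
        self.to_scale = nn.Linear(pref_dim, hidden_dim)
        self.to_shift = nn.Linear(pref_dim, hidden_dim)

    def forward(self, hidden: torch.Tensor, pref: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim); pref: (batch, pref_dim)
        scale = 1.0 + self.to_scale(pref).unsqueeze(1)  # centered around identity
        shift = self.to_shift(pref).unsqueeze(1)
        return scale * hidden + shift


if __name__ == "__main__":
    batch, seq_len, hidden_dim, pref_dim = 2, 8, 16, 3
    hidden = torch.randn(batch, seq_len, hidden_dim)
    # e.g. [target length, abstractiveness, specificity], normalized to [0, 1]
    pref = torch.tensor([[0.2, 0.8, 0.5], [0.9, 0.1, 0.3]])
    modulator = PreferenceModulator(pref_dim, hidden_dim)
    out = modulator(hidden, pref)
    print(out.shape)  # torch.Size([2, 8, 16])
```

At inference time, varying the preference vector would then steer the generated summaries without retraining the underlying model, which is the kind of control the abstract describes.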

Original language: English
Pages (from-to): 1-16
Number of pages: 16
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
DOIs
State: Accepted/In press - 2022

Keywords

  • abstractive summarization
  • Adaptation models
  • Law enforcement
  • Low-resource learning
  • self-supervised learning
  • Speech processing
  • Task analysis
  • Testing
  • Training
  • transfer learning
