A Novel Repetition Normalized Adversarial Reward for Headline Generation

Peng Xu, Pascale Fung

Research output: Conference paper published in a book/conference proceedings (peer-reviewed)

Abstract

While reinforcement learning can effectively improve language generation models, it often suffers from generating incoherent and repetitive phrases [1]. In this paper, we propose a novel repetition normalized adversarial reward to mitigate these problems. Our repetition-penalized reward greatly reduces the repetition rate, while adversarial training mitigates the generation of incoherent phrases. Our model significantly outperforms the baseline model on ROUGE-1 (+3.24) and ROUGE-L (+2.25), and lowers the repetition rate by 4.98%.
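The abstract does not reproduce the paper's exact reward formula, but the core idea of normalizing a reward by repetition can be sketched as follows. This is a hypothetical illustration: the function names, the bigram-based repetition measure, and the multiplicative combination are assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch of a repetition-normalized reward.
# Idea: scale a base reward (e.g. an adversarial discriminator score)
# down by the fraction of repeated n-grams in the generated headline.

def repetition_rate(tokens, n=2):
    """Fraction of repeated n-grams in a token sequence (0.0 = no repeats)."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return 1.0 - len(set(ngrams)) / len(ngrams)

def repetition_normalized_reward(base_reward, tokens, n=2):
    """Illustrative reward: shrinks toward zero as repetition grows."""
    return base_reward * (1.0 - repetition_rate(tokens, n))

# A headline with repeated bigrams earns a smaller reward than a clean one.
clean = "markets rally after rate cut".split()
loopy = "rate cut rate cut rate cut".split()
print(repetition_normalized_reward(1.0, clean))  # no repeated bigrams
print(repetition_normalized_reward(1.0, loopy))  # heavily repetitive
```

In an RL training loop, a reward shaped this way would push the policy away from degenerate repetitive outputs while the adversarial component rewards fluency.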

Original language: English
Title of host publication: 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 7325-7329
Number of pages: 5
ISBN (Electronic): 9781479981311
Publication status: Published - May 2019
Event: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Brighton, United Kingdom
Duration: 12 May 2019 to 17 May 2019

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2019-May
ISSN (Print): 1520-6149

Conference

Conference: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Country/Territory: United Kingdom
City: Brighton
Period: 12/05/19 to 17/05/19

Bibliographical note

Publisher Copyright:
© 2019 IEEE.

Keywords

  • adversarial training
  • headline generation
  • reinforcement learning
  • summarization

