A Framework of Composite Functional Gradient Methods for Generative Adversarial Models

Rie Johnson, Tong Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Generative adversarial networks (GANs) are trained through a minimax game between a generator and a discriminator to generate data that mimics observations. Although widely used, GAN training is known to be empirically unstable. This paper presents a new theory for generative adversarial methods that does not rely on the traditional minimax formulation. Our theory shows that, with a strong discriminator, a good generator can be obtained by composite functional gradient learning, so that several distance measures (including the KL divergence and the JS divergence) between the probability distributions of real and generated data are simultaneously improved after each functional gradient step until they converge to zero. This new point of view leads to stable procedures for training generative models and gives new theoretical insight into the original GAN. Empirical results on image generation show the effectiveness of our new method.
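To make the idea of a functional gradient step concrete, the following is a minimal 1-D toy sketch, not the paper's actual CFG algorithm. It assumes a simple setup of our own choosing: a logistic-regression discriminator whose logit estimates the log density ratio between real and generated samples, and generated points that are repeatedly moved along the gradient of that logit with step size `eta`. All names and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: real samples ~ N(4, 1); generated samples start at N(0, 1).
real = rng.normal(4.0, 1.0, size=(2000, 1))
fake = rng.normal(0.0, 1.0, size=(2000, 1))

def fit_discriminator(real, fake, steps=200, lr=0.1):
    """Fit a logistic-regression discriminator D(x) = sigmoid(w*x + b)
    by gradient descent; its logit w*x + b estimates log(p_real / p_fake)."""
    X = np.vstack([real, fake])[:, 0]
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * X + b)))
        w -= lr * np.mean((p - y) * X)
        b -= lr * np.mean(p - y)
    return w, b

eta = 0.5
start_gap = abs(real.mean() - fake.mean())
for _ in range(30):
    w, b = fit_discriminator(real, fake)
    # Functional gradient step: move each generated point along the
    # gradient of the discriminator logit, d/dx (w*x + b) = w.
    fake = fake + eta * w
end_gap = abs(real.mean() - fake.mean())

print("gap before:", round(float(start_gap), 2), "gap after:", round(float(end_gap), 2))
```

As the generated distribution approaches the real one, the refit discriminator's slope `w` shrinks toward zero, so the updates self-attenuate; this mirrors, in a crude way, the claimed behavior that the distance between the two distributions decreases at each step.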

Original language: English
Article number: 8744312
Pages (from-to): 17-32
Number of pages: 16
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 43
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2021

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.

Keywords

  • Generative adversarial models
  • functional gradient learning
  • image generation
  • neural networks

