Composite Functional Gradient Learning of Generative Adversarial Models

Rie Johnson*, Tong Zhang

*Corresponding author for this work

Research output: Contribution to conference › Conference paper › peer-review

Abstract

This paper first presents a theory for generative adversarial methods that does not rely on the traditional minimax formulation. It shows that, with a sufficiently strong discriminator, a good generator can be learned so that the KL divergence between the distributions of real and generated data decreases after each functional gradient step until it converges to zero. Based on this theory, we propose a new, stable generative adversarial method. We also provide a theoretical insight into the original GAN from this new viewpoint. Experiments on image generation demonstrate the effectiveness of the new method.
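To make the core idea concrete, the following is a minimal one-dimensional sketch of a functional gradient step in the spirit the abstract describes. It is not the paper's algorithm or experiments: it assumes an idealized discriminator that gives the exact log-density ratio between real and generated data (both Gaussian here), and transports generated samples along the gradient of that ratio, which moves the generated distribution toward the real one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumption, not from the paper):
# real data ~ N(0, 1); current generator output ~ N(3, 1).
mu_real, mu_gen, sigma = 0.0, 3.0, 1.0
x = rng.normal(mu_gen, sigma, size=10_000)

def grad_log_ratio(x):
    """Gradient of log(p_real(x) / p_gen(x)) for the two Gaussians above.

    This plays the role of an ideal discriminator's signal; a learned
    discriminator would only approximate it.
    """
    return -(x - mu_real) / sigma**2 + (x - mu_gen) / sigma**2

# One functional gradient step: move each generated sample along the
# gradient of the log-density ratio, shrinking the gap to the real data.
eta = 0.5
x_new = x + eta * grad_log_ratio(x)

print(f"mean before: {x.mean():.2f}")   # close to 3.0
print(f"mean after:  {x_new.mean():.2f}")  # closer to mu_real = 0.0
```

Iterating such steps (with a discriminator re-estimated between them) is the sense in which the generated distribution can improve monotonically toward the real one; in the actual method the transport map is learned, not computed in closed form.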
Original language: English
Pages: 2371-2379
Publication status: Published - Jul 2018
Externally published: Yes
Event: Proceedings of Machine Learning Research
Duration: 1 Jul 2018 - 1 Jul 2018

