Guided Learning of Nonconvex Models through Successive Functional Gradient Optimization

Rie Johnson*, Tong Zhang*

*Corresponding author for this work

Research output: Contribution to conference › Conference Paper › peer-review

Abstract

This paper presents a framework of successive functional gradient optimization for training nonconvex models such as neural networks, where training is driven by mirror descent in a function space. We provide a theoretical analysis and empirical study of the training method derived from this framework. It is shown that the method leads to better performance than that of standard training techniques.
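The abstract describes training driven by successive stages, each guided by the previous stage's function via a mirror-descent proximity term. The toy sketch below illustrates one common reading of that idea on a logistic model: at each stage, the training target is a mix of the true labels and the previous stage's predictions, so the new model stays close to the old function while moving toward the data. The mixing weight `alpha`, the stage count, and the logistic setup are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Hypothetical sketch of successive functional-gradient ("guided") training.
# Each stage fits soft targets that interpolate between the labels y and the
# previous stage's function values, approximating a proximity term in
# function space. All constants here are illustrative.

rng = np.random.default_rng(0)

# Toy binary classification data.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true + 0.5 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_stage(X, target, w_init, lr=0.5, steps=200):
    """Fit a logistic model to (possibly soft) targets by gradient descent."""
    w = w_init.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        # Gradient of cross-entropy w.r.t. the weights for soft targets.
        w -= lr * X.T @ (p - target) / len(X)
    return w

alpha = 0.3      # weight on the labels vs. the previous stage's function
stages = 10
w = np.zeros(5)  # stage-0 model

for _ in range(stages):
    p_prev = sigmoid(X @ w)                          # previous stage's function
    soft_target = alpha * y + (1 - alpha) * p_prev   # guided target
    w = train_stage(X, soft_target, w)

accuracy = np.mean((sigmoid(X @ w) > 0.5) == y)
```

Because the soft target's fixed point is the label vector itself, the stages gradually approach an ordinary fit to the data while each intermediate model stays near its predecessor, which is the qualitative behavior the framework's guidance term is meant to produce.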
Original language: English
Pages: 4921-4930
Publication status: Published - Jul 2020
Event: Proceedings of Machine Learning Research
Duration: 1 Jul 2020 → 1 Jul 2020

Conference

Conference: Proceedings of Machine Learning Research
Period: 1/07/20 → 1/07/20

