Accelerated stochastic gradient method for composite regularization

Leon Wenliang Zhong, James T. Kwok

Research output: Contribution to journal › Conference article published in journal › peer-review

Abstract

Regularized risk minimization often involves nonsmooth optimization. This can be particularly challenging when the regularizer is a sum of simpler regularizers, as in the overlapping group lasso. Very recently, this difficulty has been alleviated by the proximal average, in which an implicitly defined nonsmooth function is used to approximate the composite regularizer. In this paper, we propose a novel extension that combines the proximal average with an accelerated gradient method for stochastic optimization. On both general convex and strongly convex problems, the resulting approximation error decreases at a faster rate than for methods based on stochastic smoothing and ADMM. This is also verified experimentally on a number of synthetic and real-world data sets.
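The abstract describes combining an accelerated (Nesterov-type) stochastic gradient step with the proximal average, whose proximal map is the weighted average of the component regularizers' proximal maps. The Python sketch below is only an illustration of that idea, not the authors' implementation: the squared loss, the l1 plus group-lasso composite regularizer, the momentum schedule, and all names and step sizes are assumptions chosen for the example.

import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_group(v, t, groups):
    # Proximal operator of t * sum of group l2 norms (block soft-thresholding).
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm == 0 else max(0.0, 1.0 - t / norm) * v[g]
    return out

def accel_stoch_prox_avg(X, y, groups, eta=0.01, alpha=(0.5, 0.5),
                         n_iters=1000, rng=np.random.default_rng(0)):
    # Accelerated stochastic gradient with a proximal-average step.
    # The composite regularizer alpha[0]*l1 + alpha[1]*group-lasso is
    # approximated by its proximal average, whose proximal map is the
    # weighted average of the component proximal maps.
    n, d = X.shape
    x_prev = x = np.zeros(d)
    for t in range(1, n_iters + 1):
        beta = (t - 1) / (t + 2)          # Nesterov momentum weight
        z = x + beta * (x - x_prev)       # extrapolation point
        i = rng.integers(n)               # sample one training example
        grad = (X[i] @ z - y[i]) * X[i]   # stochastic gradient of squared loss
        v = z - eta * grad                # gradient step
        # Proximal-average step: average the component proximal maps.
        x_prev, x = x, alpha[0] * prox_l1(v, eta) + alpha[1] * prox_group(v, eta, groups)
    return x

The key line is the final update: rather than computing the proximal map of the sum of regularizers (which generally has no closed form, as in the overlapping group lasso), each component proximal map is evaluated in closed form and averaged, which is exactly the proximal map of the proximal-average approximation to the composite regularizer.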

Original language: English
Pages (from-to): 1086-1094
Number of pages: 9
Journal: Journal of Machine Learning Research
Volume: 33
Publication status: Published - 2014
Event: 17th International Conference on Artificial Intelligence and Statistics, AISTATS 2014 - Reykjavik, Iceland
Duration: 22 Apr 2014 - 25 Apr 2014
