Gradient sparsification for communication-efficient distributed optimization

Jianqiao Wangni, Ji Liu, Jialei Wang, Tong Zhang

Research output: Contribution to journal › Conference article published in journal › peer-review

427 Citations (Scopus)

Abstract

Modern large-scale machine learning applications require stochastic optimization algorithms to be implemented on distributed computational architectures. A key bottleneck is the communication overhead for exchanging information such as stochastic gradients among different workers. In this paper, to reduce the communication cost, we propose a convex optimization formulation to minimize the coding length of stochastic gradients. The key idea is to randomly drop out coordinates of the stochastic gradient vectors and amplify the remaining coordinates appropriately to ensure that the sparsified gradient is unbiased. To solve the optimal sparsification efficiently, a simple and fast algorithm is proposed for an approximate solution, with a theoretical guarantee for sparseness. Experiments on ℓ2-regularized logistic regression, support vector machines and convolutional neural networks validate our sparsification approaches.
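The idea in the abstract — keep each gradient coordinate with some probability and rescale the kept coordinates so the expectation is unchanged — can be illustrated with a short sketch. This is not the paper's exact algorithm: here the keep probabilities are simply taken proportional to coordinate magnitude and capped at 1 (a rough stand-in for the paper's optimal probabilities), and the function name and `target_sparsity` budget heuristic are illustrative.

```python
import numpy as np

def sparsify(g, target_sparsity=0.25, rng=None):
    """Unbiased gradient sparsification sketch.

    Keeps coordinate i with probability p_i and rescales it by 1/p_i,
    so E[sparsified g] = g coordinate-wise. Here p_i is chosen
    proportional to |g_i| (capped at 1), an illustrative approximation
    to the optimal probabilities derived in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    mag = np.abs(g)
    # Budget: expected number of coordinates to keep (heuristic).
    budget = max(1, int(target_sparsity * g.size))
    p = np.minimum(1.0, budget * mag / mag.sum())
    keep = rng.random(g.size) < p        # drop coordinate i w.p. 1 - p_i
    out = np.zeros_like(g)
    out[keep] = g[keep] / p[keep]        # amplify survivors for unbiasedness
    return out
```

Averaging many sparsified samples should recover the original gradient, since E[out_i] = p_i · (g_i / p_i) = g_i, while large-magnitude coordinates are kept more often, keeping the variance added by sparsification small.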

Original language: English
Pages (from-to): 1299-1309
Number of pages: 11
Journal: Advances in Neural Information Processing Systems
Volume: 2018-December
Publication status: Published - 2018
Externally published: Yes
Event: 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada
Duration: 2 Dec 2018 - 8 Dec 2018

Bibliographical note

Publisher Copyright:
© 2018 Curran Associates Inc. All rights reserved.
