Efficient distributed learning with sparsity

Tong Zhang, Nathan Srebro, Mladen Kolar, Jialei Wang

Research output: Contribution to conference › Conference Paper › peer-review

Abstract

We propose a novel, efficient approach for distributed sparse learning with observations randomly partitioned across machines. In each round of the proposed method, worker machines compute the gradient of the loss on local data and the master machine solves a shifted ℓ1-regularized loss minimization problem. After a number of communication rounds that scales only logarithmically with the number of machines, and is independent of other problem parameters, the proposed approach provably matches the estimation error bound of centralized methods.
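
As a rough illustration of the round structure described in the abstract, below is a minimal NumPy sketch of one communication round for least-squares loss, assuming a DANE-style shifted subproblem solved on the master with proximal gradient (ISTA). The function names, the loss, and the inner solver are illustrative assumptions, not the paper's implementation.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm: shrink each coordinate toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def local_gradient(X, y, w):
    # Worker step: gradient of the local least-squares loss at w.
    return X.T @ (X @ w - y) / X.shape[0]

def master_solve(X1, y1, w, global_grad, lam, n_iters=500):
    # Master step: approximately minimize the shifted l1-regularized problem
    #   f_1(v) - <grad f_1(w) - global_grad, v> + lam * ||v||_1,
    # whose gradient at v = w equals the global gradient. ISTA is used as
    # the inner solver here; that choice is an assumption.
    shift = local_gradient(X1, y1, w) - global_grad
    step = X1.shape[0] / np.linalg.norm(X1, 2) ** 2  # 1 / Lipschitz constant
    v = w.copy()
    for _ in range(n_iters):
        g = local_gradient(X1, y1, v) - shift
        v = soft_threshold(v - step * g, step * lam)
    return v

def one_round(Xs, ys, w, lam):
    # One communication round: each worker sends its local gradient, the
    # master averages them and solves the shifted subproblem on its data.
    grads = [local_gradient(X, y, w) for X, y in zip(Xs, ys)]
    global_grad = np.mean(grads, axis=0)
    return master_solve(Xs[0], ys[0], w, global_grad, lam)

In this sketch, iterating one_round from an initial w corresponds to the communication rounds whose count the paper bounds logarithmically in the number of machines.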
Original language: English
Pages: 3636-3645
Publication status: Published - Aug 2017
Externally published: Yes
Event: Proceedings of Machine Learning Research
Duration: 1 Aug 2017 → 1 Aug 2017

Conference

Conference: Proceedings of Machine Learning Research
Period: 1/08/17 → 1/08/17
