Communication-efficient distributed optimization using an approximate Newton-type method

Ohad Shamir, Nathan Srebro, Tong Zhang

Research output: Chapter in Book/Conference Proceeding/Report › Conference paper published in a book › peer-review

Abstract

We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.
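For intuition, below is a minimal numerical sketch of one DANE-style iteration scheme for quadratic objectives, in the spirit of the abstract: machines average their gradients, each solves a regularized local subproblem in closed form, and the local solutions are averaged. The function name dane_quadratic, the implicit step parameter eta = 1, the regularizer mu, and the ridge-regression demo are illustrative assumptions for this sketch, not the paper's exact procedure or guarantees.

import numpy as np

def dane_quadratic(As, bs, mu=0.0, iters=20):
    """Illustrative DANE-style iterations (an assumption-laden sketch,
    not the paper's exact algorithm) for minimizing the average of the
    local quadratics f_i(w) = 0.5 * w.T @ A_i @ w - b_i @ w."""
    d = As[0].shape[0]
    w = np.zeros(d)
    for _ in range(iters):
        # Communication round 1: average the local gradients at w.
        g = np.mean([A @ w - b for A, b in zip(As, bs)], axis=0)
        # Each machine solves its regularized local subproblem; for a
        # quadratic f_i the optimality condition reduces to the linear
        # system (A_i + mu*I)(w_i - w) = -g, i.e.
        # w_i = w - (A_i + mu*I)^{-1} g.
        w_locals = [w - np.linalg.solve(A + mu * np.eye(d), g) for A in As]
        # Communication round 2: average the local solutions.
        w = np.mean(w_locals, axis=0)
    return w

if __name__ == "__main__":
    # Ridge-regression demo: each of m machines holds n samples drawn
    # from the same distribution, so local Hessians concentrate around
    # the global one as n grows -- the regime the abstract describes,
    # where few iterations suffice.
    rng = np.random.default_rng(0)
    m, n, d, lam = 8, 1000, 5, 0.1
    As, bs = [], []
    for _ in range(m):
        X = rng.standard_normal((n, d))
        y = X @ np.ones(d) + 0.1 * rng.standard_normal(n)
        As.append(X.T @ X / n + lam * np.eye(d))
        bs.append(X.T @ y / n)
    w = dane_quadratic(As, bs, mu=0.0, iters=10)
    w_star = np.linalg.solve(np.mean(As, axis=0), np.mean(bs, axis=0))
    print("distance to exact minimizer:", np.linalg.norm(w - w_star))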

Original language: English
Title of host publication: 31st International Conference on Machine Learning, ICML 2014
Publisher: International Machine Learning Society (IMLS)
Pages: 2665-2681
Number of pages: 17
ISBN (Electronic): 9781634393973
Publication status: Published - 2014
Externally published: Yes
Event: 31st International Conference on Machine Learning, ICML 2014 - Beijing, China
Duration: 21 Jun 2014 → 26 Jun 2014

Publication series

Name: 31st International Conference on Machine Learning, ICML 2014
Volume: 3

Conference

Conference: 31st International Conference on Machine Learning, ICML 2014
Country/Territory: China
City: Beijing
Period: 21/06/14 → 26/06/14

Bibliographical note

Publisher Copyright:
Copyright © (2014) by the International Machine Learning Society (IMLS). All rights reserved.
