Abstract
We present a novel Newton-type method for distributed optimization, which is particularly well suited for stochastic optimization and learning problems. For quadratic objectives, the method enjoys a linear rate of convergence which provably improves with the data size, requiring an essentially constant number of iterations under reasonable assumptions. We provide theoretical and empirical evidence of the advantages of our method compared to other approaches, such as one-shot parameter averaging and ADMM.
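To make the communication pattern the abstract describes concrete, below is a minimal NumPy sketch of one approximate Newton-type iteration on a distributed least-squares problem. It assumes each machine holds a shard `(X_i, y_i)` of a quadratic objective and, after one gradient-averaging round, takes a Newton-like step that preconditions the exact global gradient with its own local Hessian before a second averaging round. All names (`dane_style_step`, `mu`, `eta`, the shard layout) are illustrative assumptions, and this is one plausible instantiation of such a method, not necessarily the paper's exact update; see the paper for the precise subproblem and parameter choices.

```python
# Sketch of an approximate Newton-type step for distributed least squares.
# Assumption: for the local quadratic phi_i(w) = (1/2n_i)||X_i w - y_i||^2,
# a regularized local subproblem reduces in closed form to
#   w_i = w - (H_i + mu*I)^{-1} (eta * g),
# where g is the averaged (global) gradient and H_i the local Hessian.
import numpy as np

def local_gradient(X, y, w):
    """Gradient of the local objective (1/2n)||Xw - y||^2."""
    return X.T @ (X @ w - y) / X.shape[0]

def dane_style_step(shards, w, mu=0.0, eta=1.0):
    d = w.shape[0]
    # Communication round 1: average local gradients -> exact global gradient.
    g = np.mean([local_gradient(X, y, w) for X, y in shards], axis=0)
    # Each machine takes a Newton-like step, preconditioning the global
    # gradient with its *local* Hessian (plus mu*I regularization).
    new_iterates = []
    for X, y in shards:
        H = X.T @ X / X.shape[0]  # local Hessian of the quadratic
        step = np.linalg.solve(H + mu * np.eye(d), eta * g)
        new_iterates.append(w - step)
    # Communication round 2: average the locally computed solutions.
    return np.mean(new_iterates, axis=0)

# Toy usage: 4 machines, each holding its own shard of a linear regression.
rng = np.random.default_rng(0)
d, n_per_machine, machines = 10, 500, 4
w_true = rng.normal(size=d)
shards = []
for _ in range(machines):
    X = rng.normal(size=(n_per_machine, d))
    shards.append((X, X @ w_true + 0.01 * rng.normal(size=n_per_machine)))

w = np.zeros(d)
for t in range(5):
    w = dane_style_step(shards, w)
print(np.linalg.norm(w - w_true))
```

In this sketch the error contracts quickly because, as the per-machine sample size grows, each local Hessian concentrates around the global Hessian, so the locally preconditioned step approaches an exact Newton step. This is consistent with the abstract's claim that the convergence rate improves with the data size.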
| Original language | English |
|---|---|
| Title of host publication | 31st International Conference on Machine Learning, ICML 2014 |
| Publisher | International Machine Learning Society (IMLS) |
| Pages | 2665-2681 |
| Number of pages | 17 |
| ISBN (Electronic) | 9781634393973 |
| Publication status | Published - 2014 |
| Externally published | Yes |
| Event | 31st International Conference on Machine Learning, ICML 2014 - Beijing, China |
| Duration | 21 Jun 2014 → 26 Jun 2014 |
Publication series
| Name | 31st International Conference on Machine Learning, ICML 2014 |
|---|---|
| Volume | 3 |
Conference
| Conference | 31st International Conference on Machine Learning, ICML 2014 |
|---|---|
| Country/Territory | China |
| City | Beijing |
| Period | 21/06/14 → 26/06/14 |
Bibliographical note
Publisher Copyright: © 2014 by the International Machine Learning Society (IMLS). All rights reserved.