Asynchronous distributed ADMM for consensus optimization

Ruiliang Zhang, James T. Kwok

Research output: Conference paper published in a book (peer-reviewed)

108 Citations (Scopus)

Abstract

Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. However, this suffers from the straggler problem, as its updates have to be synchronized. In this paper, we propose an asynchronous ADMM algorithm that uses two conditions to control the asynchrony: partial barrier and bounded delay. The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate reduces to that of its synchronous counterpart). Experiments on different distributed ADMM applications show that asynchrony reduces time spent waiting on the network and achieves faster convergence than the synchronous counterpart in terms of wall-clock time.
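As a rough illustration of the two asynchrony-control conditions named in the abstract, here is a minimal simulated sketch (not the authors' implementation) of consensus ADMM on scalar quadratics f_i(x) = ½(x − a_i)², whose consensus minimizer is the mean of the a_i. The function name and the `barrier`/`tau` parameters are hypothetical labels for the partial-barrier and bounded-delay conditions: the master proceeds after only `barrier` worker updates per round, and any worker idle for `tau` rounds is forced to participate.

```python
import random

def async_consensus_admm(a, rho=1.0, n_iters=200, barrier=2, tau=3, seed=0):
    """Simulated asynchronous consensus ADMM on f_i(x) = 0.5*(x - a_i)^2.

    The true consensus minimizer is mean(a). This is an illustrative
    single-process simulation of a master/worker scheme, not the paper's
    distributed implementation.
    """
    rng = random.Random(seed)
    n = len(a)
    x = [0.0] * n   # local primal variables (one per worker)
    u = [0.0] * n   # scaled dual variables (one per worker)
    z = 0.0         # global consensus variable held by the master
    idle = [0] * n  # rounds since each worker last updated

    for _ in range(n_iters):
        # Bounded delay: workers idle for >= tau rounds must participate.
        must = [i for i in range(n) if idle[i] >= tau]
        rest = [i for i in range(n) if idle[i] < tau]
        rng.shuffle(rest)
        # Partial barrier: the master waits for at least `barrier` workers.
        active = must + rest[:max(0, barrier - len(must))]

        # Active workers update x_i, then u_i, against the z they received.
        for i in active:
            x[i] = (a[i] + rho * (z - u[i])) / (1.0 + rho)
            u[i] += x[i] - z
            idle[i] = 0
        for i in range(n):
            if i not in active:
                idle[i] += 1

        # Master update mixes fresh and stale (x_i + u_i) contributions.
        z = sum(x[i] + u[i] for i in range(n)) / n
    return z
```

Setting `barrier=len(a)` recovers the synchronous algorithm, since every worker then updates in every round; shrinking `barrier` models the master proceeding without waiting for stragglers, while `tau` caps how stale any worker's contribution can become.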

Original language: English
Title of host publication: 31st International Conference on Machine Learning, ICML 2014
Publisher: International Machine Learning Society (IMLS)
Pages: 3689-3697
Number of pages: 9
ISBN (Electronic): 9781634393973
Publication status: Published - 2014
Event: 31st International Conference on Machine Learning, ICML 2014 - Beijing, China
Duration: 21 Jun 2014 - 26 Jun 2014

Publication series

Name: 31st International Conference on Machine Learning, ICML 2014
Volume: 5

Conference

Conference: 31st International Conference on Machine Learning, ICML 2014
Country/Territory: China
City: Beijing
Period: 21/06/14 - 26/06/14

Bibliographical note

Publisher Copyright:
Copyright 2014 by the author(s).

