Conference paper, Year: 2018

Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go?

Abstract

One of the most widely used training methods for large-scale machine learning problems is distributed asynchronous stochastic gradient descent (DASGD). However, a key issue in its implementation is that of delays: when a "worker" node asynchronously contributes a gradient update to the "master", the global model parameters may have changed, rendering this information stale. In massively parallel computing grids, these delays can quickly add up if a node is saturated, so the convergence of DASGD is uncertain under these conditions. Nevertheless, by using a judiciously chosen quasilinear step-size sequence, we show that it is possible to amortize these delays and achieve global convergence with probability 1, even under polynomially growing delays, thereby reaffirming the successful application of DASGD to large-scale optimization problems.
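The abstract describes the staleness issue and the step-size remedy only at a high level. As a minimal sketch, the single-process simulation below mimics a master applying gradients computed at stale iterates; the toy quadratic objective, the sqrt(n) delay model, and the 1/((n+1) log(n+1)) schedule are illustrative assumptions, not the paper's exact construction or results.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10

# Toy strongly convex quadratic: f(x) = 0.5 * (x - x_star)^T Q (x - x_star),
# with Q scaled so its eigenvalues lie roughly in [1, 2].
A = rng.standard_normal((d, d))
Q = A.T @ A
Q = Q / np.linalg.norm(Q, 2) + np.eye(d)
x_star = rng.standard_normal(d)

def stochastic_grad(x):
    """Noisy gradient of the toy objective (noise models minibatch sampling)."""
    return Q @ (x - x_star) + 0.1 * rng.standard_normal(d)

x = np.zeros(d)
iterates = [x.copy()]          # past master iterates, used to model staleness
T = 20000

print("initial distance to optimum:", np.linalg.norm(x - x_star))
for n in range(1, T + 1):
    # Polynomially growing delay (illustrative): staleness up to ~sqrt(n).
    delay = int(rng.integers(0, int(np.sqrt(n)) + 1))
    stale_x = iterates[max(0, len(iterates) - 1 - delay)]

    g = stochastic_grad(stale_x)            # worker reports a gradient at a stale point
    step = 1.0 / ((n + 1) * np.log(n + 1))  # hypothetical quasilinear step-size schedule
    x = x - step * g                        # master applies the possibly stale update
    iterates.append(x.copy())

print("final distance to optimum:  ", np.linalg.norm(x - x_star))
```

Under these assumptions the iterate still drifts toward the optimum despite the growing staleness, which is the qualitative behavior the paper establishes with probability 1; the sketch is not meant to reproduce the paper's theorems.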
Main file
Delayed-ICML.pdf (706.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01891449, version 1 (09-10-2018)

Identifiers

  • HAL Id: hal-01891449, version 1

Cite

Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Peter W. Glynn, Yinyu Ye, et al. Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go?. ICML 2018 - 35th International Conference on Machine Learning, Jul 2018, Stockholm, Sweden. pp. 1-10. ⟨hal-01891449⟩