
Sparse Communication for Distributed Gradient Descent

Abstract · Apr 17, 2017 16:32

cs.CL · cs.DC · cs.LG

arXiv Abstract

  • Alham Fikri Aji
  • Kenneth Heafield

We make distributed stochastic gradient descent faster by exchanging 99% sparse updates instead of dense updates. In data-parallel training, nodes pull updated values of the parameters from a sharded server, compute gradients, push their gradients to the server, and repeat. These push and pull updates strain the network. However, most updates are near zero, so we map the 99% smallest updates (by absolute value) to zero and then exchange sparse matrices. Even a simple coordinate-and-value encoding achieves a 50x reduction in bandwidth. Our experiment with a neural machine translation system on 4 GPUs achieved a 22% speed boost without impacting the BLEU score.
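To make the bandwidth arithmetic concrete, here is a minimal sketch of the drop-the-smallest-99% idea described in the abstract. It is not the authors' implementation; the function names, the drop ratio parameter, and the uint32/float32 encoding are assumptions made for illustration.

```python
import numpy as np

def sparsify(gradient, drop_ratio=0.99):
    """Keep only the largest-magnitude entries of a gradient matrix.

    Returns (coords, values, shape): flat indices and values of the
    surviving ~1% of entries, plus the original shape so the dense
    gradient can be rebuilt on the receiving side.
    """
    flat = gradient.ravel()
    k = max(1, int(flat.size * (1.0 - drop_ratio)))        # ~1% of entries survive
    # Indices of the k largest |values|; everything else is treated as zero.
    coords = np.argpartition(np.abs(flat), -k)[-k:]
    return coords.astype(np.uint32), flat[coords].astype(np.float32), gradient.shape

def densify(coords, values, shape):
    """Rebuild a dense gradient from (coordinate, value) pairs."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[coords] = values
    return flat.reshape(shape)

# Example: a 1000x1000 float32 gradient is ~4 MB dense, but the surviving
# 10,000 (coordinate, value) pairs take ~80 KB -- roughly the 50x
# bandwidth reduction the abstract mentions.
grad = np.random.randn(1000, 1000).astype(np.float32)
coords, values, shape = sparsify(grad)
recovered = densify(coords, values, shape)
```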

Read the paper (pdf) »