
Batch-Expansion Training: An Efficient Optimization Paradigm for Machine Learning

Abstract · Apr 22, 2017 01:26

batches batch-expansion stochastic disk bet cs-lg

arXiv Abstract

  • Michal Derezinski
  • Dhruv Mahajan
  • S. Sathiya Keerthi
  • S. V. N. Vishwanathan
  • Markus Weimer

We propose Batch-Expansion Training (BET), a framework for running a batch optimizer on a gradually expanding dataset. As opposed to stochastic approaches, batches do not need to be resampled i.i.d. at every iteration, which makes BET more resource-efficient in a distributed setting and when disk access is constrained. Moreover, BET can be easily paired with most batch optimizers, requires no parameter tuning, and compares favorably to existing stochastic and batch methods. We show that when the batch size grows exponentially with the number of outer iterations, BET achieves the optimal $\tilde{O}(1/\epsilon)$ data-access convergence rate for strongly convex objectives.
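To make the idea concrete, here is a minimal sketch of the batch-expansion loop in Python. It is not the paper's implementation: plain gradient descent stands in for the inner batch optimizer, and the names (`bet_train`) and defaults (`init_batch`, `growth`, `inner_steps`) are illustrative assumptions. The key points it illustrates are that each outer iteration reuses a fixed, sequentially read data prefix (no i.i.d. resampling) and that the prefix grows exponentially until it covers the full dataset.

```python
import numpy as np

def bet_train(X, y, grad_fn, w0, inner_steps=10, lr=0.1,
              growth=2.0, init_batch=64):
    """Sketch of Batch-Expansion Training: run a batch optimizer
    (plain gradient descent here) on an exponentially growing
    prefix of the dataset. All defaults are illustrative."""
    w = w0.copy()
    n = X.shape[0]
    batch = min(init_batch, n)
    while True:
        # Current batch is a contiguous prefix: it can be read
        # sequentially from disk and reused across inner steps.
        Xb, yb = X[:batch], y[:batch]
        for _ in range(inner_steps):  # inner batch-optimizer steps
            w -= lr * grad_fn(w, Xb, yb)
        if batch == n:                # full dataset reached: done
            break
        batch = min(int(batch * growth), n)  # expand exponentially
    return w

def logistic_grad(w, X, y):
    """Average logistic-loss gradient for labels y in {-1, +1}."""
    z = y * (X @ w)
    return -(X.T @ (y / (1.0 + np.exp(z)))) / len(y)

# Example usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=1000))
w = bet_train(X, y, logistic_grad, np.zeros(5))
```

Doubling the prefix each outer iteration (`growth=2.0`) matches the abstract's exponential-growth condition; after the final expansion the loop is simply running the batch optimizer on the entire dataset.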

Read the paper (pdf) »