arxivst · stuff from arXiv that you should probably bookmark

Improved Training of Wasserstein GANs

Abstract · Mar 31, 2017 19:25

cs.LG · stat.ML

arXiv Abstract

  • Ishaan Gulrajani
  • Faruk Ahmed
  • Martin Arjovsky
  • Vincent Dumoulin
  • Aaron Courville

Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes significant progress toward stable training of GANs, but can still generate low-quality samples or fail to converge in some settings. We find that these training failures are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to pathological behavior. We propose an alternative method for enforcing the Lipschitz constraint: instead of clipping weights, penalize the norm of the gradient of the critic with respect to its input. Our proposed method converges faster and generates higher-quality samples than WGAN with weight clipping. Finally, our method enables very stable GAN training: for the first time, we can train a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data.
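The penalty the abstract describes replaces weight clipping with a term of the form λ(‖∇x̂ D(x̂)‖₂ − 1)², evaluated at points x̂ sampled along lines between real and generated samples. Here is a minimal NumPy sketch of just that penalty term, assuming the gradients have already been computed; the helper name and the toy linear critic (whose input-gradient is analytic) are illustrative, not the authors' code.

```python
import numpy as np

def gradient_penalty(critic_grad, lam=10.0):
    """Sketch of the WGAN-GP penalty: lam * (||grad||_2 - 1)^2, batch-averaged.

    critic_grad: array of shape (batch, dim), the gradient of the critic's
    output with respect to its input, evaluated at interpolated points.
    (lam=10 is the default coefficient used in the paper.)
    """
    norms = np.linalg.norm(critic_grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

# Toy linear critic f(x) = w @ x: its gradient w.r.t. x is w everywhere,
# so the penalty depends only on ||w||.
rng = np.random.default_rng(0)
w = np.array([1.0, 0.0])                # ||w|| = 1, so the constraint holds
real = rng.normal(size=(4, 2))
fake = rng.normal(size=(4, 2))
eps = rng.uniform(size=(4, 1))
x_hat = eps * real + (1 - eps) * fake   # interpolates between real and fake
grad = np.tile(w, (4, 1))               # analytic input-gradient of f
print(gradient_penalty(grad))           # → 0.0, since ||w|| = 1
```

In a real training loop the gradients would come from automatic differentiation rather than a closed form, and this term would be added to the critic's loss; the sketch only shows what is being penalized.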

Read the paper (pdf) »