
MAGAN, Better than BEGAN

Post · Apr 13, 2017 15:55

Tags: MAGAN · MNIST · CIFAR-10 · CelebA · state-of-the-art · BEGAN · cs-lg · stat-ml

What’s better than BEGAN? MAGAN! Or at least that’s what the authors of this paper are saying. They are probably right, because a) they have a simpler setup and b) they use public datasets, so their work is reproducible. Plus, they are promising to release the code on GitHub in the near future.

Highlights From the Paper

  • Achieve improvements over the state-of-the-art results on MNIST, CIFAR-10 and CelebA datasets.
  • The method requires no other techniques, such as batch normalization or layer-wise noise, to help with the training.
  • We use a deep convolutional generator analogous to DCGAN’s for all experiments. For discriminators, we use a fully-connected auto-encoder for MNIST dataset to prevent overfitting, and a fully convolutional one for CIFAR-10 and CelebA datasets.
  • With the exception of Denoising Feature Matching (DFM), our method outperforms all other methods… Further, DFM may be compatible with our framework and is a possible direction for future investigations.
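The third highlight describes the discriminator as an auto-encoder, i.e. an energy-based setup where reconstruction error plays the role of the energy. As a rough illustration only, here is a minimal sketch of the fully-connected auto-encoder discriminator described for MNIST; PyTorch and the layer sizes are my assumptions, not the paper’s (the authors promise to release their own code).

```python
import torch.nn as nn

class AutoencoderDiscriminator(nn.Module):
    """Energy-based discriminator: D(x) is the per-sample
    reconstruction error of an auto-encoder (illustrative sizes)."""
    def __init__(self, img_dim=28 * 28, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(img_dim, 256), nn.ReLU(),
            nn.Linear(256, hidden), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        flat = x.view(x.size(0), -1)  # flatten images to vectors
        recon = self.decoder(self.encoder(flat))
        # Energy = mean squared reconstruction error per sample.
        return ((recon - flat) ** 2).mean(dim=1)
```

Per the same highlight, a fully convolutional encoder/decoder would take the place of the linear layers for CIFAR-10 and CelebA.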

More Highlights

Table 1 - Inception scores: MAGAN - 5.67, BEGAN - 5.62

Datasets

  • MNIST
  • CIFAR-10
  • CelebA

arXiv Abstract

  • Ruohan Wang
  • Antoine Cully
  • Hyung Jin Chang
  • Yiannis Demiris

We propose a novel training procedure for Generative Adversarial Networks (GANs) to improve stability and performance by using an adaptive hinge loss objective function. We estimate the appropriate hinge loss margin with the expected energy of the target distribution, and derive both a principled criterion for updating the margin and an approximate convergence measure. The resulting training procedure is simple yet robust on a diverse set of datasets. We evaluate the proposed training procedure on the task of unsupervised image generation, noting both qualitative and quantitative performance improvements.
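To make the abstract concrete, here is a hedged sketch of an adaptive hinge-loss objective for an energy-based (auto-encoder) discriminator like the one above. The paper derives a principled margin-update criterion and an approximate convergence measure; this sketch simply re-estimates the margin as the expected energy of the real data, which is the quantity the abstract ties the margin to. All function names are illustrative assumptions.

```python
import torch

def d_loss(D, real, fake, margin):
    # Hinge loss: push real energy down; push fake energy up to the margin.
    # (fake is detached so generator gradients don't flow through this step)
    return D(real).mean() + torch.relu(margin - D(fake.detach())).mean()

def g_loss(D, fake):
    # The generator tries to produce samples with low energy.
    return D(fake).mean()

@torch.no_grad()
def estimate_margin(D, real_loader, device="cpu"):
    # Simplified margin update (an assumption, not the paper's exact rule):
    # the expected energy of the real data under the current discriminator.
    total, count = 0.0, 0
    for x, _ in real_loader:  # assumes (image, label) batches
        e = D(x.to(device))
        total += e.sum().item()
        count += e.numel()
    return total / count
```

A training loop would alternate optimizer steps on d_loss and g_loss, calling estimate_margin periodically (e.g. once per epoch) to refresh the margin; the paper’s actual criterion decides when the margin may shrink.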

Read the paper (pdf) »