What’s better than BEGAN? MAGAN! Or at least that’s what the authors of this paper claim. They may well be right, because a) they use a simpler setup and b) they evaluate on public datasets, so their work is reproducible. Plus, they promise to release the code on GitHub in the near future.
Highlights From the Paper
- Achieves improvements over state-of-the-art results on the MNIST, CIFAR-10, and CelebA datasets.
- The method requires no auxiliary techniques such as batch normalization or layer-wise noise to help with training.
- “We use a deep convolutional generator analogous to DCGAN’s for all experiments. For discriminators, we use a fully-connected auto-encoder for the MNIST dataset to prevent overfitting, and a fully convolutional one for the CIFAR-10 and CelebA datasets.”
- “With the exception of Denoising Feature Matching (DFM), our method outperforms all other methods… Further, DFM may be compatible with our framework and is a possible direction for future investigations.”
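The excerpt above doesn’t spell out the loss, but MAGAN, like EBGAN and BEGAN, treats the auto-encoder discriminator’s reconstruction error as an “energy,” and trains against a hinge loss with margin m; MAGAN’s twist is that m is adapted during training rather than fixed. A minimal NumPy sketch of that formulation (the L2 energy, the function names, and the toy numbers are my own illustration, not from the paper):

```python
import numpy as np

def energy(x, x_reconstructed):
    # Auto-encoder "energy": per-sample reconstruction error
    # (L2 here; the exact choice is illustrative)
    return np.mean((x - x_reconstructed) ** 2, axis=-1)

def discriminator_loss(e_real, e_fake, margin):
    # EBGAN-style hinge: push real energies down, and push
    # fake energies up until they reach the margin m
    return np.mean(e_real) + np.mean(np.maximum(0.0, margin - e_fake))

def generator_loss(e_fake):
    # The generator tries to lower the energy of its samples
    return np.mean(e_fake)

# Toy numbers: fakes already above the margin contribute no hinge term
e_real = np.array([0.1, 0.2])
e_fake = np.array([0.5, 1.5])
m = 1.0
d_loss = discriminator_loss(e_real, e_fake, m)  # 0.15 + mean(0.5, 0.0) = 0.40
g_loss = generator_loss(e_fake)                 # 1.0
```

In this framing, the margin-adaptation idea amounts to shrinking m as the discriminator gets better at reconstructing real data, instead of hand-tuning it per dataset.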
Table 1 - Inception scores:

| Model | Inception score |
| ----- | --------------- |
| MAGAN | 5.67            |
| BEGAN | 5.62            |