GANs just keep getting better and better. Last Friday, researchers from Google posted some really strong results using a relatively simple (for GANs) model architecture. They use an auto-encoder as the discriminator and pair it with a training procedure inspired by Wasserstein GANs. The setup gives you a dial to trade off diversity against realism in the generated images. Oh, and the results look great too.
Auto-Encode Your Way to Realistic Images
Post · Apr 3, 2017 16:46
We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.
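To make the equilibrium idea concrete, here is a minimal sketch of the training-step arithmetic behind an auto-encoder discriminator of this kind: the discriminator's "score" is its reconstruction loss, a control variable `k` keeps generator and discriminator balanced, and a ratio `gamma` is the diversity-vs-quality dial mentioned above. The function and variable names are ours, and the scalars stand in for batch-averaged reconstruction losses; this is an illustration of the mechanism, not the authors' code.

```python
def began_losses(recon_real, recon_fake, k):
    """Discriminator and generator objectives, given the auto-encoder's
    reconstruction losses on real and generated images.
    The discriminator reconstructs real images well while letting the
    equilibrium term k scale how hard it pushes against fakes."""
    d_loss = recon_real - k * recon_fake
    g_loss = recon_fake  # the generator wants its samples reconstructed well
    return d_loss, g_loss

def update_k(k, recon_real, recon_fake, gamma=0.5, lambda_k=0.001):
    """Proportional control that nudges the system toward
    E[recon_fake] ≈ gamma * E[recon_real]. Lower gamma favors
    visual quality; higher gamma favors diversity."""
    k = k + lambda_k * (gamma * recon_real - recon_fake)
    return min(max(k, 0.0), 1.0)  # keep k in [0, 1]

def convergence_measure(recon_real, recon_fake, gamma=0.5):
    """Scalar that tracks how close training is to the target
    equilibrium; lower is better."""
    return recon_real + abs(gamma * recon_real - recon_fake)
```

One nice property of this formulation is that `convergence_measure` gives you a single number to watch during training, instead of eyeballing generated samples to judge progress.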