
DualGAN: Unsupervised Dual Learning for Image-to-Image Translation

Abstract · Apr 8, 2017 16:13

dual gan translators translation french conditional image labeled primal english cs-cv

Arxiv Abstract

  • Zili Yi
  • Hao Zhang
  • Ping Tan
  • Minglun Gong

Conditional Generative Adversarial Networks (conditional GANs) for cross-domain image-to-image translation have achieved significant improvements in the past year. Depending on the degree of task complexity, thousands or even millions of labeled image pairs are needed to train conditional GANs. However, human labeling is very expensive and sometimes impractical. Inspired by the success of the dual learning paradigm in natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images, each representing a domain. In our architecture, the primal GAN learns to translate images from domain $U$ to those in domain $V$, while the dual GAN learns to convert images from $V$ to $U$. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translation models. Experiments on multiple image translation tasks with unlabeled data show considerable performance gains of our dual-GAN architecture over a single GAN. For some tasks, our model can even achieve results comparable to or slightly better than those of a conditional GAN trained on fully labeled data.
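
To make the closed-loop idea concrete, here is a minimal PyTorch sketch of the reconstruction loss the abstract describes: translate $U \to V \to U$ and $V \to U \to V$, then penalize the difference from the originals. The `Translator` network, image sizes, and use of an L1 penalty are illustrative assumptions, not the paper's exact architecture, loss weights, or adversarial terms.

```python
# Sketch of the dual reconstruction loss (assumptions: toy conv translator,
# L1 reconstruction penalty, 64x64 RGB images). Adversarial losses omitted.
import torch
import torch.nn as nn

class Translator(nn.Module):
    """Toy image-to-image translator standing in for the paper's generators."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

# Primal translator G_A: U -> V, dual translator G_B: V -> U.
G_A, G_B = Translator(), Translator()
l1 = nn.L1Loss()

def reconstruction_loss(u_batch, v_batch):
    """Closed-loop loss: translate, translate back, compare to the original."""
    u_rec = G_B(G_A(u_batch))   # U -> V -> U
    v_rec = G_A(G_B(v_batch))   # V -> U -> V
    return l1(u_rec, u_batch) + l1(v_rec, v_batch)

# Unlabeled batches from the two domains; no paired supervision is required.
u = torch.randn(4, 3, 64, 64)
v = torch.randn(4, 3, 64, 64)
loss = reconstruction_loss(u, v)
loss.backward()
```

In a full training loop this reconstruction term would be combined with the adversarial losses from the two discriminators, so that translated images both fool the target-domain discriminator and can be mapped back to their source.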

Read the paper (pdf) »