arxivst · stuff from arXiv that you should probably bookmark

Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning

Abstract · Apr 13, 2017 02:45

Tags: direction, perturbation, around, point, kyoto, regularization, adversarial, isotropic, smoothing, vat, stat-ml, cs-lg

arXiv Abstract

  • Takeru Miyato
  • Shin-ichi Maeda
  • Masanori Koyama
  • Shin Ishii

We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the output distribution. Virtual adversarial loss is defined as the robustness of the model’s posterior distribution against local perturbation around each input data point. Our method is similar to adversarial training, but differs from adversarial training in that it determines the adversarial direction based only on the output distribution, and in that it is applicable to the semi-supervised setting. Because the directions in which we smooth the model are virtually adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low: for neural networks, the approximate gradient of the virtual adversarial loss can be computed with no more than two pairs of forward and back propagations. In our experiments, we applied VAT to supervised and semi-supervised learning on multiple benchmark datasets. With an additional improvement based on the entropy minimization principle, VAT achieves state-of-the-art performance on SVHN and CIFAR-10 for semi-supervised learning tasks.

Read the paper (pdf) »