arxivst stuff from arxiv that you should probably bookmark

Adversarial and Clean Data Are Not Twins

Abstract · Apr 17, 2017 13:25

fool binary crafting visually kurakin adversarial sign attack clean classifier cs-lg cs-ne

Arxiv Abstract

  • Zhitao Gong
  • Wenlu Wang
  • Wei-Shinn Ku

Adversarial attacks have cast a shadow on the massive success of deep neural networks. Despite being almost visually identical to the clean data, adversarial images can fool deep neural networks into wrong predictions with very high confidence. In this paper, however, we show that we can build a simple binary classifier that separates adversarial samples from clean data with accuracy over 99%. We also empirically show that the binary classifier is robust to a second-round adversarial attack; in other words, it is difficult to disguise adversarial samples to bypass the binary classifier. Furthermore, we empirically investigate the generalization limitation that lingers on all current defensive methods, including the binary classifier approach, and we hypothesize that this is the result of an intrinsic property of the adversarial crafting algorithms.
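The core idea, detecting adversarial inputs with a second, binary classifier trained on clean vs. adversarial samples, can be sketched at toy scale. The snippet below is a minimal illustration, not the paper's setup: it substitutes logistic regression on Gaussian data for the deep networks and image datasets used in the paper, crafts FGSM-style perturbations against a "victim" model, and then trains a detector (here on raw plus squared features, an assumed choice so a linear model can pick up the shift toward the decision boundary) to separate clean from adversarial points. All dimensions, epsilons, and learning rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clip to avoid overflow in exp for large |z|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain logistic regression trained with batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Toy two-class "clean" data (a stand-in for images).
n, d = 500, 20
X = np.vstack([rng.normal(-1.0, 1.0, size=(n, d)),
               rng.normal(+1.0, 1.0, size=(n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# The "victim" classifier the attacker targets.
w, b = train_logreg(X, y)

# FGSM-style attack: step in the sign of the loss gradient w.r.t. the input.
# For logistic regression with cross-entropy, d(loss)/dx = (p - y) * w.
eps = 1.0
p = sigmoid(X @ w + b)
X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])

# Binary detector: clean (label 0) vs. adversarial (label 1).
# Squared features let a linear detector see "distance from the class centers".
def detector_features(X):
    return np.hstack([X, X ** 2])

X_det = np.vstack([detector_features(X), detector_features(X_adv)])
y_det = np.concatenate([np.zeros(len(X)), np.ones(len(X_adv))])
w_det, b_det = train_logreg(X_det, y_det, lr=0.5, epochs=500)

acc = np.mean((sigmoid(X_det @ w_det + b_det) > 0.5) == y_det)
print(f"detector accuracy: {acc:.3f}")
```

At this toy scale the detector separates the two populations well above chance; the paper's 99%+ figure comes from training deep detectors on real image data, where adversarial perturbations leave a much richer statistical fingerprint.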

Read the paper (pdf) »