
Adversarial Attack Prevention

Post · Apr 12, 2017 03:13

transferability attacks adversarial subspaces MNIST DREBIN stat-ml cs-cr cs-lg

A new paper out of Stanford, Pennsylvania State and Google Brain delves into why adversarial examples crafted against one ML model so often fool another, and what that means for building robust models. It’s an interesting read in an interesting space.

Highlights From the Paper

  • Transferability is a strong obstacle to the secure deployment of ML: an adversary can launch black-box attacks by using a local surrogate model to craft adversarial examples that also mislead the target model (see the sketch after this list).
  • Our aim here is to evaluate the dimensionality of the adversarial subspace that transfers between models.
  • Similarity between decision boundaries enables transferability of adversarial examples across models.
  • Our measurements show that when moving away from data points in any particular direction, the distance between two models’ decision boundaries is usually smaller than the distance separating the data points from either boundary.
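
To make the first highlight concrete, here is a minimal sketch of a black-box transferability attack: adversarial examples are crafted with an FGSM-style step against a locally trained surrogate, then replayed against a separately trained target. This is not the authors' code; the logistic-regression models, synthetic blob data, and hyperparameters are illustrative assumptions (the paper itself experiments on MNIST and DREBIN).

```python
# Hedged sketch of a black-box transfer attack, not the paper's implementation.
# Two logistic-regression models are trained on disjoint splits of synthetic
# data; adversarial inputs crafted against the surrogate are then tested on
# the target to measure how often they transfer.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=3000, d=20):
    # Two Gaussian blobs -> binary labels in {0, 1}.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, d)) + 2.0 * y[:, None]
    return X, y

def train_logreg(X, y, lr=0.1, epochs=200):
    # Plain gradient-descent logistic regression; returns weights w and bias b.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y) / len(y))
        b -= lr * np.mean(p - y)
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

X, y = make_data()
# Adversary trains a surrogate on its own split; defender trains the target.
w_sur, b_sur = train_logreg(X[:1000], y[:1000])
w_tgt, b_tgt = train_logreg(X[1000:2000], y[1000:2000])

# FGSM-style perturbation against the surrogate: step in the sign of the
# gradient of the surrogate's loss with respect to the input.
X_test, y_test = X[2000:], y[2000:]
eps = 0.5
p_sur = 1.0 / (1.0 + np.exp(-(X_test @ w_sur + b_sur)))
grad_x = (p_sur - y_test)[:, None] * w_sur[None, :]   # d(loss)/d(input)
X_adv = X_test + eps * np.sign(grad_x)

print("surrogate fooled:        ", np.mean(predict(w_sur, b_sur, X_adv) != y_test))
print("target fooled (transfer):", np.mean(predict(w_tgt, b_tgt, X_adv) != y_test))
```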

Datasets

  • MNIST
  • DREBIN

arXiv Abstract

  • Florian Tramèr
  • Nicolas Papernot
  • Ian Goodfellow
  • Dan Boneh
  • Patrick McDaniel

Adversarial examples are maliciously perturbed inputs designed to mislead machine learning (ML) models at test-time. Adversarial examples are known to transfer across models: the same perturbed input is often misclassified by different models despite being generated to mislead a specific architecture. This phenomenon enables simple yet powerful black-box attacks against deployed ML systems. In this work, we propose novel methods for estimating the previously unknown dimensionality of the space of adversarial inputs. We find that adversarial examples span a contiguous subspace of large dimensionality and that a significant fraction of this space is shared between different models, thus enabling transferability. The dimensionality of the transferred adversarial subspace implies that the decision boundaries learned by different models are eerily close in the input domain, when moving away from data points in adversarial directions. A first quantitative analysis of the similarity of different models' decision boundaries reveals that these boundaries are actually close in arbitrary directions, whether adversarial or benign. We conclude with a formal study of the limits of transferability. We show (1) sufficient conditions on the data distribution that imply transferability for simple model classes and (2) examples of tasks for which transferability fails to hold. This suggests the existence of defenses making models robust to transferability attacks, even when the model is not robust to its own adversarial examples.
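
The boundary-closeness claim (the last highlight above, and the abstract's "eerily close" observation) can be illustrated with a toy line search: walk away from a data point along a fixed direction and record where each model's prediction flips. The sketch below is an assumption-laden illustration, not the paper's measurement methodology; the two hand-made linear models with deliberately similar weights merely stand in for independently trained models.

```python
# Hedged illustration: distance from a point to each model's decision boundary
# along one direction, and the gap between the two crossing points
# (the "inter-boundary distance" in that direction).
import numpy as np

rng = np.random.default_rng(1)

def boundary_distance(w, b, x, direction, t_max=20.0, steps=4000):
    """Smallest step t along `direction` at which the linear model's
    prediction sign(w.x + b) flips; np.inf if it never flips."""
    start = np.sign(x @ w + b)
    for t in np.linspace(0.0, t_max, steps):
        if np.sign((x + t * direction) @ w + b) != start:
            return t
    return np.inf

d = 20
# Two stand-in "models": similar but not identical linear decision boundaries.
w1 = rng.normal(size=d)
b1 = 0.1
w2 = w1 + 0.05 * rng.normal(size=d)
b2 = 0.12

x0 = rng.normal(size=d)                      # a data point
# An adversarial-ish unit direction, oriented so it actually crosses boundary 1.
direction = -np.sign(x0 @ w1 + b1) * np.sign(w1) / np.sqrt(d)

d1 = boundary_distance(w1, b1, x0, direction)
d2 = boundary_distance(w2, b2, x0, direction)
print("distance to model 1 boundary:", d1)
print("distance to model 2 boundary:", d2)
print("inter-boundary gap:          ", abs(d1 - d2))
```

In this toy setting the gap between the two boundaries comes out much smaller than either point-to-boundary distance, which is the shape of the result the authors report on real models.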

Read the paper (pdf) »