arxivst · stuff from arxiv that you should probably bookmark

Interpretable Explanations of Black Boxes by Meaningful Perturbation

Abstract · Apr 11, 2017 14:15

black box explanations predictors explain saliency cs-cv cs-ai cs-lg stat-ml

Arxiv Abstract

  • Ruth Fong
  • Andrea Vedaldi

As machine learning algorithms are increasingly applied to high-impact yet high-risk tasks, e.g. problems in health, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks “look” in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions: First, we propose a general framework for learning different kinds of explanations for any black box algorithm. Second, we introduce a paradigm that learns the minimally salient part of an image by directly editing it and learning from the corresponding changes to the model's output. Unlike previous works, our method is model-agnostic and testable because it is grounded in replicable image perturbations.
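The core idea, perturbing an image and watching how the model's output changes, can be illustrated with a rough occlusion-style sketch. Note this is not the paper's actual method (which optimizes a smooth perturbation mask via gradients); the scorer `toy_score` below is a hypothetical stand-in for a black-box classifier, and all names are invented for illustration.

```python
import numpy as np

def toy_score(img):
    # Hypothetical black-box scorer: responds to brightness in the
    # central region of the image (a stand-in for a classifier's
    # softmax score for one class).
    h, w = img.shape
    return img[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean()

def perturbation_saliency(img, score_fn, patch=4, fill=0.0):
    """Occlusion-style saliency sketch: replace each patch with a
    constant value and record how much the score drops."""
    h, w = img.shape
    base = score_fn(img)
    sal = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            perturbed = img.copy()
            perturbed[y:y + patch, x:x + patch] = fill
            # A larger drop in score means that region mattered more.
            sal[y:y + patch, x:x + patch] = base - score_fn(perturbed)
    return sal

img = np.ones((16, 16))  # uniformly bright toy "image"
sal = perturbation_saliency(img, toy_score)
# Occluding a central patch lowers the score; a corner patch does not.
print(sal[8, 8] > sal[0, 0])  # → True
```

The paper replaces this exhaustive patch sweep with a single learned mask, optimized so that the smallest, smoothest perturbation maximally suppresses the target class score.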

Read the paper (pdf) »