
Visual Question Answering, an In-Depth Survey

Post · Mar 29, 2017 17:13

VQA survey

This is a solid survey of visual question answering (VQA) algorithms. It explores how VQA datasets are created, gives an overview of the current state of the art, and provides a comprehensive analysis of the methods used for various sub-tasks. The authors begin with a simple multi-layer perceptron baseline and build up to various state-of-the-art architectures. They also review common pitfalls in the VQA problem domain.
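As a rough illustration of the kind of multi-layer perceptron baseline the paper builds up from, here is a minimal sketch (in PyTorch) that concatenates precomputed image features with a question embedding and classifies over a fixed answer vocabulary. The class name, layer sizes, and feature dimensions are assumptions for illustration, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    class MLPBaseline(nn.Module):
        """Concatenate image and question features, then classify over answers.
        All dimensions below are illustrative assumptions, not the paper's settings."""
        def __init__(self, img_dim=2048, q_dim=300, hidden=1024, num_answers=1000):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(img_dim + q_dim, hidden),
                nn.ReLU(),
                nn.Dropout(0.5),
                nn.Linear(hidden, num_answers),
            )

        def forward(self, img_feat, q_feat):
            # img_feat: (batch, img_dim), e.g. pooled CNN features
            # q_feat:   (batch, q_dim),   e.g. averaged word embeddings
            return self.net(torch.cat([img_feat, q_feat], dim=1))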

arXiv Abstract

  • Kushal Kafle
  • Christopher Kanan

In visual question answering (VQA), an algorithm must answer text-based questions about images. While multiple datasets for VQA have been created since late 2014, they all have flaws in both their content and the way algorithms are evaluated on them. As a result, evaluation scores are inflated and predominantly determined by answering easier questions, making it difficult to compare different methods. In this paper, we analyze existing VQA algorithms using a new dataset. It contains over 1.6 million questions organized into 12 different categories. We also introduce questions that are meaningless for a given image to force a VQA system to reason about image content. We propose new evaluation schemes that compensate for over-represented question-types and make it easier to study the strengths and weaknesses of algorithms. We analyze the performance of both baseline and state-of-the-art VQA models, including multi-modal compact bilinear pooling (MCB), neural module networks, and recurrent answering units. Our experiments establish how attention helps certain categories more than others, determine which models work better than others, and explain how simple models (e.g. MLP) can surpass more complex models (MCB) by simply learning to answer large, easy question categories.
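One natural way to compensate for over-represented question types is to average accuracy within each category first, and only then across categories, so a large, easy category cannot dominate the overall score. Below is a minimal sketch of that idea; the function name and the plain arithmetic mean are assumptions for illustration, and the paper defines the exact metrics it reports.

    from collections import defaultdict

    def mean_per_category_accuracy(records):
        """records: iterable of (category, is_correct) pairs.
        Averages accuracy within each category, then across categories,
        so over-represented categories do not dominate the score."""
        per_cat = defaultdict(lambda: [0, 0])  # category -> [correct, total]
        for category, is_correct in records:
            per_cat[category][0] += int(is_correct)
            per_cat[category][1] += 1
        per_cat_acc = [correct / total for correct, total in per_cat.values()]
        return sum(per_cat_acc) / len(per_cat_acc)

    # Example: 90 easy "color" questions answered correctly, 10 "counting" questions missed.
    records = [("color", True)] * 90 + [("counting", False)] * 10
    print(mean_per_category_accuracy(records))  # 0.5, versus 0.9 pooled accuracy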

Read the paper (pdf) »