
Image Captioning Architectures—Best Practices

Post · Mar 28, 2017 20:17

image captioning

When building a network to caption an image, the point at which the image features are fed in is an important design decision known as ‘binding’. Early binding produces a network that mixes image and language information inside the recurrent layer, whereas late binding keeps the linguistic string and the image vector separate until just before prediction. This study investigates binding systematically and shows that merging the image features with the RNN’s encoding of the caption prefix only at the final prediction step (late binding) gives the best results.
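The distinction is easiest to see in code. Below is a minimal sketch, not the authors’ implementation, of the two bindings in PyTorch; the class names, dimensions, and the choice of an LSTM are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not from the paper).
vocab_size, embed_dim, hidden_dim, img_dim = 10_000, 256, 512, 2048

class InjectCaptioner(nn.Module):
    """Early binding: the image enters the RNN itself, here as a
    pseudo-word prepended to the caption prefix (one inject variant)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feats, word_ids):
        img_tok = self.img_proj(img_feats).unsqueeze(1)   # image as a "word"
        seq = torch.cat([img_tok, self.embed(word_ids)], dim=1)
        h, _ = self.rnn(seq)                              # RNN mixes both modalities
        return self.out(h[:, -1])                         # next-word logits

class MergeCaptioner(nn.Module):
    """Late binding: the RNN sees only words; the image vector is
    merged with the linguistic encoding just before prediction."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim * 2, vocab_size)

    def forward(self, img_feats, word_ids):
        h, _ = self.rnn(self.embed(word_ids))             # purely linguistic encoding
        merged = torch.cat([h[:, -1], self.img_proj(img_feats)], dim=1)
        return self.out(merged)                           # multimodal fusion at the end
```

Note how, in the merge model, the RNN output is a purely linguistic encoding that only becomes a word prediction after fusion with the image, which is exactly the “encoder, not generator” view of RNNs the paper argues for.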

arXiv Abstract

  • Marc Tanti
  • Albert Gatt
  • Kenneth P. Camilleri

When a neural language model is used for caption generation, the image information can be fed to the neural network either by directly incorporating it in a recurrent neural network -- conditioning the language model by injecting image features -- or in a layer following the recurrent neural network -- conditioning the language model by merging the image features. While merging implies that visual features are bound at the end of the caption generation process, injecting can bind the visual features at a variety of stages. In this paper we empirically show that late binding is superior to early binding in terms of different evaluation metrics. This suggests that the different modalities (visual and linguistic) for caption generation should not be jointly encoded by the RNN; rather, the multimodal integration should be delayed to a subsequent stage. Furthermore, this suggests that recurrent neural networks should not be viewed as actually generating text, but only as encoding it for prediction in a subsequent layer.
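Continuing the sketch above (again illustrative, not from the paper): both models consume the same image features and caption prefix and produce next-word logits of the same shape, which is what makes a head-to-head comparison of the two bindings straightforward.

```python
# Hypothetical smoke test of the two sketches above.
imgs = torch.randn(4, img_dim)                     # stand-in for CNN image features
prefix = torch.randint(0, vocab_size, (4, 7))      # batch of 7-token caption prefixes
for model in (InjectCaptioner(), MergeCaptioner()):
    logits = model(imgs, prefix)                   # one next-word distribution per caption
    print(type(model).__name__, logits.shape)      # torch.Size([4, 10000]) for both
```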

Read the paper (pdf) »