Words are hard. It’s one thing to generate a caption for an image (a difficult problem); it’s another to generate human-like captions (a very difficult problem). Patterns of speech are complex, and the uncanny valley is wide. With all the effort being put into GANs lately, it was only a matter of time before someone used them to generate better captions. Here, the discriminator compares a set of generated sentences against both the image and each other, while the generator produces categorical variables (i.e. words). Because sampling discrete words is not differentiable, the two are combined via a Gumbel-Softmax relaxation, which makes it possible to achieve results comparable to the state of the art but with greater linguistic diversity.
Generate Human-like Image Captions
Post · Mar 31, 2017
While strong progress has been made in image captioning in recent years, machine and human captions are still quite distinct. A closer look reveals that this is due to deficiencies in the generated word distribution, vocabulary size, and a strong bias in the generators towards frequent captions. Furthermore, humans -- rightfully so -- generate multiple, diverse captions, due to the inherent ambiguity in the captioning task, which is not considered in today's systems. To address these challenges, we change the training objective of the caption generator from reproducing ground-truth captions to generating a set of captions that is indistinguishable from human-generated captions. Instead of handcrafting such a learning target, we employ adversarial training in combination with an approximate Gumbel sampler to implicitly match the generated distribution to the human one. While our method achieves performance comparable to the state of the art in terms of the correctness of the captions, we generate a set of diverse captions that are significantly less biased and better match human word statistics in several aspects.
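The "approximate Gumbel sampler" mentioned above refers to the Gumbel-Softmax trick: adding Gumbel(0, 1) noise to the logits and applying a temperature-controlled softmax yields a differentiable, approximately one-hot sample from a categorical distribution (here, over words), so gradients from the discriminator can flow back into the generator. A minimal NumPy sketch of the sampling step (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=None):
    """Draw an approximately one-hot sample from Categorical(softmax(logits)).

    Lower temperatures make the sample closer to a hard one-hot vector;
    higher temperatures make it smoother (and gradients less noisy).
    """
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
    u = rng.uniform(1e-10, 1.0, size=np.shape(logits))
    gumbel = -np.log(-np.log(u))
    # Perturb the logits, then apply a tempered softmax.
    y = (np.asarray(logits) + gumbel) / temperature
    e = np.exp(y - y.max())          # subtract max for numerical stability
    return e / e.sum()

# Example: sample a "word" from a 3-word vocabulary with probabilities
# proportional to [0.1, 0.2, 0.7].
sample = gumbel_softmax(np.log([0.1, 0.2, 0.7]), temperature=0.5)
```

In a full captioning GAN the generator would produce logits over the vocabulary at each time step, and this relaxed sample (rather than a hard argmax) would be fed to the discriminator during training, keeping the whole pipeline differentiable.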