simNet: Stepwise Image-Topic Merging Network for Generating Detailed and Comprehensive Image Captions

Abstract

The encoder-decoder framework has shown recent success in image captioning. Visual attention, which captures fine-grained visual detail, and semantic attention, which covers a comprehensive set of concepts, have previously been proposed separately to ground the caption in the image. In this paper, we propose the Stepwise Image-Topic Merging Network (simNet), which exploits both kinds of attention at the same time. At each time step of caption generation, the decoder adaptively merges the attentive information from the extracted topics and the image according to the generated context, so that the visual and the semantic information are effectively combined. The proposed approach is evaluated on two benchmark datasets and achieves state-of-the-art performance.
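The adaptive merging described above can be sketched as a learned gate that interpolates between a visual attention context and a topic attention context at each decoding step. The following is a minimal illustrative sketch, not the authors' exact formulation: the function name `merge_step`, the particular bilinear attention scoring, and the sigmoid gate over the concatenated state and contexts are all assumptions for demonstration.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def merge_step(h, img_feats, topic_embs, W_v, W_t, w_gate):
    """One decoding step of the (hypothetical) image-topic merge.

    h          : (d,)    current decoder hidden state
    img_feats  : (R, d)  image region features (visual attention targets)
    topic_embs : (K, d)  extracted topic embeddings (semantic attention targets)
    W_v, W_t   : (d, d)  attention scoring weights (assumed bilinear form)
    w_gate     : (3d,)   gate weights over [h; visual ctx; topic ctx]
    """
    # visual attention: attend over image regions given the context h
    alpha = softmax(img_feats @ W_v @ h)
    v_ctx = alpha @ img_feats

    # semantic attention: attend over extracted topics given the context h
    beta = softmax(topic_embs @ W_t @ h)
    t_ctx = beta @ topic_embs

    # adaptive merge: a sigmoid gate decides, per step, how much to rely
    # on the visual context versus the topic context
    g = 1.0 / (1.0 + np.exp(-(w_gate @ np.concatenate([h, v_ctx, t_ctx]))))
    return g * v_ctx + (1.0 - g) * t_ctx
```

Because the gate is recomputed from the generated context at every time step, the decoder can lean on visual detail for concrete words and on topics for broader, comprehensive content.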

Publication
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018
Xuancheng Ren
