Acknowledgement
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2022R1F1A1074696).