nocaps: novel object captioning at scale

Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, Peter Anderson

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution

Abstract

Image captioning models have achieved impressive results on datasets containing limited visual concepts and large amounts of paired image-caption training data. However, if these models are to ever function in the wild, a much larger variety of visual concepts must be learned, ideally from less supervision. To encourage the development of image captioning models that can learn visual concepts from alternative data sources, such as object detection datasets, we present the first large-scale benchmark for this task. Dubbed 'nocaps', for novel object captioning at scale, our benchmark consists of 166,100 human-generated captions describing 15,100 images from the OpenImages validation and test sets. The associated training data consists of COCO image-caption pairs, plus OpenImages image-level labels and object bounding boxes. Since OpenImages contains many more classes than COCO, nearly 400 object classes seen in test images have no or very few associated training captions (hence, nocaps). We extend existing novel object captioning models to establish strong baselines for this benchmark and provide analysis to guide future work on this task.
Original language: English
Title of host publication: International Conference on Computer Vision 2019
Subtitle of host publication: ICCV 2019
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 8948-8957
Number of pages: 10
Publication status: Published - 2019
Event: International Conference on Computer Vision (2019) - Seoul, Korea, Republic of
Duration: 27 Oct 2019 - 2 Nov 2019

Conference

Conference: International Conference on Computer Vision (2019)
Abbreviated title: ICCV 2019
Country: Korea, Republic of
City: Seoul
Period: 27/10/19 - 2/11/19


Cite this

Agrawal, H., Desai, K., Wang, Y., Chen, X., Jain, R., Johnson, M., ... Anderson, P. (2019). nocaps: novel object captioning at scale. In International Conference on Computer Vision 2019: ICCV 2019 (pp. 8948-8957). Institute of Electrical and Electronics Engineers (IEEE).