TY - GEN
T1 - nocaps: novel object captioning at scale
T2 - International Conference on Computer Vision (2019)
AU - Agrawal, Harsh
AU - Desai, Karan
AU - Wang, Yufei
AU - Chen, Xinlei
AU - Jain, Rishabh
AU - Johnson, Mark
AU - Batra, Dhruv
AU - Parikh, Devi
AU - Lee, Stefan
AU - Anderson, Peter
PY - 2019
Y1 - 2019
AB - Image captioning models have achieved impressive results on datasets containing limited visual concepts and large amounts of paired image-caption training data. However, if these models are to ever function in the wild, a much larger variety of visual concepts must be learned, ideally from less supervision. To encourage the development of image captioning models that can learn visual concepts from alternative data sources, such as object detection datasets, we present the first large-scale benchmark for this task. Dubbed 'nocaps', for novel object captioning at scale, our benchmark consists of 166,100 human-generated captions describing 15,100 images from the Open Images validation and test sets. The associated training data consists of COCO image-caption pairs, plus Open Images image-level labels and object bounding boxes. Since Open Images contains many more classes than COCO, nearly 400 object classes seen in test images have no or very few associated training captions (hence, nocaps). We extend existing novel object captioning models to establish strong baselines for this benchmark and provide analysis to guide future work.
UR - http://openaccess.thecvf.com/ICCV2019.py
UR - http://www.scopus.com/inward/record.url?scp=85081924526&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2019.00904
DO - 10.1109/ICCV.2019.00904
M3 - Conference proceeding contribution
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 8947
EP - 8956
BT - Proceedings - 2019 International Conference on Computer Vision, ICCV 2019
PB - Institute of Electrical and Electronics Engineers (IEEE)
CY - Piscataway, NJ
Y2 - 27 October 2019 through 2 November 2019
ER -