SPICE: semantic propositional image caption evaluation

Peter Anderson, Basura Fernando, Mark Johnson, Stephen Gould

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › Research › peer-review

Abstract

There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as which caption-generator best understands colors? and can caption-generators count?.
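The abstract describes SPICE as a metric defined over scene graphs: captions are parsed into semantic propositions (objects, attributes, relations), and candidate and reference tuple sets are compared. As a rough illustration of that idea only — the actual metric builds scene graphs from a dependency parse and matches tuples with WordNet synonym sets, whereas here the tuples are hand-written hypothetical examples and matched exactly — an F-score over propositional tuples can be sketched as:

```python
# Illustrative sketch of SPICE's core idea: score a caption by the overlap
# between its semantic propositions and those of the reference captions.
# NOTE: real SPICE parses captions into scene graphs and uses WordNet
# synonym matching; the tuples below are hand-written for illustration.

def spice_f1(candidate_tuples, reference_tuple_lists):
    """F1 over propositional tuples: (object,), (object, attribute),
    or (subject, relation, object)."""
    cand = set(candidate_tuples)
    # References are pooled into a single tuple set.
    ref = set().union(*reference_tuple_lists)
    matched = cand & ref
    p = len(matched) / len(cand) if cand else 0.0
    r = len(matched) / len(ref) if ref else 0.0
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Hypothetical tuples for a candidate "a young girl standing on a tennis court"
candidate = [("girl",), ("girl", "young"), ("girl", "standing"),
             ("court",), ("court", "tennis"), ("girl", "on", "court")]
# ...and for one reference "a girl on a tennis court holding a racket"
references = [[("girl",), ("court",), ("court", "tennis"),
               ("girl", "on", "court"), ("racket",), ("girl", "hold", "racket")]]

print(round(spice_f1(candidate, references), 3))  # → 0.667
```

Because the score rewards matching propositional content rather than word order, two phrasings of the same scene receive similar scores — which is the property the abstract argues n-gram metrics lack.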

Language: English
Title of host publication: Computer Vision - 14th European Conference, ECCV 2016, Proceedings
Editors: Bastian Leibe, Jiri Matas, Nicu Sebe, Max Welling
Place of Publication: Cham, Switzerland
Publisher: Springer, Springer Nature
Pages: 382-398
Number of pages: 17
Volume: Part V
ISBN (Print): 9783319464534
DOIs: 10.1007/978-3-319-46454-1_24
Publication status: Published - 2016
Event: European Conference on Computer Vision (14th : 2016) - Amsterdam, Netherlands
Duration: 11 Oct 2016 - 14 Oct 2016

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 9909 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: European Conference on Computer Vision (14th : 2016)
Country: Netherlands
City: Amsterdam
Period: 11/10/16 - 14/10/16


Cite this

Anderson, P., Fernando, B., Johnson, M., & Gould, S. (2016). SPICE: semantic propositional image caption evaluation. In B. Leibe, J. Matas, N. Sebe, & M. Welling (Eds.), Computer Vision - 14th European Conference, ECCV 2016, Proceedings (Vol. Part V, pp. 382-398). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9909 LNCS). Cham, Switzerland: Springer, Springer Nature. https://doi.org/10.1007/978-3-319-46454-1_24
Anderson, Peter ; Fernando, Basura ; Johnson, Mark ; Gould, Stephen. / SPICE : semantic propositional image caption evaluation. Computer Vision - 14th European Conference, ECCV 2016, Proceedings. editor / Bastian Leibe ; Jiri Matas ; Nicu Sebe ; Max Welling. Vol. Part V Cham, Switzerland : Springer, Springer Nature, 2016. pp. 382-398 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
@inproceedings{4364bd7af42045ab9303fb972ca8e22c,
title = "SPICE: semantic propositional image caption evaluation",
abstract = "There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as which caption-generator best understands colors? and can caption-generators count?.",
author = "Peter Anderson and Basura Fernando and Mark Johnson and Stephen Gould",
year = "2016",
doi = "10.1007/978-3-319-46454-1_24",
language = "English",
isbn = "9783319464534",
volume = "Part V",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer, Springer Nature",
pages = "382--398",
editor = "Bastian Leibe and Jiri Matas and Nicu Sebe and Max Welling",
booktitle = "Computer Vision - 14th European Conference, ECCV 2016, Proceedings",
address = "Cham, Switzerland",

}

Anderson, P, Fernando, B, Johnson, M & Gould, S 2016, SPICE: semantic propositional image caption evaluation. in B Leibe, J Matas, N Sebe & M Welling (eds), Computer Vision - 14th European Conference, ECCV 2016, Proceedings. vol. Part V, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9909 LNCS, Springer, Springer Nature, Cham, Switzerland, pp. 382-398, European Conference on Computer Vision (14th : 2016), Amsterdam, Netherlands, 11/10/16. https://doi.org/10.1007/978-3-319-46454-1_24

SPICE : semantic propositional image caption evaluation. / Anderson, Peter; Fernando, Basura; Johnson, Mark; Gould, Stephen.

Computer Vision - 14th European Conference, ECCV 2016, Proceedings. ed. / Bastian Leibe; Jiri Matas; Nicu Sebe; Max Welling. Vol. Part V Cham, Switzerland : Springer, Springer Nature, 2016. p. 382-398 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9909 LNCS).


TY - GEN

T1 - SPICE

T2 - semantic propositional image caption evaluation

AU - Anderson, Peter

AU - Fernando, Basura

AU - Johnson, Mark

AU - Gould, Stephen

PY - 2016

Y1 - 2016

N2 - There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as which caption-generator best understands colors? and can caption-generators count?.

AB - There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as which caption-generator best understands colors? and can caption-generators count?.

UR - http://www.scopus.com/inward/record.url?scp=84990036877&partnerID=8YFLogxK

U2 - 10.1007/978-3-319-46454-1_24

DO - 10.1007/978-3-319-46454-1_24

M3 - Conference proceeding contribution

SN - 9783319464534

VL - Part V

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 382

EP - 398

BT - Computer Vision - 14th European Conference, ECCV 2016, Proceedings

A2 - Leibe, Bastian

A2 - Matas, Jiri

A2 - Sebe, Nicu

A2 - Welling, Max

PB - Springer, Springer Nature

CY - Cham, Switzerland

ER -

Anderson P, Fernando B, Johnson M, Gould S. SPICE: semantic propositional image caption evaluation. In Leibe B, Matas J, Sebe N, Welling M, editors, Computer Vision - 14th European Conference, ECCV 2016, Proceedings. Vol. Part V. Cham, Switzerland: Springer, Springer Nature. 2016. p. 382-398. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). https://doi.org/10.1007/978-3-319-46454-1_24