Image captioning using facial expression and attention

Omid Mohamad Nezami, Mark Dras, Stephen Wan, Cécile Paris

Research output: Contribution to journal › Article › peer-review

Abstract

Benefiting from advances in machine vision and natural language processing, current image captioning systems can generate detailed visual descriptions. For the most part, these descriptions are objective characterisations of the image; although some models incorporate subjective aspects related to the observer's view of the image, such as sentiment, current models usually do not consider the emotional content of images during caption generation. This paper addresses this issue by proposing novel image captioning models that use facial expression features to generate image captions. The models generate captions using long short-term memory networks, applying facial features in addition to other visual features at different time steps. We compare a comprehensive collection of image captioning models, with and without facial features, using all standard evaluation metrics. The evaluation metrics indicate that applying facial features with an attention mechanism achieves the best performance on an image caption dataset extracted from the standard Flickr 30K dataset, consisting of around 11K images containing faces, producing more expressive and better-correlated image captions. An analysis of the generated captions finds that, perhaps unexpectedly, the improvement in caption quality appears to come not from the addition of adjectives linked to emotional aspects of the images, but from greater variety in the actions described in the captions.
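The decoding scheme the abstract describes — an LSTM that, at each time step, consumes facial expression features alongside attended visual features — could be sketched roughly as follows. This is a minimal PyTorch illustration under assumptions, not the authors' implementation: the module name, all dimensions, and the additive-attention form are hypothetical choices made for the sketch.

```python
import torch
import torch.nn as nn

class FaceAttendCaptioner(nn.Module):
    """Hypothetical sketch: an LSTM caption decoder that attends over
    regional visual features and, at every time step, also receives a
    fixed facial-expression feature vector."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512,
                 visual_dim=512, face_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # additive (Bahdanau-style) attention over image regions
        self.att_v = nn.Linear(visual_dim, hidden_dim)
        self.att_h = nn.Linear(hidden_dim, hidden_dim)
        self.att_score = nn.Linear(hidden_dim, 1)
        # the LSTM input concatenates word embedding, attended visual
        # context, and the facial expression features
        self.lstm = nn.LSTMCell(embed_dim + visual_dim + face_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, regions, face_feat, captions):
        # regions:   (B, R, visual_dim)  regional CNN features
        # face_feat: (B, face_dim)       facial expression features
        # captions:  (B, T)              ground-truth token ids
        B, T = captions.shape
        h = regions.new_zeros(B, self.lstm.hidden_size)
        c = regions.new_zeros(B, self.lstm.hidden_size)
        emb = self.embed(captions)                      # (B, T, embed_dim)
        logits = []
        for t in range(T):
            # attention weights over regions, conditioned on previous h
            scores = self.att_score(torch.tanh(
                self.att_v(regions) + self.att_h(h).unsqueeze(1)))  # (B, R, 1)
            alpha = torch.softmax(scores, dim=1)
            context = (alpha * regions).sum(dim=1)      # (B, visual_dim)
            x = torch.cat([emb[:, t], context, face_feat], dim=-1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)               # (B, T, vocab_size)
```

The key design point mirrored from the abstract is that the facial features enter the recurrence at every time step rather than only at initialisation, so emotional content can influence each generated word.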

Original language: English
Pages (from-to): 661-689
Number of pages: 29
Journal: Journal of Artificial Intelligence Research
Volume: 68
DOIs
Publication status: Published - Aug 2020

Keywords

  • natural language
  • vision
  • neural networks
