Face-Cap

image captioning using facial expression analysis

Omid Mohamad Nezami*, Mark Dras, Peter Anderson, Leonard Hamey

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution

3 Citations (Scopus)

Abstract

Image captioning is the process of generating a natural language description of an image. Most current image captioning models, however, do not take into account the emotional aspects of an image, which are highly relevant to the activities and interpersonal relationships it depicts. Towards developing a model that can produce human-like captions incorporating these aspects, we use facial expression features extracted from images containing human faces, with the aim of improving the descriptive ability of the model. In this work, we present two variants of our Face-Cap model, which embed facial expression features in different ways to generate image captions. On all standard evaluation metrics, our Face-Cap models outperform a state-of-the-art baseline model for image captioning when applied to a caption dataset extracted from the standard Flickr30K dataset, consisting of around 11K images containing faces. An analysis of the captions finds that, perhaps surprisingly, the improvement in caption quality appears to come not from the addition of adjectives linked to emotional aspects of the images, but from greater variety in the actions described in the captions. Code related to this paper is available at: https://github.com/omidmn/Face-Cap.
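As an illustrative sketch only of the general idea the abstract describes, the snippet below injects a facial-expression probability vector into a caption decoder's input alongside visual features. The function name, feature dimensions, and the simple concatenation scheme are assumptions for illustration, not the paper's actual architecture (the paper presents two different embedding variants).

```python
import numpy as np

def face_cap_step(image_feat, face_expr, prev_word_emb, W, b):
    """One toy decoder step: concatenate image features, a facial-expression
    probability vector, and the previous word embedding, then apply a linear
    layer to produce unnormalized scores over the vocabulary."""
    x = np.concatenate([image_feat, face_expr, prev_word_emb])
    return W @ x + b

rng = np.random.default_rng(0)
image_feat = rng.normal(size=512)        # e.g. CNN image features (dim assumed)
# distribution over basic facial expressions (7 classes assumed)
face_expr = np.array([0.70, 0.10, 0.05, 0.05, 0.05, 0.03, 0.02])
prev_word_emb = rng.normal(size=300)     # previous word embedding (dim assumed)

vocab_size = 1000
W = rng.normal(size=(vocab_size, 512 + 7 + 300))
b = np.zeros(vocab_size)

scores = face_cap_step(image_feat, face_expr, prev_word_emb, W, b)
next_word = int(np.argmax(scores))       # greedy choice of the next word id
```

In a real captioning model the linear layer would be replaced by an LSTM or similar recurrent decoder, but the core point the sketch shows is the same: the expression features become part of the decoder's per-step input.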

Original language: English
Title of host publication: Machine Learning and Principles and Practice of Knowledge Discovery in Databases
Subtitle of host publication: European Conference, ECML-PKDD 2018. Proceedings, Part I
Editors: Michele Berlingerio, Francesco Bonchi, Thomas Gärtner, Neil Hurley, Georgiana Ifrim
Place of Publication: Cham, Switzerland
Publisher: Springer, Springer Nature
Pages: 226-240
Number of pages: 15
ISBN (Electronic): 9783030109257
ISBN (Print): 9783030109240
DOIs: https://doi.org/10.1007/978-3-030-10925-7_14
Publication status: Published - 2019
Event: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML-PKDD 2018 - Dublin, Ireland
Duration: 10 Sep 2018 – 14 Sep 2018

Publication series

Name: Lecture Notes in Artificial Intelligence
Publisher: Springer
Volume: 11051
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML-PKDD 2018
Country: Ireland
City: Dublin
Period: 10/09/18 – 14/09/18

Keywords

  • Image captioning
  • Facial expression recognition
  • Sentiment analysis
  • Deep learning

Cite this

Mohamad Nezami, O., Dras, M., Anderson, P., & Hamey, L. (2019). Face-Cap: image captioning using facial expression analysis. In M. Berlingerio, F. Bonchi, T. Gärtner, N. Hurley, & G. Ifrim (Eds.), Machine Learning and Principles and Practice of Knowledge Discovery in Databases: European Conference, ECML-PKDD 2018. Proceedings, Part I (pp. 226-240). (Lecture Notes in Artificial Intelligence; Vol. 11051). Cham, Switzerland: Springer, Springer Nature. https://doi.org/10.1007/978-3-030-10925-7_14