TY - GEN
T1 - Face-Cap: image captioning using facial expression analysis
T2 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML-PKDD 2018
AU - Mohamad Nezami, Omid
AU - Dras, Mark
AU - Anderson, Peter
AU - Hamey, Leonard
PY - 2019
Y1 - 2019
AB - Image captioning is the process of generating a natural language description of an image. Most current image captioning models, however, do not take into account the emotional aspect of an image, which is very relevant to activities and interpersonal relationships represented therein. Towards developing a model that can produce human-like captions incorporating these, we use facial expression features extracted from images including human faces, with the aim of improving the descriptive ability of the model. In this work, we present two variants of our Face-Cap model, which embed facial expression features in different ways, to generate image captions. Using all standard evaluation metrics, our Face-Cap models outperform a state-of-the-art baseline model for generating image captions when applied to an image caption dataset extracted from the standard Flickr 30K dataset, consisting of around 11K images containing faces. An analysis of the captions finds that, perhaps surprisingly, the improvement in caption quality appears to come not from the addition of adjectives linked to emotional aspects of the images, but from more variety in the actions described in the captions. Code related to this paper is available at: https://github.com/omidmn/Face-Cap.
KW - Image captioning
KW - Facial expression recognition
KW - Sentiment analysis
KW - Deep learning
UR - http://www.scopus.com/inward/record.url?scp=85061138662&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-10925-7_14
DO - 10.1007/978-3-030-10925-7_14
M3 - Conference proceeding contribution
AN - SCOPUS:85061138662
SN - 9783030109240
T3 - Lecture Notes in Artificial Intelligence
SP - 226
EP - 240
BT - Machine Learning and Knowledge Discovery in Databases
A2 - Berlingerio, Michele
A2 - Bonchi, Francesco
A2 - Gärtner, Thomas
A2 - Hurley, Neil
A2 - Ifrim, Georgiana
PB - Springer, Springer Nature
CY - Cham, Switzerland
Y2 - 10 September 2018 through 14 September 2018
ER -