Style-aware two-stage learning framework for video captioning

Yunchuan Ma, Zheng Zhu, Yuankai Qi, Amin Beheshti, Ying Li, Laiyun Qing*, Guorong Li*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Significant progress has been made in video captioning in recent years. However, most existing methods learn directly from all given captions without distinguishing their styles. The large diversity among these captions may introduce ambiguity into model learning. To address this issue, we propose a style-aware two-stage learning framework. In the first stage, the model is trained with captions of separate styles, including length style (short, medium, long), action style (single action or multiple actions), and object style (one object or more). For efficiency, a shared model with multiple individual style vectors is learned. In the second stage, a video style encoder is devised to capture style information from the input video, and it outputs a guidance signal indicating how to utilize the style vectors for the final caption generation. Without bells and whistles, our method achieves state-of-the-art performance on three widely used public datasets: MSVD, MSR-VTT and VATEX. The source code and trained models will be made available to the public.
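To make the two-stage idea concrete, the sketch below illustrates one possible reading of the abstract: a shared captioning backbone with one learnable vector per caption style, plus a video style encoder that predicts weights over those vectors to condition generation. This is a minimal, hypothetical sketch based only on the abstract; all module names, dimensions, and design choices (mean pooling, a GRU decoder, softmax mixing) are assumptions and not the authors' implementation.

```python
import torch
import torch.nn as nn


class StyleAwareCaptioner(nn.Module):
    """Illustrative sketch of a shared captioner with per-style vectors."""

    def __init__(self, video_dim=1024, hidden_dim=512, num_styles=6, vocab_size=10000):
        super().__init__()
        # Shared captioning backbone (stage 1): one decoder reused across all styles.
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)
        # One learnable vector per caption style (length / action / object styles).
        self.style_vectors = nn.Parameter(torch.randn(num_styles, hidden_dim))
        # Video style encoder (stage 2): predicts how to weight the style vectors.
        self.style_encoder = nn.Sequential(
            nn.Linear(video_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_styles),
        )

    def forward(self, video_feats, caption_embeds):
        # video_feats: (B, T, video_dim); caption_embeds: (B, L, hidden_dim)
        pooled = video_feats.mean(dim=1)                       # (B, video_dim)
        weights = self.style_encoder(pooled).softmax(dim=-1)   # (B, num_styles)
        style = weights @ self.style_vectors                   # (B, hidden_dim)
        # Condition the decoder on video content plus the predicted style mix.
        init = (self.video_proj(pooled) + style).unsqueeze(0)  # (1, B, hidden_dim)
        hidden, _ = self.decoder(caption_embeds, init)
        return self.out(hidden)                                # (B, L, vocab_size)
```

In this reading, the first stage would train the shared backbone and style vectors with style labels fixed per caption, and the second stage would train only the style encoder to produce the mixing weights; the paper itself should be consulted for the actual architecture and training procedure.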

Original language: English
Article number: 112258
Pages (from-to): 1-11
Number of pages: 11
Journal: Knowledge-Based Systems
Volume: 301
DOIs
Publication status: Published - 9 Oct 2024

Bibliographical note

Copyright the Author(s) 2024. Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.

Keywords

  • Video captioning
  • Controllable
  • Style-aware
  • Two-stage learning
