Modelling audiovisual translation of non-fiction videos: a multimodal approach to subtitling

Li Pan, Sixin Liao

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


Audiovisual translation (AVT) has grown rapidly alongside the rise of streaming media and the digitisation of information. As a popular form of AVT, subtitling, characterised by low cost and speedy production, is the dominant mode used in informative videos disseminated on streaming media, especially those representing realities, such as documentaries, online interviews, and news reports. While non-fiction clips usually pose challenges for subtitling practice due to their high information density and fast speech rates, they have not received sufficient attention in translation studies. Drawing on Kress and van Leeuwen's Visual Grammar Theory and Bednarek and Caple's framework of Discursive News Values Analysis, the chapter proposes a theoretical framework for the multimodal analysis of information values (MAIV) in subtitling non-fiction videos. Guided by the principle of information-value priority, the proposed framework is applied to several case studies that demonstrate how multimodal analysis of information values can inform the selection of appropriate subtitling strategies. This new framework is expected to address the inherent challenges posed by time and space constraints when dealing with information-rich, multimodal video clips, in the hope of contributing to effective intercultural audiovisual communication in the evolving landscape of new media.
Original language: English
Title of host publication: Multimodality in translation studies
Subtitle of host publication: Media, models, and trends in China
Editors: Li Pan, Xiaoping Wu, Tian Luo, Hong Qian
Place of publication: Abingdon, Oxon
Number of pages: 14
ISBN (Electronic): 9781032650975
ISBN (Print): 9781032646176, 9781032650999
Publication status: Published - 2024

