Speaker-independent visual speech recognition with the inception v3 model

Timothy Santos, Andrew Abel, Nick Wilson, Yan Xu

    Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution

    Abstract

    The natural process of understanding speech involves combining auditory and visual cues. CNN-based lip reading systems have become very popular in recent years. However, many of these systems treat lip reading as a black-box problem, with limited detailed performance analysis. In this paper, we performed transfer learning by training the Inception v3 CNN model, which has pre-trained weights produced from ImageNet, with the GRID corpus, delivering good speech recognition results, with 0.61 precision, 0.53 recall, and 0.51 F1-score. The lip reading model was able to automatically learn pertinent features, demonstrated using visualisation, and achieve good speaker-independent results. We also identify limitations that match those of humans, therefore limiting potential deep learning performance in real-world situations.
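
    For readers unfamiliar with this style of transfer learning, the sketch below shows one plausible way to fine-tune an ImageNet-pretrained Inception v3 backbone for frame-level lip reading classification in Keras. The input resolution, class count, and training hyperparameters are illustrative assumptions, not the authors' reported configuration.

# Minimal sketch (not the authors' exact setup): transfer learning from an
# ImageNet-pretrained Inception v3 backbone to a lip reading classifier.
# NUM_CLASSES, input size, and hyperparameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 51              # assumption: one class per word in the GRID vocabulary
INPUT_SHAPE = (299, 299, 3)   # Inception v3's default input resolution

# Load the convolutional base with ImageNet weights, dropping the 1000-way ImageNet head.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=INPUT_SHAPE
)
base.trainable = False        # freeze pretrained features for the initial training phase

# Attach a small classification head for the lip reading task.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# train_ds / val_ds would be tf.data pipelines of cropped mouth-region frames from GRID,
# split so that test speakers never appear in training (speaker-independent evaluation).
# model.fit(train_ds, validation_data=val_ds, epochs=10)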
    Original language: English
    Title of host publication: Proceedings of SLT 2021
    Subtitle of host publication: IEEE Spoken Language Technology Workshop
    Publisher: Institute of Electrical and Electronics Engineers (IEEE)
    Publication status: Accepted/In press - 22 Jan 2021
    Event: IEEE Workshop on Spoken Language Technology - Shenzhen, China
    Duration: 19 Jan 2021 – 22 Jan 2021
    http://2021.ieeeslt.org/

    Conference

    Conference: IEEE Workshop on Spoken Language Technology
    Abbreviated title: SLT2021
    Country: China
    City: Shenzhen
    Period: 19/01/21 – 22/01/21
    Internet address: http://2021.ieeeslt.org/

    Keywords

    • deep learning
    • lip-reading
    • visual speech recognition
