Improving speech emotion recognition based on acoustic words emotion dictionary

Wang Wei, Xinyi Cao, He Li, Lingjie Shen, Yaqin Feng, Paul A. Watters*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

To improve speech emotion recognition, a U-acoustic words emotion dictionary (U-AWED) features model is proposed, built on an acoustic words emotion dictionary (AWED). The method models emotional information at the acoustic word level across the different emotion classes. The top-ranked words for each emotion class are selected to generate the AWED vector, and the U-AWED model is then constructed by combining utterance-level acoustic features with the AWED features. A support vector machine and a convolutional neural network are employed as classifiers in our experiments. The results show that the proposed method yields a significant improvement in unweighted average recall across all four emotion classification tasks.
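The abstract does not spell out how acoustic words are defined or how the AWED and U-AWED features are assembled, so the following is only a minimal sketch of the general idea, not the authors' implementation. It assumes "acoustic words" are cluster labels from k-means over frame-level acoustic features, that the dictionary keeps the most frequent words per emotion class, and that utterance-level features are simple frame statistics; all names, parameters, and the toy data are illustrative.

```python
# Sketch of a U-AWED-style feature pipeline (assumptions noted above).
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.svm import SVC

RNG = np.random.default_rng(0)
N_WORDS = 64          # size of the acoustic-word codebook (assumed)
TOP_K = 10            # top-ranked words kept per emotion class (assumed)
EMOTIONS = ["angry", "happy", "neutral", "sad"]

def acoustic_words(frames, codebook):
    """Map frame-level features to discrete acoustic-word indices."""
    return codebook.predict(frames)

def build_awed(train_frames, train_labels, codebook):
    """Select the TOP_K most frequent acoustic words for each emotion class."""
    awed = {}
    for emo in EMOTIONS:
        counts = Counter()
        for frames, label in zip(train_frames, train_labels):
            if label == emo:
                counts.update(acoustic_words(frames, codebook))
        awed[emo] = [w for w, _ in counts.most_common(TOP_K)]
    return awed

def awed_vector(frames, awed, codebook):
    """Count how often each emotion's top words occur in this utterance."""
    words = Counter(acoustic_words(frames, codebook))
    vec = []
    for emo in EMOTIONS:
        vec.extend(words[w] for w in awed[emo])
    return np.array(vec, dtype=float)

def utterance_features(frames):
    """Utterance-level statistics over frame features (mean and std)."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

# Toy data standing in for real frame-level acoustic features (e.g. MFCCs).
train_frames = [RNG.normal(size=(100, 13)) for _ in range(40)]
train_labels = [EMOTIONS[i % 4] for i in range(40)]

codebook = KMeans(n_clusters=N_WORDS, n_init=10, random_state=0)
codebook.fit(np.vstack(train_frames))

awed = build_awed(train_frames, train_labels, codebook)

# U-AWED: concatenate utterance-level features with the AWED vector.
X = np.stack([
    np.concatenate([utterance_features(f), awed_vector(f, awed, codebook)])
    for f in train_frames
])
y = np.array(train_labels)

clf = SVC(kernel="rbf").fit(X, y)   # SVM classifier, one of the two used in the paper
print(clf.score(X, y))
```

A CNN classifier, as also used in the paper, could be swapped in by feeding the same concatenated feature vectors to a small fully connected or 1-D convolutional network instead of the SVM.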

Original language: English
Number of pages: 15
Journal: Natural Language Engineering
DOIs
Publication status: E-pub ahead of print - 10 Jun 2020
Externally published: Yes

Keywords

  • Deep learning
  • Emotion dictionary
  • Speech emotion recognition
  • Support vector machine
