Improving speech emotion recognition based on acoustic words emotion dictionary

Wang Wei, Xinyi Cao, He Li, Lingjie Shen, Yaqin Feng, Paul A. Watters*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)


To improve speech emotion recognition, a U-acoustic words emotion dictionary (AWED) features model is proposed, built on an AWED. The method models emotional information at the acoustic-word level within each emotion class. The top-ranked words for each emotion are selected to generate the AWED vector. The U-AWED model is then constructed by combining utterance-level acoustic features with the AWED features. A support vector machine and a convolutional neural network are employed as the classifiers in our experiments. The results show that the proposed method yields significant improvements in unweighted average recall across all four emotion classification tasks.
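The dictionary-and-concatenation idea in the abstract can be sketched in a few lines. This is a minimal illustrative reading, not the authors' implementation: the function names, top-N selection by simple frequency, and the toy data are all assumptions; the paper's actual ranking criterion and acoustic feature set are not specified here.

```python
# Hypothetical sketch of the AWED idea: (1) per emotion class, pick the
# top-N most frequent words as that class's dictionary; (2) represent an
# utterance by counting how many of its words fall in each class dictionary
# (the AWED vector); (3) concatenate that with utterance-level acoustic
# features for a downstream classifier (e.g. an SVM or CNN).
from collections import Counter

def build_awed(corpus_by_emotion, top_n=2):
    """corpus_by_emotion: {emotion: [word tokens]} -> {emotion: top-N word set}."""
    return {emo: {w for w, _ in Counter(words).most_common(top_n)}
            for emo, words in corpus_by_emotion.items()}

def awed_vector(utterance_words, awed, emotion_order):
    """Count of utterance words appearing in each emotion's dictionary."""
    return [sum(w in awed[emo] for w in utterance_words) for emo in emotion_order]

def combined_features(acoustic_feats, utterance_words, awed, emotion_order):
    """U-AWED-style vector: utterance-level acoustics + AWED counts."""
    return list(acoustic_feats) + awed_vector(utterance_words, awed, emotion_order)

# Toy example with invented data:
corpus = {
    "happy": ["great", "great", "fun", "nice"],
    "angry": ["stop", "stop", "no", "bad"],
}
awed = build_awed(corpus, top_n=2)
order = ["happy", "angry"]
feats = combined_features([0.5, 1.2], ["great", "stop", "stop"], awed, order)
# feats -> [0.5, 1.2, 1, 2]: two acoustic features, then one "happy" hit
# ("great") and two "angry" hits ("stop", "stop").
```

The combined vector could then be fed to any standard classifier; the paper reports results with an SVM and a CNN.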

Original language: English
Pages (from-to): 747-761
Number of pages: 15
Journal: Natural Language Engineering
Issue number: 6
Early online date: 10 Jun 2020
Publication status: Published - 10 Nov 2021
Externally published: Yes


  • Deep learning
  • Emotion dictionary
  • Speech emotion recognition
  • Support vector machine


