MixedEmotions: an open-source toolbox for multimodal emotion analysis

Paul Buitelaar, Ian D. Wood, Sapna Negi, Mihael Arcan, John P. McCrae, Andrejs Abele, Cécile Robin, Vladimir Andryushechkin, Housam Ziad, Hesam Sagha*, Maximilian Schmitt, Björn W. Schuller, J. Fernando Sánchez-Rada, Carlos A. Iglesias, Carlos Navarro, Andreas Giefer, Nicolaus Heise, Vincenzo Masucci, Francesco A. Danza, Ciro Caterino, Pavel Smrž, Michal Hradis, Filip Povolný, Marek Klimeś, Pavel Matějka, Giovanni Tummarello

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

35 Citations (Scopus)

Abstract

Recently, there has been an increasing tendency to embed functionality for recognizing emotions from user-generated media content in automated systems such as call-centre operations, recommendations, and assistive technologies, providing richer and more informative user and content profiles. To date, however, adding such functionality has been a tedious, costly, and time-consuming effort, requiring the identification and integration of diverse tools with diverse interfaces as required by the use case at hand. The MixedEmotions Toolbox addresses this need by providing tools for text, audio, video, and linked data processing within an easily integrable plug-and-play platform. These functionalities include: 1) for text processing: emotion and sentiment recognition; 2) for audio processing: emotion, age, and gender recognition; 3) for video processing: face detection and tracking, emotion recognition, facial landmark localization, head pose estimation, face alignment, and body pose estimation; and 4) for linked data: knowledge graph integration. Moreover, the MixedEmotions Toolbox is open source and free. In this paper, we present the toolbox in the context of the existing landscape and provide a range of detailed benchmarks on standard test-beds showing its state-of-the-art performance. Furthermore, three real-world use cases demonstrate its effectiveness: emotion-driven smart TV, call-centre monitoring, and brand reputation analysis.
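To give a sense of the plug-and-play integration the abstract describes, the minimal Python sketch below shows how a client could query a text emotion-analysis web service of the kind such a platform exposes. The endpoint URL, query parameter, and response fields are hypothetical illustrations for this sketch, not the documented MixedEmotions API.

```python
# Hypothetical sketch: querying a text emotion-analysis service exposed by a
# plug-and-play emotion-analysis platform. The endpoint, parameter name, and
# response format below are illustrative assumptions, not the MixedEmotions API.
import requests

SERVICE_URL = "http://localhost:5000/api/emotion"  # hypothetical endpoint


def analyse_text(text: str) -> dict:
    """Send a piece of text to the (hypothetical) emotion service and return
    its JSON response, e.g. a mapping of emotion labels to scores."""
    response = requests.get(SERVICE_URL, params={"input": text}, timeout=10)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    result = analyse_text("I am thrilled with the new release!")
    print(result)  # e.g. {"joy": 0.82, "sadness": 0.03, ...} (illustrative)
```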

Original language: English
Pages (from-to): 2454-2465
Number of pages: 12
Journal: IEEE Transactions on Multimedia
Volume: 20
Issue number: 9
Early online date: 25 Jan 2018
DOIs
Publication status: Published - Sept 2018
Externally published: Yes

Keywords

  • affective computing
  • audio processing
  • emotion analysis
  • linked data
  • open source toolbox
  • text processing
  • video processing
