Abstract
Multimodal sentiment analysis faces significant challenges in real-world applications due to the frequent absence of modalities caused by privacy concerns, device limitations, or security policies. This article introduces a generative random modality dropout framework (RMDG), designed to enhance the robustness and performance of multimodal models under various modality-absence scenarios. During training, RMDG applies random modality dropout to simulate missing modalities and uses a generative approach to predict and regenerate the key features of the missing ones from the remaining modalities, allowing the model to adapt to dynamic and unpredictable modality absences. This strategy eliminates the need for separate training or adjustments for each modality combination and significantly improves the efficiency and accuracy of sentiment analysis on incomplete multimodal data. Extensive experiments demonstrate that RMDG outperforms existing methods, achieving superior performance under both complete and missing modality conditions.
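The abstract does not give implementation details, but the core training trick it describes — randomly zeroing out whole modalities so the model learns to cope with any missing-modality combination — can be illustrated with a minimal sketch. Everything here (function name, dictionary layout, dropout probability) is an assumption for illustration, not the paper's actual code:

```python
import numpy as np

def random_modality_dropout(features, p_drop=0.5, rng=None):
    """Illustrative sketch: randomly zero out whole modalities during
    training, always keeping at least one modality intact.

    features: dict mapping modality name -> feature vector (np.ndarray).
    Returns (masked_features, dropped_names).
    """
    rng = rng or np.random.default_rng()
    names = list(features)
    # Decide independently per modality whether to simulate its absence.
    dropped = [m for m in names if rng.random() < p_drop]
    # Never drop everything: restore one random modality if all were dropped.
    if len(dropped) == len(names):
        dropped.remove(names[rng.integers(len(names))])
    masked = {m: (np.zeros_like(v) if m in dropped else v)
              for m, v in features.items()}
    return masked, dropped

# Example: three modalities, as in typical text/audio/video sentiment setups.
feats = {"text": np.ones(4), "audio": np.ones(4), "video": np.ones(4)}
masked, dropped = random_modality_dropout(feats, rng=np.random.default_rng(0))
```

In the full method as described, a generative module would then reconstruct the zeroed features of the dropped modalities from the surviving ones before fusion; that reconstruction step is omitted here.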
| Original language | English |
|---|---|
| Pages (from-to) | 62-69 |
| Number of pages | 8 |
| Journal | IEEE Intelligent Systems |
| Volume | 40 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 2025 |
Fingerprint
Dive into the research topics of 'A generative random modality dropout framework for robust multimodal emotion recognition'. Together they form a unique fingerprint.