MM-NodeFormer: Node Transformer Multimodal Fusion for Emotion Recognition in Conversation

Research output: Chapter in book / Conference proceeding › Conference article published in proceedings or book › Academic research › peer-review

3 Citations (Scopus)

Abstract

Emotion Recognition in Conversation (ERC) has great prospects in human-computer interaction and medical consultation. Existing ERC approaches mainly focus on information in the text and speech modalities and often concatenate multimodal features without considering the richness of emotional information in each individual modality. To address this issue, we propose a multimodal network for ERC called MM-NodeFormer. The network leverages the characteristics of different Transformer encoding stages to fuse emotional features from the text, audio, and visual modalities according to their emotional richness. The module treats text as the main modality and audio and visual as auxiliary modalities, exploiting the complementarity between them. We conducted extensive experiments on two public benchmark datasets, IEMOCAP and MELD, achieving accuracies of 74.24% and 67.86%, respectively, outperforming many state-of-the-art approaches.
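The main/auxiliary fusion idea from the abstract can be illustrated with a minimal sketch: the text stream acts as the query, and each auxiliary modality (audio, visual) is aligned to it via cross-attention before the streams are combined. This is only a toy illustration of the general technique, not the authors' MM-NodeFormer implementation; all function names, dimensions, and the final concatenation step are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(text, aux):
    """Let the text (main) modality attend over an auxiliary modality,
    returning auxiliary information aligned to the text time steps.
    (Illustrative only -- not the paper's actual fusion module.)"""
    d = text.shape[-1]
    scores = text @ aux.T / np.sqrt(d)       # (T_text, T_aux) similarity scores
    return softmax(scores, axis=-1) @ aux    # (T_text, d) aligned auxiliary features

rng = np.random.default_rng(0)
text  = rng.standard_normal((6, 16))    # 6 text tokens, 16-dim features
audio = rng.standard_normal((10, 16))   # 10 audio frames
video = rng.standard_normal((4, 16))    # 4 video frames

# Fuse each auxiliary modality into the text stream, then concatenate
fused = np.concatenate([text,
                        cross_attention_fuse(text, audio),
                        cross_attention_fuse(text, video)], axis=-1)
print(fused.shape)  # (6, 48)
```

In contrast to plain feature concatenation, the attention step lets the dominant (text) modality select which auxiliary frames contribute at each token, which is the kind of modality-aware fusion the abstract argues for.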

Original language: English
Title of host publication: English
Pages: 4069-4073
Number of pages: 5
DOIs
Publication status: Published - Sept 2024
Event: 25th Interspeech Conference 2024 - Kos Island, Greece
Duration: 1 Sept 2024 – 5 Sept 2024

Publication series

Name: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Publisher: International Speech Communication Association
ISSN (Print): 2308-457X

Conference

Conference: 25th Interspeech Conference 2024
Country/Territory: Greece
City: Kos Island
Period: 1/09/24 – 5/09/24

Keywords

  • emotion recognition in conversation
  • feature fusion
  • multimodal network

ASJC Scopus subject areas

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modelling and Simulation
